Search Results

Search found 2446 results on 98 pages for 'darren green'.

Page 57/98 | < Previous Page | 53 54 55 56 57 58 59 60 61 62 63 64  | Next Page >

  • XNA: calculate normals for line segments

    - by Gerhman
    I am quite new to 3D graphics programming and so far only understand that normals somehow define the direction a vertex faces, and therefore the direction in which light is reflected. I have no idea how they are calculated, though, only that they are defined by a Vector3. For a visualizer that I am creating I am importing a bunch of coordinates which represent layer upon layer of line segments. At the moment I am only using a vertex buffer, adding the start and end point of each line and then rendering a line list. The thing is that I now need to calculate the normals for the vertices of these line segments so that I can get some realistic lighting. I have no idea how to calculate these normals, but I do know they all face sideways and not up or down. All I have to calculate them are the start and end positions of each line segment. The image below represents what I think I need to do for an example layer: the red arrows represent the normals that should be calculated, the blue text represents the coordinates of the vertices and the green numbers represent their indices. I would greatly appreciate it if someone could explain how I should calculate these normals.
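
    A hedged sketch of one way to do this (not from the original post): for a segment lying flat in a layer, with Y as the up axis, a sideways-facing normal is just the segment direction rotated a quarter turn around Y. Flip the sign if it should face the other side.

        // Minimal sketch, assuming XNA's Vector3 and Y-up layers.
        Vector3 SegmentNormal(Vector3 start, Vector3 end)
        {
            Vector3 direction = end - start;
            // Rotate the horizontal direction 90 degrees around the Y axis.
            Vector3 normal = new Vector3(-direction.Z, 0f, direction.X);
            normal.Normalize();   // in-place normalize (XNA instance method)
            return normal;
        }

    Assign the same normal to both the start and end vertex of a segment; averaging the normals of the two segments that meet at a shared vertex gives smoother lighting at corners.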

    Read the article

  • Do we have enough time to build an electric car future?

    - by julien.groues
    A recent article from Greenbang has posed the question 'Do we have enough time to build an electric car future?'. The writer argues that, although the future of transport might lie with electric cars, there is concern about whether we'll be able to build the market and infrastructure required to support them before carbon and oil constraints create difficulties in powering the vehicles. Of course, the increasing use of electric vehicles (EVs) is going to put excessive pressure on energy grids, as large volumes of electricity will need to be directed to charging points, which in turn must handle fluctuating demand at peak times. EVs are increasing in popularity as a sustainable method of transport to reduce carbon consumption, and electric utilities will have the opportunity, and the challenge, to quickly determine the best methods to fuel these vehicles and accommodate the associated increases in demand for energy. Critically, efficient software is required to provide diagnostic and predictive capabilities related to EV refuelling - for example, anticipated electricity flow will need to be addressed as the number of EVs on the road increases, and electricity will need to be directed to specific areas on demand as vehicles attempt to recharge en masse. But a smart grid infrastructure can meet these demands, intelligently. The implementation of a smart grid is not in the distant future; it is an achievable reality for utilities via simple installation of new software and technologies, which can be done incrementally for those facing existing legacy systems or concerned with upfront costs. The smart grid is integral to the monitoring and control of energy use as well as the future-proofing of the energy grid. A smart grid will be critical to meeting the electricity requirements of new EVs and will ensure their successful deployment by providing a reliable foundation for the data handling required to record and manage electricity distribution - from recording and assessing energy usage, to analysing data and sharing information with consumers via green billing. http://www.greenbang.com/do-we-have-enough-time-to-build-an-electric-car-future_14248.html

    Read the article

  • Prevent Looping and Inefficient Rule Executions by C2B2

    - by JuergenKress
    This recipe, taken from the recently published Oracle SOA Suite 11g Performance Cookbook, gives guidance on how to avoid rule executions that will loop, potentially indefinitely. We'll use an inbound XML fact and a local RL fact as an example.

    Getting ready: You'll need access to a SOA composite containing an Oracle Business Rules component in JDeveloper to apply this recipe. We'll assume you have an XSD schema with an input type RequestInput containing input and bonus String types, and an output String value called output in a type ResponseOutput. These aren't efficient but serve as an example. We'll step through adding a rule to a composite and creating an RL fact.

    How to do it: Open a SOA composite. Right-click on the project and select Business Rules (Service Components); use the search box if it is not immediately available. Give the rule a name and click the green plus icon to add the RequestInput to the input and ResponseOutput to the output types. Read the complete article here.

    SOA & BPM Partner Community: For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Technorati Tags: C2B2, looping, SOA Community, Oracle SOA, Oracle BPM, Community, OPN, Jürgen Kress

    Read the article

  • Why the R# Method Group Refactoring is Evil

    - by Liam McLennan
    The refactoring I'm talking about is recommended by ReSharper when it sees a lambda that consists entirely of a method call that is passed the object that is the parameter to the lambda. Here is an example:

        public class IWishIWasAScriptingLanguage
        {
            public void SoIWouldntNeedAllThisJunk()
            {
                (new List<int> {1, 2, 3, 4}).Select(n => IsEven(n));
            }

            private bool IsEven(int number)
            {
                return number % 2 == 0;
            }
        }

    When ReSharper gets to n => IsEven(n) it underlines the lambda with a green squiggly telling me that the code can be replaced with a method group. If I apply the refactoring the code becomes:

        public class IWishIWasAScriptingLanguage
        {
            public void SoIWouldntNeedAllThisJunk()
            {
                (new List<int> {1, 2, 3, 4}).Select(IsEven);
            }

            private bool IsEven(int number)
            {
                return number % 2 == 0;
            }
        }

    The method group syntax implies that the lambda's parameter is the same as the IsEven method's parameter. So a readable, explicit syntax has been replaced with an obfuscated, implicit syntax. That is why the method group refactoring is evil.

    Read the article

  • How to proceed on the waypoint path?

    - by Alpha Carinae
    I'm using Dijkstra's algorithm to find the shortest path and I'm drawing this path on the screen. As the character object moves, the path updates itself (it shortens as the object approaches the target and lengthens as the object moves away from it). I tried to visualize my problem. This is the beginning state: node 'A' is the target, the path is blue and the object is green. I draw this path from the object to the closest node. In this case my problem occurs: because node 'D' is closer to the object than node 'C', something like this happens (second picture). So, how can I decide that the object has passed node 'D'? The path should look like the third picture. One thing that comes to mind is to use some distance variables between the two closest nodes on the path (in this example, nodes 'C' and 'D'). If the object approaches 'C' while moving away from 'D' at the same time, that means the character has passed 'D'. However, I think there are standardized and easier ways to solve this. What approach should I take?
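
    One common way to decide this, sketched below under the assumption of an XNA/Unity-style Vector2 (the helper name is made up): project the object's position onto the segment from D toward C; once the projection is positive, D is behind the object and can be dropped from the drawn path.

        // Hypothetical helper: returns true once the object has moved past waypoint
        // 'passed' (D) in the direction of the next waypoint 'next' (C).
        bool HasPassedWaypoint(Vector2 objectPosition, Vector2 passed, Vector2 next)
        {
            Vector2 travelDirection = next - passed;      // direction along the path
            Vector2 toObject = objectPosition - passed;   // object relative to D
            // Positive dot product means the object is already on the C side of D.
            return Vector2.Dot(toObject, travelDirection) > 0f;
        }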

    Read the article

  • How to make Box2D bodies automatically return to an initial rotation

    - by sm4
    I have two long Box2D bodies that can collide while I move one of them around with a MouseJoint. I want them to try to hold their position and rotation. The blue body is moved using a MouseJoint (yellow) towards the red body. The red body has another MouseJoint - blue can push red, but red will try to return to its start point thanks to the MouseJoint - this works just fine. Both bodies correctly rotate about their middle; this is still as I want. Then I change the MouseJoint to move the blue body away. What I need is for both bodies to return to their initial rotation (the green arrows in the 'desired positions and rotations' picture). Is there anything in Box2D that could do this automatically? The MouseJoint does it nicely for position. I need it in the AndEngine (Java, Android) port, but any Box2D solution is fine. EDIT: By 'automatically' I mean something I can add to the 'Paddle' object without changing the game loop; I want to encapsulate this functionality in the object itself. I already have a Paddle object with its own UpdateHandler which is called from the game loop. What would be much nicer is to attach some kind of 'spring' joint to both the left and right sides of the paddle that would automatically level it. I will be exploring this option soon.
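
    Box2D has no built-in 'return to angle' joint for a single body, but a small rotational spring run from the paddle's own update handler gets the same effect. The sketch below is illustrative C#-style code against a Box2D-like body API (GetAngle, GetAngularVelocity, ApplyTorque - names differ slightly between ports), not AndEngine-specific code.

        // Hypothetical update step: pull the body back toward its initial angle with a
        // damped spring torque. Tune stiffness/damping to taste; for large errors the
        // angle difference should also be wrapped into the -PI..PI range.
        void ApplyRotationSpring(Body body, float initialAngle, float stiffness, float damping)
        {
            float angleError = initialAngle - body.GetAngle();
            float torque = stiffness * angleError - damping * body.GetAngularVelocity();
            body.ApplyTorque(torque);
        }

    An alternative that stays entirely inside Box2D is to anchor each end of the paddle to a static body with its own joint, much like the 'spring on both sides' idea already mentioned in the question.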

    Read the article

  • How to procedurally (create) grow an artistic (2D) tree in real-time (L-System?).

    - by lalan
    Recently I programmed an L-system module, and it got me interested further. I am also a Plants vs. Zombies junkie and really liked the concept of the Tree of Wisdom; I would love to create similar procedural art just for fun and to learn more. Question: How should I approach the process of creating an artistic tree (2D, perhaps with a fixed camera/perspective) dynamically? Ideally I would like to start with a plant (only a stem with a leaf) and grow it dynamically using some influence (input/user action) over its structure. These influences may result in different types of branching, curves in branches, spread, location of fruits, colour of flowers, etc. I want it to be really full of life/spirit. :) Plants vs. Zombies: Tree of Wisdom - it would be great to dynamically grow a similar tree, but with a lot more variation and animation happening. My background: student/programmer; I have used a few game engines (Ogre3D, cocos2d, Unity). I haven't really programmed directly with OpenGL and am trying to fix that. :) I am ready to spend considerable time on this, so please let me know about the relevant APIs, and how an expert would take on this problem. Why 2D? I think it's easier to solve the problem considering only two dimensions. Artistic inspirations: only the tree, with fruits and leaves, without the shrubs at the bottom; the large tree (visible branches, green leaves, flowers, fruits, etc.) on the left, behind the monkey; PixelJunk's Eden (art style inspiration); a procedurally generated apple tree using fractals. Please let me know if the question was easy to understand; I can elaborate further. I hope a discussion of various approaches will be helpful for everyone. You guys are awesome.
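
    Since an L-system module is already in place, the growth itself can stay string-based. The C# sketch below (illustrative axiom and rule, not tied to any engine) shows the core rewriting step; user influence can be injected by choosing between several candidate replacement rules per symbol instead of a fixed one.

        using System.Collections.Generic;
        using System.Text;

        // One rewriting pass per "growth" step; a turtle interpreter then draws the
        // string (F = draw forward, + / - = turn, [ / ] = push / pop state).
        static string Grow(string axiom, IDictionary<char, string> rules, int generations)
        {
            string current = axiom;
            for (int i = 0; i < generations; i++)
            {
                var next = new StringBuilder();
                foreach (char symbol in current)
                    next.Append(rules.TryGetValue(symbol, out var replacement) ? replacement : symbol.ToString());
                current = next.ToString();
            }
            return current;
        }

        // Example: Grow("F", new Dictionary<char, string> { ['F'] = "F[+F]F[-F]F" }, 3);

    Animating growth then becomes interpolating from one generation's drawing to the next, rather than regenerating the whole tree at once.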

    Read the article

  • Bounding volume hierarchy - linked nodes (linear model)

    - by teodron
    The scenario: a chain of points P0 ... PN, where each Pi is linked to its direct neighbours Pi-1 and Pi+1. The goal: perform efficient collision detection between any two non-adjacent links, segment (Pi, Pi+1) vs. segment (Pj, Pj+1). The question: all works treating this subject of collision detection recommend a broad phase, implemented via a bounding volume hierarchy. For a chain made of Pi nodes it can look like this: I imagine the big blue sphere containing all links, the green ones half of them, the red ones a quarter, and so on (the picture is not accurate, but it's there to help understand the question). What I do not understand is: how can such a hierarchy speed up computations between segment collision pairs if it has to be updated every frame for a deformable linear object such as a chain/wire/etc.? More clearly, what is the actual principle of a collision detection broad phase in this particular case, and how can it work when computing the bounding spheres is itself a time-consuming task that has to be done every frame (since the geometry changes)? I think I am missing a key point - if we look at the picture where the chain is in a spiral pose, we see that most spheres are already contained within half of the others or intersect them; it seems odd if this is the way it should work.
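
    The key point is that the hierarchy's topology (which leaf holds which segment, which nodes are siblings) is built once; per frame only the bounding spheres are refitted bottom-up, which is linear in the number of links and far cheaper than testing every segment pair. A rough C# sketch, where Node, SphereFromSegment and Merge are hypothetical helpers:

        // Refit the existing tree after the chain deforms: leaves re-wrap their segment,
        // internal nodes re-wrap their two children. O(n) per frame, no rebuild.
        void Refit(Node node)
        {
            if (node.IsLeaf)
            {
                node.Sphere = SphereFromSegment(node.SegmentStart, node.SegmentEnd);
                return;
            }
            Refit(node.Left);
            Refit(node.Right);
            node.Sphere = Merge(node.Left.Sphere, node.Right.Sphere);
        }

    During the query, whole subtrees whose spheres don't intersect are skipped, so even when many spheres overlap in a tight spiral, the broad phase still prunes the far-apart parts of the chain.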

    Read the article

  • cocos2d/OpenGL multitexturing problem

    - by Gajoo
    I've got a simple shader to test multitexturing; the problem is that both samplers are using the same image. The shader code is basically just this:

        vec4 mid = texture2D(u_texture, v_texCoord);
        float g = texture2D(u_guide, v_guideCoord).r;   // .r added: texture2D returns a vec4, not a float
        gl_FragColor = vec4(g, mid.g, 0.0, 1.0);

    and this is how I'm calling the draw function:

        int last_State;
        glGetIntegerv(GL_ACTIVE_TEXTURE, &last_State);

        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, getTexture()->getName());

        glActiveTexture(GL_TEXTURE1);
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, mGuideTexture->getName());

        // Note: the sampler uniforms u_texture and u_guide must also be bound to
        // texture units 0 and 1 (e.g. with glUniform1i); if both default to 0,
        // both samplers read the first texture - the symptom described below.

        ccGLEnableVertexAttribs(kCCVertexAttribFlag_TexCoords | kCCVertexAttribFlag_Position);
        glVertexAttribPointer(kCCVertexAttrib_Position, 2, GL_FLOAT, GL_FALSE, 0, vertices);
        glVertexAttribPointer(kCCVertexAttrib_TexCoords, 2, GL_FLOAT, GL_FALSE, 0, texCoord);

        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
        glDisable(GL_TEXTURE_2D);

    I've already checked that mGuideTexture->getName() and getTexture()->getName() return the correct textures, but looking at the result I can tell both samplers are reading from getTexture()->getName(). The screenshots (omitted here) show the image rendered using the code above, and the image rendered when I swap the textures passed to the samplers. I'm expecting to see green objects from the first picture with red objects hanging from the top.

    Read the article

  • How does a CS student negotiate in/after a job interview?

    - by Billy ONeal
    Alright, I've gotten to the second step in the interview process. At this point I'm working under the assumption that I might be offered a position - flying me out to Redmond would be quite an expense if they weren't at least considering me for something (crosses fingers). So, if one is offered a position, how should a CS student negotiate? I've heard a few strategies for dealing with software companies when you are being considered for a hire, but most of them assume the developer is in a powerful position: (s)he has lots of job experience and may even be overqualified for what the employer is looking for, and (s)he is part of a small pool of qualified developers, because 99% of the applications companies receive are from people who are woefully underqualified. I'm in a completely different position. I think I compare favorably to most of my fellow students, and I have been a programmer for almost 10 years, but I often still feel green compared to most of my coworkers. I'm in a position where the employer holds most of the chips; they'd be doing me quite a favor by hiring me. I think this scenario is considerably different from the one targeted by most of the advice I've seen. Above all, I don't want to be such a prick negotiating that it damages my chances of actually landing the position, even if that means not negotiating at all. How should one approach a scenario like this? P.S. If this is off topic feel free to close it - I think it's borderline, and I'm of the opinion that it's better to ask and be closed than not to ask at all ;)

    Read the article

  • Adjust sprite bounds of the visible part of texture

    - by Crazy D0G
    Is there any way to adjust the boundaries of the visible part of the sprite? To make it easier to understand: I have a texture, such as the one shown in figure 1. Then I break it into pieces and fill the resulting fragments using PRKit (the wood texture in figures 2 and 3). But the resulting fragments keep the transparent areas (green in figures 2 and 3), and when creating a sprite from a fragment it has the size of the initial texture. Is there a way to get rid of this transparency and adjust the size of the visible part (the wood texture), by OpenGL or cocos2d-x means? Maybe this helps - the draw() method from PRKit:

        void PRFilledPolygon::draw() {
            //CCNode::draw();

            glDisableClientState(GL_COLOR_ARRAY);

            // we have a pointer to vertex points so enable client state
            glBindTexture(GL_TEXTURE_2D, texture->getName());
            glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_ONE_MINUS_SRC_ALPHA);

            glVertexPointer(2, GL_FLOAT, 0, areaTrianglePoints);
            glTexCoordPointer(2, GL_FLOAT, 0, textureCoordinates);
            glDrawArrays(GL_TRIANGLES, 0, areaTrianglePointCount);

            glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);

            // Restore texture matrix and switch back to modelview matrix
            glEnableClientState(GL_COLOR_ARRAY);
        }
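
    One way to size the node to just the textured part, sketched in C# for clarity (the same arithmetic applies to cocos2d-x's CCPoint/CCRect): take the axis-aligned bounds of the fragment's polygon points and use that as the node's content size and anchor, instead of the full texture size.

        using System;
        using System.Collections.Generic;
        using System.Drawing;

        // Hypothetical helper: tight bounds of the visible (filled) polygon.
        static RectangleF VisibleBounds(IEnumerable<PointF> polygonPoints)
        {
            float minX = float.MaxValue, minY = float.MaxValue;
            float maxX = float.MinValue, maxY = float.MinValue;
            foreach (PointF p in polygonPoints)
            {
                minX = Math.Min(minX, p.X); maxX = Math.Max(maxX, p.X);
                minY = Math.Min(minY, p.Y); maxY = Math.Max(maxY, p.Y);
            }
            return new RectangleF(minX, minY, maxX - minX, maxY - minY);
        }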

    Read the article

  • Assigning valid moves on board game

    - by Kunal4536
    I am making a board game in Unity 4.3 (2D), similar to checkers. I have added an empty object at every point where a player can move and added a box collider to each empty object. I attached a click-to-move script to each player token. Now I want to assign valid moves, e.g. as shown in the picture: players can only move on the vertices of each square, and only to an adjacent vertex. Thus a token can move from the red spot to the yellow spot but not to the blue spot. There is another condition: if another player's token is on the yellow spot, then the player cannot move there; instead it will have to go from the red to the green spot. How can I find the valid moves of the player by scripting? I have another problem with click-to-move: when I click, all the tokens move to that position, but I only want to move a single token. What can I add to the script to select a specific object and then click-to-move only that object? Here is my script for click-to-move:

        var obj : Transform;
        private var hitPoint : Vector3;
        private var move : boolean = false;
        private var startTime : float;
        var speed = 1;

        function Update () {
            if (Input.GetKeyDown(KeyCode.Mouse0)) {
                var hit : RaycastHit; // no point storing this really
                var ray = Camera.main.ScreenPointToRay(Input.mousePosition);
                if (Physics.Raycast(ray, hit, 10000)) {
                    hitPoint = hit.point;
                    move = true;
                    startTime = Time.time;
                }
            }
            if (move) {
                obj.position = Vector3.Lerp(obj.position, hitPoint, Time.deltaTime * speed);
                if (obj.position == hitPoint) {
                    move = false;
                }
            }
        }
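
    A hedged sketch of one structure for the first question (the names are made up): give every board vertex a list of its adjacent vertices, and treat a move as valid only when the target is adjacent and unoccupied; the forced 'jump further' case then falls back to the neighbours of an occupied neighbour.

        using System.Collections.Generic;

        // Illustrative board graph node; fill Neighbours when the board is built.
        class BoardVertex
        {
            public List<BoardVertex> Neighbours = new List<BoardVertex>();
            public bool Occupied;
        }

        static bool IsValidMove(BoardVertex from, BoardVertex to)
        {
            return from.Neighbours.Contains(to) && !to.Occupied;
        }

    For the second problem, the RaycastHit already says which collider was clicked, so either compare hit.transform against this token before moving, or keep a single 'currently selected token' reference and move only that one.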

    Read the article

  • AndEngine Box2d game

    - by OneMoreVladimir
    I'm developing a 2D survival shooter using the Box2D extension and I've got some questions. I have two AnalogOnScreenControls whose listeners modify both sprites and bodies. I get 'TouchEventPool was exhausted' warnings, and as their number grows the game crashes randomly. I've tried putting the modification of the bodies and sprites on the UpdateThread, but that does not solve the problem. What could be the cause? I also have a class that loads all the textures at the beginning of the game. After I relaunch the game activity several times I get 'Unable to find Phys Addr for' errors and a 'green color' interface, but that doesn't happen if I clear the memory manually through the task manager before relaunching. What could be the cause? I unload my atlas at the end of the game. Finally, the game sometimes crashes at start with a NullPointerException in onResumeGame. The suggested solution is to set android:configChanges="orientation|screenSize", but my device is API 10, so it doesn't have the screenSize property, and orientation alone does not seem to help, because the game still starts in portrait mode at times (though landscape is set in the code).

    Read the article

  • Wrong faces culled in OpenGL when drawing a rectangular prism

    - by BadSniper
    I'm trying to learn OpenGL. I wrote some code for building a rectangular prism. I don't want to draw back faces, so I used glCullFace(GL_BACK) and glEnable(GL_CULL_FACE). But I keep seeing back faces when viewing from the front, and sometimes when rotating, side faces vanish. Can someone point me in the right direction?

        glPolygonMode(GL_FRONT, GL_LINE);   // draw wireframe polygons
        glColor3f(0, 1, 0);                 // set color green
        glCullFace(GL_BACK);                // don't draw back faces
        glEnable(GL_CULL_FACE);             // don't draw back faces
        glTranslatef(-10, 1, 0);            // position

        // Note: with culling on, every face must be wound consistently
        // (counter-clockwise as seen from outside the prism). As written,
        // faces 1, 5 and 6 appear to be wound clockwise, which would explain
        // the culling artifacts - reversing their vertex order should fix it.
        glBegin(GL_QUADS);
        // face 1 (bottom, y = -1)
        glVertex3f(0, -1, 0);
        glVertex3f(0, -1, 2);
        glVertex3f(2, -1, 2);
        glVertex3f(2, -1, 0);
        // face 2 (x = 2)
        glVertex3f(2, -1, 2);
        glVertex3f(2, -1, 0);
        glVertex3f(2, 5, 0);
        glVertex3f(2, 5, 2);
        // face 3 (top, y = 5)
        glVertex3f(0, 5, 0);
        glVertex3f(0, 5, 2);
        glVertex3f(2, 5, 2);
        glVertex3f(2, 5, 0);
        // face 4 (z = 2)
        glVertex3f(0, -1, 2);
        glVertex3f(2, -1, 2);
        glVertex3f(2, 5, 2);
        glVertex3f(0, 5, 2);
        // face 5 (x = 0)
        glVertex3f(0, -1, 2);
        glVertex3f(0, -1, 0);
        glVertex3f(0, 5, 0);
        glVertex3f(0, 5, 2);
        // face 6 (z = 0)
        glVertex3f(0, -1, 0);
        glVertex3f(2, -1, 0);
        glVertex3f(2, 5, 0);
        glVertex3f(0, 5, 0);
        glEnd();

    Read the article

  • my web cam not working properly in skype 12.04

    - by elvin
        oller #1 (rev 02)
        00:1d.1 USB controller: Intel Corporation 82801EB/ER (ICH5/ICH5R) USB UHCI Controller #2 (rev 02)
        00:1d.2 USB controller: Intel Corporation 82801EB/ER (ICH5/ICH5R) USB UHCI Controller #3 (rev 02)
        00:1d.3 USB controller: Intel Corporation 82801EB/ER (ICH5/ICH5R) USB UHCI Controller #4 (rev 02)
        00:1d.7 USB controller: Intel Corporation 82801EB/ER (ICH5/ICH5R) USB2 EHCI Controller (rev 02)
        00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev c2)
        00:1f.0 ISA bridge: Intel Corporation 82801EB/ER (ICH5/ICH5R) LPC Interface Bridge (rev 02)
        00:1f.1 IDE interface: Intel Corporation 82801EB/ER (ICH5/ICH5R) IDE Controller (rev 02)
        00:1f.2 IDE interface: Intel Corporation 82801EB (ICH5) SATA Controller (rev 02)
        00:1f.3 SMBus: Intel Corporation 82801EB/ER (ICH5/ICH5R) SMBus Controller (rev 02)
        00:1f.5 Multimedia audio controller: Intel Corporation 82801EB/ER (ICH5/ICH5R) AC'97 Audio Controller (rev 02)
        01:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+ (rev 10)
        01:08.0 Ethernet controller: Intel Corporation 82562EZ 10/100 Ethernet Controller (rev 01)

    This is what I got when I typed lspci in the terminal; how do I find out what type of cam it is? (The command letters I wrote here may be wrong - I wrote them from memory.) My web cam shows a green colour shade and no proper colour. Please help me fix this problem. Second option: if I buy a Microsoft web cam, will the image be corrected? I think my web cam is a Hyundai Electronics make.

    Read the article

  • How to refactor when all your development is on branches?

    - by Mark
    At my company, all of our development (bug fixes and new features) is done on separate branches. When a change is complete, we send it off to QA, who test it on that branch, and when they give us the green light we merge it into our main branch. This can take anywhere between a day and a year. If we try to squeeze any refactoring in on a branch, we don't know how long that branch will be 'out' for, so it can cause many conflicts when it's merged back in. For example, let's say I want to rename a function because the feature I'm working on makes heavy use of it and I found that its name doesn't really fit its purpose (again, this is just an example). So I find every usage of this function, rename them all to the new name, and everything works perfectly, so I send it off to QA. Meanwhile, new development is happening, and my renamed function doesn't exist on any of the branches being forked off main. When my issue gets merged back in, they're all going to break. Is there any way of dealing with this? It's not like management will ever approve a refactor-only issue, so it has to be squeezed in with other work. It can't be developed directly on main because all changes have to go through QA, and no one wants to be the jerk who broke main so that he could do a little bit of non-essential refactoring.

    Read the article

  • External monitor problems with Asus UX32VD in 12.04

    - by rsilva
    I've posted a video where I show one of the problems I'm facing with my UX32VD in regard to external monitors. As you can see, as soon as I plug in the external monitor the laptop screen goes black and the external monitor displays a single colour (it cycles between red, green and blue). However, this does not always happen. For instance, if I reboot the laptop with the external monitor already plugged in, I get to use it, but not the laptop screen - in the Displays settings the laptop screen isn't even listed. Other times it just works: the two screens, without any problems. I tried with two other external monitors and all these apparently random issues repeat; I even tried using an HDMI cable instead - same random behaviour. This question seems to cover one of these issues, but the answer did not help me. I was forced to use the 3.5.2 kernel in order to use the Intel graphics card, otherwise the 'Gallium something' was used. Also, I have Bumblebee installed.

    Read the article

  • How to Generate a Create Table DDL Script Along With Its Related Tables

    - by Compudicted
    Have you ever wondered, when creating table diagrams in SQL Server Management Studio (SSMS), how slickly you can add related tables to a diagram by just right-clicking on the interesting table name? Have you also ever needed to script those related tables, including the master one, and then discovered you have dozens of related tables? Or maybe had no SSMS at your disposal? That was me one day. Well, creativity to the rescue! I Binged and Googled around until I found more or less what I wanted, but it all involved T-SQL - long and convoluted CROSS APPLYs - until I saw a PowerShell solution that I quickly adapted to my needs (I am not referencing any particular author because it was a mashup):

        ###########################################################################################################
        # Created by: Arthur Zubarev on Oct 14, 2012                                                              #
        # Synopsys: Generate file containing the root table CREATE (DDL) script along with all its related tables #
        ###########################################################################################################

        [System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO') | out-null

        $RootTableName = "TableName" # The table name, no schema name needed

        $srv = New-Object Microsoft.SqlServer.Management.Smo.Server("TargetSQLServerName")
        $conContext = $srv.ConnectionContext
        $conContext.LoginSecure = $True
        # In case integrated security is not used, uncomment below
        #$conContext.Login = "sa"
        #$conContext.Password = "sapassword"
        $db = New-Object Microsoft.SqlServer.Management.Smo.Database
        $db = $srv.Databases.Item("TargetDatabase")

        $scrp = New-Object Microsoft.SqlServer.Management.Smo.Scripter($srv)
        $scrp.Options.NoFileGroup = $True
        $scrp.Options.AppendToFile = $False
        $scrp.Options.ClusteredIndexes = $False
        $scrp.Options.DriAll = $False
        $scrp.Options.ScriptDrops = $False
        $scrp.Options.IncludeHeaders = $True
        $scrp.Options.ToFileOnly = $True
        $scrp.Options.Indexes = $False
        $scrp.Options.WithDependencies = $True
        $scrp.Options.FileName = 'C:\TEMP\TargetFileName.SQL'

        $smoObjects = New-Object Microsoft.SqlServer.Management.Smo.UrnCollection
        Foreach ($tb in $db.Tables)
        {
            Write-Host -ForegroundColor Yellow "Table name being processed" $tb.Name

            If ($tb.IsSystemObject -eq $FALSE -and $tb.Name -eq $RootTableName) # feel free to customize the selection condition
            {
                Write-Host -ForegroundColor Magenta $tb.Name "table and its related tables added to be scripted."
                $smoObjects.Add($tb.Urn)
            }
        }

        # The actual act of scripting
        $sc = $scrp.Script($smoObjects)

        Write-Host -ForegroundColor Green $RootTableName "and its related tables have been scripted to the target file."

    Enjoy!

    Read the article

  • How to apply verification and validation on the following example

    - by user970696
    I have been following verification and validation questions here with my colleagues, yet we are unable to see the slight differences, probably because of a language barrier in technical English. An example: Requirement specification - the user wants to control the lights in 4 rooms by remote commands sent from the UI, for each room separately. Functional specification - the UI will contain 4 checkboxes labelled according to the rooms they control; when a checkbox is checked, a signal is sent to the corresponding light and a green dot appears next to the checkbox; when a checkbox is unchecked, a turn-off signal is sent to the corresponding light and a red dot appears next to the checkbox. Let me start with what I learned here: verification, according to many great answers here, ensures that the product reflects the specified requirements - as the functional spec is written by the producer based on the customer's requirements, it will be verified for completeness and correctness. Then the design document will be checked against the functional spec (does it design the 4 checkboxes?), and the source code against the design (is there code for the 4 checkboxes, functions to send the signals, etc. - is it traceable to requirements?). Okay, the product is built and we need to test it, i.e. validate it. Here comes our understanding trouble: validation should ensure the product meets the requirements for its specific intended use, which is basically the business requirement (does it work? can I control the lights from the UI?), but testers will definitely work from the functional spec, making sure the checkboxes are there, working, labelled, etc. They are basically checking whether the requirements in the functional spec were met in the final product - isn't that verification? (It should not be; let's stick to ISO 12207, where only validation is the actual testing.)

    Read the article

  • What to do when TDD tests reveal new functionality that is needed that also needs tests?

    - by Joshua Harris
    What do you do when you are writing a test and you get to the point where you need to make the test pass, and you realize that you need an additional piece of functionality that should be separated into its own function? That new function needs to be tested as well, but the TDD cycle says to make a test fail, make it pass, then refactor. If I am on the step where I am trying to make my test pass, I'm not supposed to go off and start another failing test to test the new functionality that I need to implement. For example, I am writing a point class that has a function WillCollideWith(LineSegment):

        public class Point
        {
            // Point data and constructor ...

            public bool CollidesWithLine(LineSegment lineSegment)
            {
                Vector PointEndOfMovement = new Vector(Position.X + Velocity.X,
                                                       Position.Y + Velocity.Y);
                LineSegment pointPath = new LineSegment(Position, PointEndOfMovement);

                if (lineSegment.Intersects(pointPath))
                    return true;

                return false;
            }
        }

    I was writing a test for CollidesWithLine when I realized that I would need a LineSegment.Intersects(LineSegment) function. But should I just stop what I am doing in my test cycle to go create this new functionality? That seems to break the 'Red, Green, Refactor' principle. Should I just write the code that detects that line segments intersect inside the CollidesWithLine function and refactor it after it is working? That would work in this case, since I can access the data from LineSegment, but what about cases where that kind of data is private?
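
    One common approach, shown as an illustrative sketch (the test framework attribute and the Point2 stand-in type are assumptions, not from the post): stub LineSegment.Intersects just enough for the current test to compile, jot the missing behaviour on the test list, finish the current red-green-refactor cycle, and then test-drive Intersects in its own cycle, for example:

        // Hypothetical NUnit-style test driving the new collaborator separately.
        [Test]
        public void Intersects_ReturnsTrue_WhenSegmentsCross()
        {
            var first = new LineSegment(new Point2(0, 0), new Point2(2, 2));
            var second = new LineSegment(new Point2(0, 2), new Point2(2, 0));

            Assert.IsTrue(first.Intersects(second));
        }

    When the collaborator's data is private, the same idea applies behind an interface or a test double, which keeps the CollidesWithLine test from depending on intersection details at all.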

    Read the article

  • How to shift a vector based on the rotation of another vector?

    - by bpierre
    I'm learning 2D programming, so excuse my approximations, and please don't hesitate to correct me. I am just trying to fire a bullet from a player. I'm using HTML canvas (top-left origin). Here is a representation of my problem: the black vector represents the position of the player (the grey square). The green vector represents its direction. The red disc represents the target. The red vector represents the direction of a bullet, which will move in the direction of the target (red dotted line). The blue cross represents the point from which I really want to fire the bullet (and the blue dotted line represents its movement). This is how I draw the player (this is the player object; position, direction and dimensions are 2D vectors):

        ctx.save();
        ctx.translate(this.position.x, this.position.y);
        ctx.rotate(this.direction.getAngle());
        ctx.drawImage(this.image,
                      Math.round(-this.dimensions.x/2),
                      Math.round(-this.dimensions.y/2),
                      this.dimensions.x,
                      this.dimensions.y);
        ctx.restore();

    This is how I instantiate a new bullet:

        var bulletPosition = playerPosition.clone(); // Copy of the player position
        // Difference between the player and the target, normalized
        var bulletDirection = Vector2D.substract(targetPosition, playerPosition).normalize();
        new Bullet(bulletPosition, bulletDirection);

    This is how I move the bullet (this is the bullet object):

        var speed = 5;
        this.position.add(Vector2D.multiply(this.direction, speed));

    And this is how I draw the bullet (this is the bullet object):

        ctx.save();
        ctx.translate(this.position.x, this.position.y);
        ctx.rotate(this.direction.getAngle());
        ctx.fillRect(0, 0, 3, 3);
        ctx.restore();

    How can I change the direction and position vectors of the bullet to ensure it is on the blue dotted line? I think I should represent the shift with a vector, but I can't see how to use it.
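
    A sketch of the missing step, written in C# for consistency with the other examples here (the math drops straight into the canvas code above, and the Vector2 type is assumed): treat the blue cross as a fixed offset in the player's local space, rotate that offset by the player's facing angle, and use the result both as the bullet's start position and as the point from which the red direction vector is measured.

        using System;

        // Hypothetical Vector2 with X/Y fields and + operator (XNA-style).
        static Vector2 MuzzleWorldPosition(Vector2 playerPosition, Vector2 playerDirection, Vector2 localOffset)
        {
            float angle = (float)Math.Atan2(playerDirection.Y, playerDirection.X);
            float cos = (float)Math.Cos(angle);
            float sin = (float)Math.Sin(angle);

            // Standard 2D rotation of the local offset by the facing angle.
            var rotated = new Vector2(
                localOffset.X * cos - localOffset.Y * sin,
                localOffset.X * sin + localOffset.Y * cos);

            return playerPosition + rotated;
        }

    The bullet direction then becomes normalize(targetPosition - muzzleWorldPosition), so the blue dotted path starts at the cross instead of at the player's centre.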

    Read the article

  • Halloween: Season for Java Embedded Internet of Spooky Things (IoST) (Part 5)

    - by hinkmond
    So, here's the finished product. I have 8 networked Raspberry Pi devices strategically placed around our Oracle Santa Clara Building 21 office. I attached a JFET-transistor-based EMF sensor to each device to capture any strange fluctuations in the electromagnetic field (which, supposedly, paranormal spirits can change as they pass by). And I have a Web app (embedded in this page) which can take the readings and show a graphical display in real time. As you can see, all the Raspberry Pi devices are blinking away green, indicating they are all operational and all sensors are working correctly. But I don't see anything... Darn... Maybe I have to stare at the Web app for a while. I don't know when the "alleged" ghosts in our Oracle Santa Clara office are supposed to be active, but let me know if you see anything... Oh, and by the way, Happy Halloween from the Internet of Spooky Things! See the previous posts for the full series on the steps to this cool demo: Halloween: Season for Java Embedded Internet of Spooky Things (IoST) (Part 1) Halloween: Season for Java Embedded Internet of Spooky Things (IoST) (Part 2) Halloween: Season for Java Embedded Internet of Spooky Things (IoST) (Part 3) Halloween: Season for Java Embedded Internet of Spooky Things (IoST) (Part 4) Halloween: Season for Java Embedded Internet of Spooky Things (IoST) (Part 5) Hinkmond

    Read the article

  • Google is not treating two Australian schools as separate sites when both are subdomains of qld.edu.au

    - by LuckySpoon
    My question relates to two websites, each of which is a "Calvary Christian College", but in two totally different locations and entirely unrelated to each other (except by name, and thus domain). All schools in the state are issued a <school-name>.qld.edu.au subdomain, in this case calvary.qld.edu.au and calvarycc.qld.edu.au. Now, what's interesting is that these domains are crossing each other in sitelinks for searches such as 'calvary christian college townsville'. The green data here is for one school (the Townsville school, as per the search term), and the red data is for the other school. I put in a demotion for this 6 months ago (we control calvary.qld.edu.au), but we're seeing no change on the results page. I have been able to get the owners of calvarycc.qld.edu.au to submit demotions for our domain, which should go in sometime in the next few days. What can we do to tell Google that these websites are not interchangeable, despite both appearing as "subdomains" of qld.edu.au? We can possibly open channels of communication with the administrators of qld.edu.au, but we will need to tell them what to change, and at this point I'm out of ideas.

    Read the article

  • Tomcat + Spring + CI workflow

    - by ex3v
    We're starting our very first project with Spring and the Java web stack. This project will mainly be about rewriting a quite large ERP/CRM from Zend Framework to Java. An important factor in my question is that I come from PHP territory, where things (in terms of quality) tend to look different than in the Java world. Facts: there will be 2-3 developers; at least one developer uses Windows, the rest use Linux; there is one remote Linux-based machine, which should handle the test and production instances; after struggling with buggy legacy code, we want to introduce good programming and development practices (CI, tests, clean code and so on); the client is internal, with frequent business logic changes, scrum and daily deployments. What I want to achieve is a good workflow across as many development stages as possible (coding - committing - testing - deploying). The problem is that I've never done this before, so I don't know what the best practices are. What I have so far is: developers code locally; there is a Vagrant instance on every development machine, managed by Puppet, containing the same Linux, Jenkins and Tomcat versions as the production machine; while coding, the developer deploys to the Vagrant machine; after a local merge to the test branch, Jenkins on the Vagrant box runs the tests; when everything is fine, the developer pushes commits and merges; Jenkins on the remote machine pulls the commit from the test branch, runs tests and so on; if everything looks green, Jenkins deploys to the test Tomcat instance. Deployment to production is manual (although it can be done using helper scripts) once the business logic has been tested by other divisions and everything looks fine to the client. Now, the real question: does the above make any sense? Things that I'm not sure about: Remote machine - won't there be problems with two (or even three, as Jenkins might need one) instances of the same app on Tomcat? Using Vagrant to develop in a PHP environment is just wise, but isn't it overkill with Tomcat? I mean, is there a higher probability that Tomcat will act the same on every machine? And does it make sense to have a local Jenkins on Vagrant?

    Read the article

  • Moving 2d camera in the y direction

    - by Alex
    I'm developing a simple game for the iPhone and am struggling to work out the best way for the camera to follow the main character. The picture highlights the three main components: the circle (the main character), the green line (terrain) and the black background. The terrain is simply made from an array of points (approx. 20 points per screen width). The terrain is moved in the x direction relative to the black background in order to keep the circle in the position shown. The distance to move the terrain is simply movex = circle.position.x - terrain.position.x, with a constant to fix the circle at some distance from the left of the screen. I am struggling to determine the best way to position the terrain in the y direction while keeping the focus on the character. I want to move the terrain in the y direction smoothly, and not fix it to the position of the circle, so the circle can still move in the y direction. If I take the same approach as the x positioning, the character is fixed at a point on the screen and the terrain moves. I could sample some terrain points either side of the character and produce an average, but in my implementation this was not smooth. I thought another approach might be to create a camera 'line' that is a smooth version of the terrain line and make the camera follow that, but I'm not sure if this is the optimum solution. Any advice is much appreciated!
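
    One simple scheme, sketched below (illustrative C#, frame-rate dependent as written): keep a separate camera y that eases toward a target each frame, where the target can be the circle's y or an average of a few terrain heights around it. This gives the smooth 'camera line' effect without locking the view to the character.

        // Ease the camera's y toward the target; a smoothing factor around 0.05-0.1 feels soft.
        static float UpdateCameraY(float cameraY, float targetY, float smoothing)
        {
            return cameraY + (targetY - cameraY) * smoothing;
        }

        // e.g. cameraY = UpdateCameraY(cameraY, circle.Position.Y, 0.08f);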

    Read the article
