Search Results

Search found 25518 results on 1021 pages for 'iterative development'.

Page 480/1021

  • iOS Game that Runs Continuously in Background

    - by user2913669
    I'm trying to understand the most logical way of creating an iOS game that runs continuously in the background. For example: you have a tower and enemy waves, and the waves keep coming even after the game exits. When you open the game again, it retrieves the data for what happened while the app was closed. I assume a database on a server would be the best solution: the values continuously increment on the server, and the game connects to it and retrieves the specific user's updated game data.
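
    A minimal sketch of the usual trick behind this kind of "always running" game (whether the counter lives on a server or locally): persist a timestamp, then derive how many waves elapsed on the next launch instead of actually simulating in the background. All names here are illustrative, not from any particular engine.

        #include <chrono>
        #include <cstdint>

        struct SavedState {
            std::int64_t lastUpdateUnixSeconds; // persisted on exit (locally or server-side)
            std::int64_t wavesSurvived;
        };

        // Advance the saved state by however much real time has passed.
        SavedState catchUp(SavedState state, double secondsPerWave) {
            using namespace std::chrono;
            const std::int64_t now =
                duration_cast<seconds>(system_clock::now().time_since_epoch()).count();
            const std::int64_t elapsed = now - state.lastUpdateUnixSeconds;
            if (elapsed > 0 && secondsPerWave > 0.0) {
                state.wavesSurvived += static_cast<std::int64_t>(elapsed / secondsPerWave);
                state.lastUpdateUnixSeconds = now;
            }
            return state;
        }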

    Read the article

  • State changes in entities or components

    - by GriffinHeart
    I'm having some trouble figuring out how to deal with state management in my entities. I don't have trouble with game state management, like pause and menus, since those are not handled by the entity component system; just with state in entities/components. Drawing from Orcs Must Die as an example, I have my MainCharacter and Trap entities, which only have components like PositionComponent, RenderComponent and PhysicsComponent. On each update the entity calls update on its components. I also have a generic EventManager with listeners for different event types. Now I need to be able to place the traps: first select the trap and the trap position, then place the trap. While being placed, a trap should appear in front of the MainCharacter, be rendered in a different way and follow the character around. Once placed, it should just respond to collisions and be rendered in the normal way. How is this usually handled in component based systems? (This example is specific, but it should help figure out the general way to deal with entity states.)
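
    One common answer, sketched here in generic C++ (the names and layout are invented for illustration, not taken from the question): keep the placement state in a small component and let each system branch on it, so the entity never has to change type.

        #include <cstdint>

        // A per-entity state component that the render and physics systems read.
        enum class TrapState : std::uint8_t { BeingPlaced, Placed };

        struct TrapStateComponent {
            TrapState state = TrapState::BeingPlaced;
        };

        // A placement system flips the state when the player confirms placement;
        // the render system draws a translucent "ghost" while state == BeingPlaced,
        // and the physics system only reports collisions once state == Placed.
        void confirmPlacement(TrapStateComponent& trap) {
            trap.state = TrapState::Placed;
        }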

    Read the article

  • I am looking to create realistic car movement using vectors

    - by bobthemac
    I have googled how to do this and found this article: http://www.helixsoft.nl/articles/circle/sincos.htm I have had a go at it, but most of the functions it showed didn't exist for me, so I just got errors. I have looked at the cos and sin functions but don't understand how to use them or how to get the car movement working correctly using vectors. I have no code because I am not sure what to do, sorry. Any help is appreciated. EDIT: I am restricted to using the TL engine for my game and am not allowed to add any sort of physics engine. It must be programmed in C++. Here is a sample of what I got from trying to follow what was done in the link I provided:

        if (myEngine->KeyHeld(Key_W)) { length += carSpeedIncrement; }
        if (myEngine->KeyHeld(Key_S)) { length -= carSpeedIncrement; }
        if (myEngine->KeyHeld(Key_A)) { angle -= carSpeedIncrement; }
        if (myEngine->KeyHeld(Key_D)) { angle += carSpeedIncrement; }

        carVolocityX = cos(angle);
        carVolocityZ = cos(angle);

        car->MoveX(carVolocityX * frameTime);
        car->MoveZ(carVolocityZ * frameTime);
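
    For comparison, a minimal sketch of the decomposition the linked article describes (illustrative names, not the TL engine API). Note that the snippet above uses cos() for both axes, which collapses the motion onto a diagonal; one axis normally uses cos and the other sin, scaled by the speed, and the angle must be in radians for std::cos/std::sin.

        #include <cmath>

        struct CarState {
            float angleRadians = 0.0f; // heading
            float speed = 0.0f;        // units per second
        };

        // Turn a heading angle and a speed into per-axis movement for this frame.
        void moveCar(const CarState& car, float frameTime, float& outDeltaX, float& outDeltaZ) {
            const float velocityX = std::cos(car.angleRadians) * car.speed;
            const float velocityZ = std::sin(car.angleRadians) * car.speed;
            outDeltaX = velocityX * frameTime;
            outDeltaZ = velocityZ * frameTime;
        }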

    Read the article

  • Is it considered poor programming to do this with xna components?

    - by Rob
    I created my own Menu System that is event driven. In order to have a loading screen and multithreaded loading work, I devised this sort of implementation:

        // Let's check if the game is done loading.
        if (_game != null)
        {
            _gameLoaded = _game.DoneLoading;
        }

        // This means the game is still loading,
        // therefore the loading screen should be active.
        if (!_gameLoaded && _gameActive)
        {
            _gameScreenList[2].UpdateMenu();
        }

        // The loading screen was selected.
        if (_gameScreenList[2].CurrentState == GameScreen.State.Shown && !_gameActive)
        {
            Components.Add(_game = new ParadoxGame(this));
            _game.Initialize(); // Initializes the Game so that the loading can begin.
            _gameActive = true;
        }

    In the XNA game component that contains the actual game, the LoadContent method simply creates a new Thread that calls another method, ThreadLoad, which does all the actual loading. I also have a boolean variable called DoneLoading in the game component that is set to true at the end of ThreadLoad. I am wondering if this is a poor implementation.
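
    A generic sketch of the same "done loading" flag pattern (the question's code is C#/XNA; this C++ version is only illustrative): the flag is made atomic so the update loop can safely poll it while the loader thread writes it, which is the main thing worth double-checking in an implementation like this.

        #include <atomic>
        #include <thread>

        class GameLoader {
        public:
            void startLoading() {
                worker_ = std::thread([this] {
                    loadContent();  // all of the actual loading happens off the main thread
                    doneLoading_.store(true, std::memory_order_release);
                });
            }
            bool doneLoading() const { return doneLoading_.load(std::memory_order_acquire); }
            ~GameLoader() { if (worker_.joinable()) worker_.join(); }

        private:
            void loadContent() { /* load textures, models, sounds, ... */ }
            std::atomic<bool> doneLoading_{false};
            std::thread worker_;
        };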

    Read the article

  • Designing a simple snake A.I

    - by DillPixel
    I've looked at some material online about this specific topic, and a lot of what I read involved graphs and pathfinding. I really don't want to get involved in something too complex and beyond my level, and my snake doesn't need to be that intelligent (it will be a large board, and the snake does not grow on every munch). How could you structure a simpler AI for the snake that gets the job done reasonably well? I can get the snake to move toward the food item correctly; my issue is that I'm not sure how to deal with the snake colliding with itself. Say the snake has a look-ahead and finds that its tail is in the way: it could change direction, but what happens next? Any ideas on how to tackle this? Should the snake build an instruction set from every square, or should it think on the go?
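
    A minimal "think on the go" sketch, under the assumption that the board is a grid and the snake body cells are known (all names invented): move greedily toward the food, but skip any direction whose next cell is a wall or part of the body. On a large board with a non-growing snake this gets the job done most of the time, though the snake can still trap itself in rare cases.

        #include <array>
        #include <cstdlib>
        #include <set>
        #include <utility>

        using Cell = std::pair<int, int>; // (x, y)

        // Pick the next cell for the head; returns the head itself if every direction is blocked.
        Cell chooseStep(const Cell& head, const Cell& food,
                        const std::set<Cell>& body, int width, int height) {
            static const std::array<Cell, 4> dirs{{{1, 0}, {-1, 0}, {0, 1}, {0, -1}}};
            Cell best = head;
            int bestDist = -1;
            for (const Cell& d : dirs) {
                Cell next{head.first + d.first, head.second + d.second};
                if (next.first < 0 || next.second < 0 ||
                    next.first >= width || next.second >= height || body.count(next))
                    continue; // blocked by a wall or the snake's own body
                int dist = std::abs(food.first - next.first) + std::abs(food.second - next.second);
                if (bestDist < 0 || dist < bestDist) { bestDist = dist; best = next; }
            }
            return best;
        }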

    Read the article

  • How can I plot a radius of all reachable points with pathfinding for a Mob?

    - by PugWrath
    I am designing a tactical turn based game. The maps are 2d, but do have varying level-layers and blocking objects/terrain. I'm looking for an algorithm for pathfinding which will allow me to show an opaque shape representing all of the possible max-distance pixels that a mob can move to, knowing the mob's max pixel distance. Any thoughts on this, or do I just need to write a good pathfinding algorithm and use it to find the cutoff points for any direction in which an obstacle exists?
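
    A sketch of the approach most tactical games use for this, assuming the map can be sampled as a grid of walkable cells (for example one cell per tile or per few pixels; everything here is illustrative): a budget-limited Dijkstra flood fill from the mob's cell marks everything reachable within the movement budget, and the outline of the marked cells is the shape to draw.

        #include <functional>
        #include <queue>
        #include <vector>

        struct Node { float cost; int x, y; };
        bool operator>(const Node& a, const Node& b) { return a.cost > b.cost; }

        // Returns a grid of flags: true where the mob can reach within maxDistance.
        std::vector<std::vector<bool>> reachable(const std::vector<std::vector<bool>>& walkable,
                                                 int startX, int startY, float maxDistance) {
            const int h = static_cast<int>(walkable.size());
            const int w = static_cast<int>(walkable[0].size());
            std::vector<std::vector<float>> best(h, std::vector<float>(w, 1e30f));
            std::vector<std::vector<bool>> mark(h, std::vector<bool>(w, false));
            std::priority_queue<Node, std::vector<Node>, std::greater<Node>> open;
            best[startY][startX] = 0.0f;
            open.push({0.0f, startX, startY});
            const int dx[8] = {1, -1, 0, 0, 1, 1, -1, -1};
            const int dy[8] = {0, 0, 1, -1, 1, -1, 1, -1};
            while (!open.empty()) {
                Node n = open.top(); open.pop();
                if (n.cost > best[n.y][n.x]) continue; // stale entry
                mark[n.y][n.x] = true;
                for (int i = 0; i < 8; ++i) {
                    int nx = n.x + dx[i], ny = n.y + dy[i];
                    if (nx < 0 || ny < 0 || nx >= w || ny >= h || !walkable[ny][nx]) continue;
                    float step = (i < 4) ? 1.0f : 1.41421356f; // diagonals cost sqrt(2)
                    float c = n.cost + step;
                    if (c <= maxDistance && c < best[ny][nx]) { best[ny][nx] = c; open.push({c, nx, ny}); }
                }
            }
            return mark;
        }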

    Read the article

  • HUD layer not being added on my scene

    - by Shailesh_ios
    I have a CCScene which already holds my game layer, and I am trying to add a HUD layer to it. But the HUD layer is not getting added to my scene. I can say that because I have set up a CCLabel on the HUD layer, and when I run my project I cannot see that label. Here's what I am doing in my game layer:

        +(id) scene
        {
            CCScene *scene = [CCScene node];

            GameScreen *layer = [GameScreen node];
            [scene addChild: layer];

            HUDclass *otherLayer = [HUDclass node];
            [scene addChild:otherLayer];
            layer.HC = otherLayer; // HC is a reference to my HUD layer in the @interface of the game layer

            return scene;
        }

    And then in my HUD layer I have just added a CCLabelTTF in its init method, like this:

        -(id)init
        {
            if ((self = [super init]))
            {
                CCLabelTTF *label = [CCLabelTTF labelWithString:@"IN WEAPON CLASS" fontName:@"Arial" fontSize:15];
                label.position = ccp(240, 160);
                [self addChild:label];
            }
            return self;
        }

    But when I run my project I don't see that label. What am I doing wrong here? Any ideas? Thanks in advance for your time.

    Read the article

  • How can I get accurate collision resolution on the corners of rectangles?

    - by ssb
    I have a working collision system implemented, based on minimum translation vectors. This works fine in most cases, except when the minimum translation vector is not actually in the direction of the collision. For example: when a rectangle is near the far edge on any side of another rectangle, a force (in this example, downward) can push one rectangle into the other, particularly into a static object like a wall or a floor. As in the picture, the collision is coming from above, but because it happens on the very edge, the resolution translates the box to the left instead of back up. I've searched for a while for an approach, but everything I can find deals with general corner collisions, whereas my problem only occurs in this one limited case. Any suggestions?
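
    One common fix, sketched here for axis-aligned boxes under the assumption that the previous frame's position is available (none of these names come from the question): choose the resolution axis from where the box came from rather than from the smallest overlap, so a box pressed down onto the edge of a floor is pushed back up instead of sideways.

        struct AABB { float x, y, w, h; }; // top-left origin

        bool overlapX(const AABB& a, const AABB& b) { return a.x < b.x + b.w && b.x < a.x + a.w; }
        bool overlapY(const AABB& a, const AABB& b) { return a.y < b.y + b.h && b.y < a.y + a.h; }

        // prev = the moving box last frame, curr = the moving box now, wall = the static box.
        void resolve(const AABB& prev, AABB& curr, const AABB& wall) {
            if (!overlapX(curr, wall) || !overlapY(curr, wall)) return; // not colliding
            const bool cameFromAboveOrBelow = !overlapY(prev, wall);
            if (cameFromAboveOrBelow) {
                // Push out vertically, back toward where the box came from.
                curr.y = (prev.y < wall.y) ? wall.y - curr.h : wall.y + wall.h;
            } else {
                // Otherwise push out horizontally.
                curr.x = (prev.x < wall.x) ? wall.x - curr.w : wall.x + wall.w;
            }
        }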

    Read the article

  • Wavefront mesh: determine which face a point belongs to?

    - by Mina Samy
    I have a 3D mesh in a Wavefront .obj file. Is there an algorithm that takes arbitrary point coordinates as input and determines which face of the mesh that point belongs to? The mesh is rendered on the screen and the user clicks on it; I want to determine which part of the mesh the user clicked on. Here's the code, using LibGDX:

        Vector3 intersection = new Vector3();
        Ray ray = camera.getPickRay(x, y);
        // vertices is an array that holds the coordinates of the mesh
        boolean ok = Intersector.intersectRayTriangles(ray, vertices, intersection);

    Thanks
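
    That call only reports whether the ray hits and the nearest hit point; it does not say which triangle was hit. A generic sketch of the usual alternative (plain C++, not the LibGDX API): test the pick ray against each triangle yourself with a ray-triangle test and keep the index of the nearest hit.

        #include <array>
        #include <cmath>
        #include <cstddef>
        #include <vector>

        struct Vec3 { float x, y, z; };
        static Vec3  sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
        static Vec3  cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
        static float dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }

        // Möller–Trumbore ray/triangle test; on a hit, t is the distance along the ray.
        static bool rayTriangle(Vec3 orig, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2, float& t) {
            const float eps = 1e-7f;
            Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
            Vec3 p = cross(dir, e2);
            float det = dot(e1, p);
            if (std::fabs(det) < eps) return false; // ray is parallel to the triangle
            float inv = 1.0f / det;
            Vec3 s = sub(orig, v0);
            float u = dot(s, p) * inv;
            if (u < 0.0f || u > 1.0f) return false;
            Vec3 q = cross(s, e1);
            float v = dot(dir, q) * inv;
            if (v < 0.0f || u + v > 1.0f) return false;
            t = dot(e2, q) * inv;
            return t >= 0.0f;
        }

        // triangles[i] holds the three corners of face i; returns the nearest hit face, or -1.
        int pickFace(Vec3 orig, Vec3 dir, const std::vector<std::array<Vec3, 3>>& triangles) {
            int best = -1;
            float bestT = 1e30f;
            for (std::size_t i = 0; i < triangles.size(); ++i) {
                float t;
                if (rayTriangle(orig, dir, triangles[i][0], triangles[i][1], triangles[i][2], t) && t < bestT) {
                    bestT = t;
                    best = static_cast<int>(i);
                }
            }
            return best;
        }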

    Read the article

  • How can I get into the educational market?

    - by mmyers
    I believe that my current game project is very well-suited for educational gaming; so well-suited, in fact, that I know of several different schools (one community college and at least one or two high schools) that have used versions of it at some time or another. And that's without any such marketing on my part. I'd like to expand on this part of the potential user base. But I have absolutely no experience in dealing with school administrations. How can I break into this market enough to be noticed? And on a side note, could marketing the game as educational kill the gamers market?

    Read the article

  • Which physics phenomena can be simulated properly with Box2D or Bullet physics? [on hold]

    - by user3585425
    Knowing that Box2D and Bullet cannot simulate Newton's cradle (because multiple bodies are in contact at the same time, if I understand correctly), is there a set of physics phenomena involving two or more objects that can still be simulated properly? For example, I'm thinking about lightweight objects launched toward heavyweight objects. If the lightweight object is destroyed on contact, it would not make a difference whether the energy is transmitted correctly on impact.

    Read the article

  • Confusion on HLSL Samplers. Can I Set Samplers Inside Functions?

    - by Kyle Connors
    I'm trying to create a system where I can instance a quad to the screen, but I've run into a problem. Like I said, I'm trying to instance the quad, so I'm using the same geometry several times and drawing it in one draw call. The issue is that I want some quads to use different textures, but I can't figure out how to get that data into a sampler so I can use it in the pixel shader. I figured that since we can simply pass in the 4 bytes of our IDirect3DTexture9* to set the global texture, I could do the same when passing in my dynamic buffer (which also stores each object's world matrix and UV data). Now that I'm sending the data, I can't figure out how to get it into the sampler, and I'm starting to assume it simply isn't possible. Is there any way I could achieve this?

    Read the article

  • Creating Gun objects with upgrades?

    - by zardon
    I have a series of guns in my game. I use the Gun class/object like this (just an example):

        @interface Gun : NSObject {
            NSString *name;          // Six-shooter
            NSNumber *cost;
            NSNumber *clipPrice;     // ie: 700
            NSNumber *clipCapacity;  // 6
            NSNumber *ammoCapacity;  // 6
            NSNumber *damage;        // 0-10
            NSNumber *accuracy;      // 0-10
            NSNumber *fireRate;      // 0-10
            NSNumber *range;         // 0-10
            // Not sure if I have all the stats, but this is fine for now
        }

    Let's say I want to have 3 upgrades per gun. My problem is that I am not sure how to do this. Examples:

    - increase fire rate
    - increase range
    - increase accuracy
    - silencer
    - double ammo capacity (ie: drum)
    - double clip capacity (ie: taped magazine)

    So my question is: I'd like to implement an upgrade system for guns, but I am not sure how to do it. Would there be an Upgrade object that is a child of the Gun class, or would it be a separate class altogether? Thanks for your time.
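
    One way to model this, sketched in generic C++ (the question's class is Objective-C, and all of these names are invented for illustration): treat an upgrade as a small stat-modifier record that the gun owns and applies on top of its base stats, rather than as a subclass of Gun.

        #include <string>
        #include <vector>

        struct GunStats {
            int   clipCapacity = 6;
            int   ammoCapacity = 6;
            float damage = 5, accuracy = 5, fireRate = 5, range = 5;
        };

        struct Upgrade {
            std::string name;               // "Drum", "Silencer", ...
            float fireRateBonus = 0, rangeBonus = 0, accuracyBonus = 0;
            float clipMultiplier = 1, ammoMultiplier = 1;
        };

        struct Gun {
            GunStats base;
            std::vector<Upgrade> upgrades;  // e.g. capped at 3 per gun

            GunStats effectiveStats() const {
                GunStats s = base;
                for (const Upgrade& u : upgrades) {
                    s.fireRate += u.fireRateBonus;
                    s.range    += u.rangeBonus;
                    s.accuracy += u.accuracyBonus;
                    s.clipCapacity = static_cast<int>(s.clipCapacity * u.clipMultiplier);
                    s.ammoCapacity = static_cast<int>(s.ammoCapacity * u.ammoMultiplier);
                }
                return s;
            }
        };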

    Read the article

  • Looking for a royalty-free sci-fi sounding song that's 1:00+ long and costs <= $5

    - by CyanPrime
    I'm looking for a royalty-free sci-fi sounding song that's at least 1:00 long and costs $5 USD or less. I want a nice BGM for the engine demo I'm going to release for a game I'm planning on taking commercial. I don't want to spend too much money on it, so my limit is $5 USD, and I want it to be at least 1:00 in length. Where should I look? Or even better, do you have a link to a song that meets the criteria?

    Read the article

  • What is a good practice for 2D scene graph partitioning for culling?

    - by DevilWithin
    I need an efficient way to cull scene graph objects so that only the ones in view are rendered, as fast as possible. I am thinking of doing it the following way: each object has a local bounding box, which holds the object's own bounds, and a global bounding box, which holds the bounds of the object and all of its children. When the camera moves, the render list is updated by traversing the global bounding boxes. When only an object moves, it tries to enlarge or shrink its ancestors' global bounding boxes and, depending on the result, the render list is updated or not. What do you think of this approach? Do you think it will provide fast and efficient culling? Also, because the render list is a contiguous list, it should accelerate rendering, right? Any further tips for 2D scene graphs are highly appreciated!
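
    A minimal sketch of the traversal described above, assuming axis-aligned boxes (the structure and names are illustrative): a whole subtree is skipped as soon as its global box misses the view rectangle, and an object is added to the render list only if its local box is visible.

        #include <vector>

        struct Rect { float x, y, w, h; };

        bool intersects(const Rect& a, const Rect& b) {
            return a.x < b.x + b.w && b.x < a.x + a.w &&
                   a.y < b.y + b.h && b.y < a.y + a.h;
        }

        struct Node {
            Rect localBounds;            // this object only
            Rect globalBounds;           // this object plus all children
            std::vector<Node*> children;
        };

        void collectVisible(const Node& node, const Rect& view, std::vector<const Node*>& renderList) {
            if (!intersects(node.globalBounds, view)) return; // cull the whole subtree
            if (intersects(node.localBounds, view)) renderList.push_back(&node);
            for (const Node* child : node.children) collectVisible(*child, view, renderList);
        }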

    Read the article

  • Scale DIV with tiles

    - by user15350
    I am trying to create a repeating background. I have a main DIV with a grid of small 16x16 DIVs, and I am trying to scale the main DIV in CSS. When the small DIVs simply have a red background color, everything works great, but when there is a background image in the small DIVs, borders become visible between the tiles. This image explains the problem: http://cl.ly/FpNW/o Check the HTML in these examples: With BG-COLOR: http://jsfiddle.net/pTLXw/ With BG-IMG: http://jsfiddle.net/vkpuY/ Does anyone know what is causing this problem and how to fix it? If it is not possible to fix while using DIVs, is there another way to do this? Thank you so much!

    Read the article

  • About floating point precision and why do we still use it

    - by system_is_b0rken
    Floating point has always been troublesome for precision on large worlds. This article explains the behind-the-scenes details and offers the obvious alternative, fixed-point numbers. Some facts are really impressive, like: "Well 64 bits of precision gets you to the furthest distance of Pluto from the Sun (7.4 billion km) with sub-micrometer precision." Sub-micrometer precision is more than any FPS needs (for positions and even velocities), and it would let you build really big worlds. My question is: why do we still use floating point if fixed point has such advantages? Most rendering APIs and physics libraries use floating point (and suffer its disadvantages, so developers need to work around them). Is fixed point so much slower? Additionally, how do you think scalable planetary engines like Outerra or Infinity handle the large scale? Do they use fixed point for positions, or do they have some space-dividing algorithm?
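
    For concreteness, a sketch of what the quoted 64-bit figure looks like in practice (illustrative, not taken from the article): store positions as 64-bit integer counts of a fixed unit such as a micrometer, and only convert to floats relative to the camera, where the values stay small.

        #include <cstdint>

        // 64-bit micrometer counts give roughly +/- 9.2 billion km of range at 1 um
        // resolution, which is where the quoted Pluto figure comes from.
        struct FixedVec3 {
            std::int64_t x, y, z; // micrometers
        };

        struct FloatVec3 { float x, y, z; };

        // Produce a camera-relative float position (in meters) for rendering or local physics.
        FloatVec3 toCameraSpace(const FixedVec3& p, const FixedVec3& camera) {
            const double metersPerUnit = 1e-6;
            return { static_cast<float>((p.x - camera.x) * metersPerUnit),
                     static_cast<float>((p.y - camera.y) * metersPerUnit),
                     static_cast<float>((p.z - camera.z) * metersPerUnit) };
        }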

    Read the article

  • Rendering output to an arbitrary quadrilateral

    - by Trainee4Life
    I want to render output on a device to an arbitrary quadrilateral, i.e. project a texture onto a quad. What are the possible ways I could implement this? So far I have investigated:

    - Drawing a textured quadrilateral - quads look odd because they are composed of triangles, and the distortion looks wrong. The issue I'm facing has been discussed here and here as well.
    - Setting a transformation on the device - I need help getting this implemented.
    - Pixel shaders - I have not been able to achieve the desired effect.

    Any help would be much appreciated.

    Read the article

  • XNA - Am I screwing up the LoadContent for Texture2D?

    - by Bombcode
    I've read forum threads and questions and I've done just about everything, so I need to know what I am screwing up here. Here's the code in the constructor:

        Content.RootDirectory = "GameStateContent";
        //Content.RootDirectory = "Content";

    And this is in the LoadContent method:

        menu = this.Content.Load<Texture2D>("mainmenu");

    And here's a screenshot of the folder structure: http://i.imgur.com/HnndE.png Any help on this? Thanks.

    Read the article

  • Bump mapping Problem GLSL

    - by jmfel1926
    I am having a slight problem with my bump mapping project. Although everything works OK (at least from what I know), there is a slight mistake somewhere and I get incorrect shading on the brick wall when the light moves to one side or the other, as seen in the picture below: The light is on the right side, so the shading on the wall should be the other way around. I have provided the shaders to help find the issue (I do not have much experience with shaders). Vertex shader:

        varying vec3 viewVec;
        varying vec3 position;
        varying vec3 lightvec;

        attribute vec3 tangent;
        attribute vec3 binormal;

        uniform vec3 lightpos;
        uniform mat4 cameraMat;

        void main()
        {
            gl_TexCoord[0] = gl_MultiTexCoord0;
            gl_Position = ftransform();

            position = vec3(gl_ModelViewMatrix * gl_Vertex);
            lightvec = vec3(cameraMat * vec4(lightpos, 1.0)) - position;

            vec3 eyeVec = vec3(gl_ModelViewMatrix * gl_Vertex);
            viewVec = normalize(-eyeVec);
        }

    Fragment shader:

        uniform sampler2D colormap;
        uniform sampler2D normalmap;

        varying vec3 viewVec;
        varying vec3 position;
        varying vec3 lightvec;

        vec3 vv;

        uniform float diffuset;
        uniform float specularterm;
        uniform float ambientterm;

        void main()
        {
            vv = viewVec;

            vec3 normals = normalize(texture2D(normalmap, gl_TexCoord[0].st).rgb * 2.0 - 1.0);
            normals.y = -normals.y;
            //normals = (normals * gl_NormalMatrix).xyz;

            vec3 distance = lightvec;
            float dist_number = length(distance);
            float final_dist_number = 2.0 / pow(dist_number, diffuset);

            vec3 light_dir = normalize(lightvec);
            vec3 Halfvector = normalize(light_dir + vv);

            float angle = max(dot(Halfvector, normals), 0.0);
            angle = pow(angle, specularterm);
            vec3 specular = vec3(angle, angle, angle);

            float diffuseterm = max(dot(light_dir, normals), 0.0);
            vec3 diffuse = diffuseterm * texture2D(colormap, gl_TexCoord[0].st).rgb;
            vec3 ambient = ambientterm * texture2D(colormap, gl_TexCoord[0].st).rgb;

            vec3 diffusefinal = diffuse * final_dist_number;
            vec3 finalcolor = diffusefinal + specular + ambient;

            gl_FragColor = vec4(finalcolor, 1.0);
        }

    Read the article

  • Confusion about Rotation matrices from Euler Angles

    - by xEnOn
    I am trying to learn more about Euler angles to help myself understand how to control my camera better in the game. I came across the following formula, which converts Euler angles to a rotation matrix. In the equation, I could see that the first matrix from the left is the rotation matrix about the x-axis, the second is about the y-axis and the third is about the z-axis. From my understanding of ordinary matrix transformations, the later transformation is always applied on the right-hand side. If I'm right about this, then the above equation should have a rotation order that starts by rotating about the z-axis, then the y-axis, then finally the x-axis. But from the symbols it seems that the rotation order starts with the x-axis, then the y-axis, then finally the z-axis. What should the actual order of the rotation be? Also, I am confused about whether the input vector would, in this case, be a row vector on the left or a column vector on the right.
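
    The formula referenced above does not survive in this listing; a hedged reconstruction of the composition the question seems to describe, with the x-axis factor leftmost (the angle names are assumed):

        R(\phi,\theta,\psi) = R_x(\phi)\,R_y(\theta)\,R_z(\psi) =
        \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\phi & -\sin\phi \\ 0 & \sin\phi & \cos\phi \end{pmatrix}
        \begin{pmatrix} \cos\theta & 0 & \sin\theta \\ 0 & 1 & 0 \\ -\sin\theta & 0 & \cos\theta \end{pmatrix}
        \begin{pmatrix} \cos\psi & -\sin\psi & 0 \\ \sin\psi & \cos\psi & 0 \\ 0 & 0 & 1 \end{pmatrix}

    With a column vector multiplied on the right (v' = R v), the rightmost factor acts first, so this composition rotates about z, then y, then x; with a row vector on the left, the order reverses.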

    Read the article

  • XNA model drawing problem

    - by user1990950
    When using this code:

        public static void DrawModel(Model model, Vector3 position, Vector3 offset,
                                     float xRotation, float yRotation, float zRotation,
                                     float allrot, float xScale, float yScale, float zScale)
        {
            position.Y *= -1;
            offset.Y *= -1;

            Matrix worldMatrix =
                ((Matrix.CreateRotationZ(MathHelper.ToRadians(zRotation))
                  * Matrix.CreateRotationX(MathHelper.ToRadians(xRotation)))
                  * Matrix.CreateRotationY(MathHelper.ToRadians(yRotation)))
                * (Matrix.CreateTranslation(offset) * Matrix.CreateRotationY(MathHelper.ToRadians(allrot)))
                * Matrix.CreateScale(xScale, yScale, zScale);

            worldMatrix *= Matrix.CreateTranslation(position)
                * theCamera.GetTransformation()
                * Matrix.CreateTranslation(new Vector3(-(graphics.GraphicsDevice.Viewport.Width / 2),
                                                       graphics.GraphicsDevice.Viewport.Height / 2, 0));

            foreach (ModelMesh mesh in model.Meshes)
            {
                for (int i = 0; i < mesh.Effects.Count; i++)
                {
                    ((BasicEffect)mesh.Effects[i]).EnableDefaultLighting();
                    ((BasicEffect)mesh.Effects[i]).World = worldMatrix;
                    ((BasicEffect)mesh.Effects[i]).View = viewMatrix;
                    ((BasicEffect)mesh.Effects[i]).Projection = projectionMatrix;
                }
                mesh.Draw();
            }
        }

    The model rotates and then scales. It should scale and then rotate, but whenever I try to change it, it won't work.
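
    For reference, a sketch of the ordering rule this hinges on, assuming XNA's row-vector convention where a vertex is multiplied on the left of the world matrix: the leftmost factor is applied to the vertex first, i.e.

        v' = v \, M_{\mathrm{scale}} \, M_{\mathrm{rotation}} \, M_{\mathrm{translation}}

    so for the scale to act before the rotation, Matrix.CreateScale would need to be the leftmost factor in the product rather than the last one, as it is in the snippet above.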

    Read the article

  • Why is my shadowmap all white?

    - by Berend
    I was trying out a shadow map, but my shadow map comes out all white. I think there is some problem with my homogeneous component. Can anybody help me? The rest of my code is written in XNA. Here is the HLSL code I used:

        float4x4 xWorld;
        float4x4 xView;
        float4x4 xProjection;

        struct VertexToPixel
        {
            float4 Position  : POSITION;
            float4 ScreenPos : TEXCOORD1;
            float  Depth     : TEXCOORD2;
        };

        struct PixelToFrame
        {
            float4 Color : COLOR0;
        };

        //------- Technique: ShadowMap --------

        VertexToPixel MyVertexShader(float4 inPos: POSITION0, float3 inNormal: NORMAL0)
        {
            VertexToPixel Output = (VertexToPixel)0;

            float4x4 preViewProjection = mul(xView, xProjection);
            float4x4 preWorldViewProjection = mul(xWorld, preViewProjection);

            Output.Position = mul(inPos, mul(xWorld, preViewProjection));
            Output.Depth = Output.Position.z / Output.Position.w;
            Output.ScreenPos = Output.Position;

            return Output;
        }

        float4 MyPixelShader(VertexToPixel PSIn) : COLOR0
        {
            PixelToFrame Output = (PixelToFrame)0;
            Output.Color = PSIn.ScreenPos.z / PSIn.ScreenPos.w;
            return Output.Color;
        }

        technique ShadowMap
        {
            pass Pass0
            {
                VertexShader = compile vs_2_0 MyVertexShader();
                PixelShader = compile ps_2_0 MyPixelShader();
            }
        }

    Read the article

  • Possible to draw a select portion of a render target? (in XNA)

    - by TheBroodian
    I'm going to try to do this in reverse fashion and skip straight to the punch line, then give the back story afterward: is it possible, after drawing a scene to a RenderTarget2D, to draw only a select portion of that RenderTarget2D if I don't want the entire thing? I'm using xTile to manage world data in my game (it's a great piece of work; colinvella [xTile's author] has made an amazing product), and for the most part it works great. xTile supports parallax effects in its layers to add wonderful depth to 2D scenes, which was great until I implemented a dynamic split-screen system into my game. I wanted to make a co-op game that wouldn't require players to be in close proximity to each other, so if the players separate too far, the single full-screen viewport 'snaps apart' and is replaced by two split-screen viewports, which then smoothly transition to their respective player targets. The effect is pretty smooth, aside from the parallax backgrounds becoming skewed once the viewports split, because xTile's ratio for handling parallax effects depends on viewport size. This is unfortunate, because the effect would otherwise be really snazzy, but the backgrounds are heavily affected when the game goes from a single viewport to multiple viewports. So colinvella suggests using render targets to record the scene at full viewport size and then drawing only a portion of it. But as far as I can tell, that isn't even possible? That being said, I've never used render targets before, so I'm still learning, hence the question here.

    Read the article

  • Could someone explain why my world reconstructed from depth position is incorrect?

    - by yuumei
    I am attempting to reconstruct the world position in the fragment shader from a depth texture. I pass in the 8 frustum points in world space, interpolate them across fragments, and then interpolate from near to far by the depth:

        highp float depth = (2.0 * CameraPlanes.x) /
            (CameraPlanes.y + CameraPlanes.x - texture( depthTexture, textureCoord ).x * (CameraPlanes.y - CameraPlanes.x));

        // Reconstruct the world position from the linear depth
        highp vec3 world = mix( nearWorldPos, farWorldPos, depth );

    CameraPlanes.x is the near plane and CameraPlanes.y is the far plane. Assuming that my frustum positions are correct and my depth looks correct, why is my world position wrong? (My depth texture is of format GL_DEPTH_COMPONENT32F, if that matters.) Thanks! :D

    Update: screenshot of the world position: http://imgur.com/sSlHd You can see it looks nearly correct, but as the camera moves the colours (positions) change, which they shouldn't. I can get this to work if I write this into the depth attachment in the previous pass:

        gl_FragDepth = gl_FragCoord.z / gl_FragCoord.w / CameraPlanes.y;

    and then read the depth texture like so:

        depth = texture( depthTexture, textureCoord ).x;

    However, this kills the hardware z-buffer optimizations.

    Read the article
