Search Results

Search found 31839 results on 1274 pages for 'plugin development'.

Page 565/1274 | < Previous Page | 561 562 563 564 565 566 567 568 569 570 571 572  | Next Page >

  • Image 1 becomes image 2 with sliding effect from left to right?

    - by Paul
    I would like to show a second image appearing while a "door" is closing on my character. I've got my character in the middle of the screen and a door coming from the left. When the door passes my character, I would like to have this second image appear little by little. So far I've gotten by with fading out the character and then fading in my second image of the character at the same position once the door is completely closed, but I would like to have both of them at the same time (the effect that image 1 becomes image 2 while the door is sliding from left to right). Would you know how to do this with Cocos2d? Here are the images: at first, the character is blue, and the door is coming from the left. Then, behind the black door, the character becomes red, but only behind this door, so it stays blue where the door is not on him, and will become completely red once the door has passed the character. EDIT: with this code, the black door hides the red and blue rectangles (and if I add each of my layers at a different depth, and only use GL_LESS, same thing):

        blue.position  = ccp( size.width*0.5 , size.height/2 );
        red.position   = ccp( size.width*0.46 , size.height/2 );
        black.position = ccp( size.width*0.1 , size.height/2 );

        glEnable(GL_DEPTH_TEST);
        [batch addChild:red z:0];
        [batch addChild:black z:2];
        glDepthFunc(GL_GREATER);
        [batch addChild:blue z:1];
        glDepthFunc(GL_LESS);

        id action1 = [CCMoveTo actionWithDuration:3 position:ccp(size.width,size.height/2)];
        [black runAction: [CCSequence actions:action1, nil]];

    Read the article

  • Moving objects colliding when using unaligned collision avoidance (steering)

    - by James Bedford
    I'm having trouble with unaligned collision avoidance for what I think is a rare case. I have set two objects to move towards each other but with a slight offset, so one of the objects is moving slightly upwards and the other slightly downwards. In my unaligned collision avoidance steering algorithm, I'm finding the points on the object's forward line and the other object's forward line where these two lines are closest. If these closest points are within the collision avoidance distance, and if the distance between them is smaller than the sum of the two objects' bounding-sphere radii, then the objects should steer away in the appropriate direction. The problem is that in my case the closest points on the lines are calculated to be really far away from the actual collision point, because the two forward lines diverge as the objects pass each other. Because of this, no steering takes place and the two objects partially collide. Does anyone have any suggestions as to how I can correctly calculate the point of collision? Perhaps by somehow taking into account the size of the two objects?
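
    A sketch of one common fix, assuming simple 2D vectors (the names Vec2 and predictCollision are mine, not from the post): instead of intersecting the two forward lines, treat both objects as moving spheres, compute the time of closest approach from their relative position and velocity, and compare the predicted separation at that time against the sum of the radii.

        struct Vec2 { float x, y; };

        static Vec2  sub(Vec2 a, Vec2 b) { return Vec2{a.x - b.x, a.y - b.y}; }
        static float dot(Vec2 a, Vec2 b) { return a.x * b.x + a.y * b.y; }

        // Returns true if the two spheres will come within (rA + rB) of each other
        // inside the look-ahead window, and writes the time of closest approach.
        bool predictCollision(Vec2 posA, Vec2 velA, float rA,
                              Vec2 posB, Vec2 velB, float rB,
                              float lookAheadTime, float* tClosest)
        {
            Vec2 relPos = sub(posB, posA);
            Vec2 relVel = sub(velB, velA);
            float relSpeedSq = dot(relVel, relVel);

            // Nearly parallel relative velocity: the closest approach is "now".
            float t = (relSpeedSq > 1e-6f) ? -dot(relPos, relVel) / relSpeedSq : 0.0f;
            if (t < 0.0f || t > lookAheadTime)
                return false;   // closest approach is in the past or too far ahead

            // Predicted separation at the time of closest approach.
            Vec2 sep = Vec2{relPos.x + relVel.x * t, relPos.y + relVel.y * t};
            *tClosest = t;
            return dot(sep, sep) < (rA + rB) * (rA + rB);
        }

    Steering away from the other object's predicted position at tClosest, rather than from the line-line closest point, handles exactly the "lines diverge as the objects pass" case described above.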

    Read the article

  • Getting to math applications gradually

    - by den-javamaniac
    I'm currently getting a formal degree related to computation; in particular, my current focus is numerical programming, scientific computing and machine learning. I'd love to apply that knowledge in game dev and expand it with statistics, probability theory, and graph theory (probably even linear algebra). The question is: which spheres of gamedev are filled with such math, is it possible to advance in them without being part of a group of people, and how does one get into it gradually? P.S.: I've got experience with commercial Java dev and am getting my hands on C/C++ at the moment; however, I'm open to going ahead and trying Unity3D, etc.

    Read the article

  • Reversi/Othello early-game evaluation function

    - by Vladislav Il'ushin
    I've written my own Reversi player, based on the MiniMax algorithm with alpha-beta pruning, but in the first 10 moves my evaluation function is too slow. I need a good early-game evaluation function. I'm trying to do it with this matrix (corresponding to the board), which determines how favourable each square is to hold:

        { 30, -25, 10,  5,  5, 10, -25,  30 },
        {-25, -25,  1,  1,  1,  1, -25, -25 },
        { 10,   1,  5,  2,  2,  5,   1,  10 },
        {  5,   1,  2,  1,  1,  2,   1,   5 },
        {  5,   1,  2,  1,  1,  2,   1,   5 },
        { 10,   1,  5,  2,  2,  5,   1,  10 },
        {-25, -25,  1,  1,  1,  1, -25, -25 },
        { 30, -25, 10,  5,  5, 10, -25,  30 },

    But it doesn't work well. Have you ever written an early-game evaluation function for Reversi?
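
    A sketch of how such a weight table is typically combined with mobility in the opening (assumed data layout; the legal-move counts are taken as inputs rather than recomputed here):

        // board[r][c]: +1 for the side to evaluate, -1 for the opponent, 0 for empty.
        static const int WEIGHTS[8][8] = {
            { 30, -25, 10,  5,  5, 10, -25,  30 },
            {-25, -25,  1,  1,  1,  1, -25, -25 },
            { 10,   1,  5,  2,  2,  5,   1,  10 },
            {  5,   1,  2,  1,  1,  2,   1,   5 },
            {  5,   1,  2,  1,  1,  2,   1,   5 },
            { 10,   1,  5,  2,  2,  5,   1,  10 },
            {-25, -25,  1,  1,  1,  1, -25, -25 },
            { 30, -25, 10,  5,  5, 10, -25,  30 },
        };

        int evaluateEarlyGame(const int board[8][8], int myMoves, int opponentMoves)
        {
            int positional = 0;
            for (int r = 0; r < 8; ++r)
                for (int c = 0; c < 8; ++c)
                    positional += WEIGHTS[r][c] * board[r][c];

            // In the opening, mobility (number of legal moves) usually matters more
            // than the raw positional sum; the factor 8 is a tuning knob, not a law.
            int mobility = myMoves - opponentMoves;
            return positional + 8 * mobility;
        }

    If the first ten moves are slow, the usual culprit is the cost of the evaluation at every interior node; a cheap positional-plus-mobility score like this, combined with move ordering that tries the high-weight squares first, keeps alpha-beta fast in the opening.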

    Read the article

  • Why don't C++ game developers use the Boost library?

    - by James
    So if you spend any time viewing or answering questions over on Stack Overflow under the C++ tag, you will quickly notice that just about everybody uses the Boost library; some would even say that if you aren't using it, you're not writing "real" C++ (I disagree, but that's not the point). But then there is the game industry, which is well known for using C++ and not using Boost. I can't help but wonder why that is. I don't care to use Boost because I write games (now) as a hobby, and part of that hobby is implementing what I need when I am able to and using off-the-shelf libraries when I can't. But that is just me. Why don't game developers, in general, use the Boost library? Is it performance or memory concerns? Style? Something else? I was about to ask this on Stack Overflow, but I figured the question is better asked here. EDIT: I realize I can't speak for all game programmers and I haven't seen all game projects, so I can't say game developers never use Boost; this is simply my experience. Allow me to edit my question to also ask: if you do use Boost, why did you choose to use it?

    Read the article

  • Game engine like Unity 3D that allows me to use .NET code

    - by Pking
    I've been looking at Unity 3D for developing a 3D PC game and I really like the scene editor and how it simplifies the process of constructing 3D scenes, managing assets, animations, transitions, etc. However, I don't want to restrict myself to using Unity 3D scripts for handling every bit of game logic in the game. For example, if I want to construct an RPG dialogue system, I don't want to do it with Unity 3D scripts; I'd like to use C#/.NET. Also, I might want to use, say, Windows Azure and SQL Azure as a backend, and use third-party .NET libraries such as Reactive Extensions. Is there a .NET engine out there that helps me with asset loading, animations, physics, transitions, etc., with a scene editor, but allows me to plug it into a Visual Studio .NET project? Thanks

    Read the article

  • What would be a good filter to create 'magnetic deformers' from a depth map?

    - by sebf
    In my project, I am creating a system for deforming a highly detailed mesh (clothing) so that it 'fits' a convex mesh. To do this I use depth maps of the item and the 'hull' to determine at what point in world space the deviation occurs and the extent. Simply transforming all occluded vertices to the depths as defined by the 'hull' is fairly effective, and has good performance, but it suffers the problem of not preserving the features of the mesh and requires extensive culling to avoid false-positives. I would like instead to generate from the depth deviation map a set of simple 'deformers' which will 'push'* all vertices of the deformed mesh outwards (in world space). This way, all features of the mesh are preserved and there is no need to have complex heuristics to cull inappropriate vertices. I am not sure how to go about generating this deformer set however. I am imagining something like an algorithm that attempts to match a spherical surface to each patch of contiguous deviations within a certain range, but do not know where to start doing this. Can anyone suggest a suitable filter or algorithm for generating deformers? Or to put it another way 'compressing' a depth map? (*Push because its fitting to a convex 'bulgy' humanoid so transforms are likely to be 'spherical' from the POV of the surface.)
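
    A minimal sketch of the "fit a sphere per contiguous deviation patch" idea (my names; it assumes the deviating pixels have already been grouped into a blob of world-space samples, e.g. by a flood fill over the deviation map): the blob's centroid, its enclosing radius, and its maximum deviation give one push deformer.

        #include <algorithm>
        #include <cmath>
        #include <vector>

        struct Vec3 { float x, y, z; };
        struct SphereDeformer { Vec3 center; float radius; float maxPush; };

        // blobPoints: world-space positions of one contiguous deviating patch (non-empty).
        // blobDeviations: the depth deviation measured at each of those samples.
        SphereDeformer fitDeformer(const std::vector<Vec3>& blobPoints,
                                   const std::vector<float>& blobDeviations)
        {
            SphereDeformer d = { {0.0f, 0.0f, 0.0f}, 0.0f, 0.0f };
            for (const Vec3& p : blobPoints) {
                d.center.x += p.x; d.center.y += p.y; d.center.z += p.z;
            }
            float n = (float)blobPoints.size();
            d.center.x /= n; d.center.y /= n; d.center.z /= n;

            for (size_t i = 0; i < blobPoints.size(); ++i) {
                float dx = blobPoints[i].x - d.center.x;
                float dy = blobPoints[i].y - d.center.y;
                float dz = blobPoints[i].z - d.center.z;
                d.radius  = std::max(d.radius, std::sqrt(dx * dx + dy * dy + dz * dz));
                d.maxPush = std::max(d.maxPush, blobDeviations[i]);  // how far to push vertices out
            }
            return d;
        }

    Vertices inside a deformer's radius can then be pushed outwards along their normals, with the push falling off from maxPush at the centre to zero at the radius, which preserves the garment's local detail instead of flattening it onto the hull.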

    Read the article

  • The purpose of using inverse and transpose

    - by user699215
    In OpenGL ES and the world of 3D, why use the inverse matrix? The thing is that I don't have any intuition for why it is used, so please correct me: as far as I understand, it is used in shaders, and can help you figure out the opposite direction of the normals? The inverse of an ordinary number works like this: the product of a number and its multiplicative inverse is 1; observe that 3/5 * 5/3 = 1. For a matrix, this gives you the identity matrix, which is the base coordinate system, or the origin of world space, right? But the inverse is some other coordinate system? You can use the transpose (row-major order to column-major order) of a square matrix to find the inverted matrix, since calculating the inverse is process-heavy and the transpose gives you the inverted matrix as a by-product? Again, I am looking to get some intuition about this, and thereby be able to use it as intended. Thank you for any reply that will guide me in the right direction. Regards
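
    For intuition, the place this combination shows up most often is the "normal matrix"; here is a sketch using GLM (assumed to be available; any matrix library works the same way). Normals are transformed with the inverse-transpose of the model matrix's upper-left 3x3 so they stay perpendicular to the surface under non-uniform scaling, and only for orthogonal matrices (pure rotations) is the transpose itself the inverse, which is where the cheap "use the transpose instead of computing the inverse" shortcut comes from.

        #include <glm/glm.hpp>

        // Inverse-transpose of the model matrix's 3x3 part: correct for normals
        // even when the model matrix contains non-uniform scale.
        glm::mat3 normalMatrix(const glm::mat4& model)
        {
            return glm::transpose(glm::inverse(glm::mat3(model)));
        }

    So the transpose is not a by-product that equals the inverse in general; the two only coincide for rotation-style (orthogonal) matrices. And the inverse view matrix really is "some other coordinate system": it maps from eye space back to world space.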

    Read the article

  • Collision Detection problems in Voxel Engine (XNA)

    - by Darestium
    I am creating a Minecraft-like terrain engine in XNA and have had some collision problems for quite some time. I have checked and changed my code based on other people's collision code and I still have the same problem: it always seems to be off by about a block. For instance, if I walk across a bridge which is one block high, I fall through it. Also, if you walk towards a "row" of blocks like this, you are able to stand "inside" the leftmost one, and you collide with nothing on the rightmost side (where there is no block and it is not visible in this image). Here is all my collision code:

        private void Move(GameTime gameTime, Vector3 direction)
        {
            float speed = playermovespeed * (float)gameTime.ElapsedGameTime.TotalSeconds;
            Matrix rotationMatrix = Matrix.CreateRotationY(player.Camera.LeftRightRotation);
            Vector3 rotatedVector = Vector3.Transform(direction, rotationMatrix);
            rotatedVector.Normalize();

            Vector3 testVector = rotatedVector;
            testVector.Normalize();

            Vector3 movePosition = player.position + testVector * speed;
            Vector3 midBodyPoint = movePosition + new Vector3(0, -0.7f, 0);
            Vector3 headPosition = movePosition + new Vector3(0, 0.1f, 0);

            if (!world.GetBlock(movePosition).IsSolid &&
                !world.GetBlock(midBodyPoint).IsSolid &&
                !world.GetBlock(headPosition).IsSolid)
            {
                player.position += rotatedVector * speed;
            }
            //player.position += rotatedVector * speed;
        }

        ...

        public void UpdatePosition(GameTime gameTime)
        {
            player.velocity.Y += playergravity * (float)gameTime.ElapsedGameTime.TotalSeconds;

            Vector3 footPosition = player.Position + new Vector3(0f, -1.5f, 0f);
            Vector3 headPosition = player.Position + new Vector3(0f, 0.1f, 0f);

            // If the block below the player is solid the Y velocity should be zero
            if (world.GetBlock(footPosition).IsSolid || world.GetBlock(headPosition).IsSolid)
            {
                player.velocity.Y = 0;
            }

            UpdateJump(gameTime);
            UpdateCounter(gameTime);
            ProcessInput(gameTime);

            player.Position = player.Position + player.velocity * (float)gameTime.ElapsedGameTime.TotalSeconds;
            velocity = Vector3.Zero;
        }

    and the one and only function in the camera class:

        protected void CalculateView()
        {
            Matrix rotationMatrix = Matrix.CreateRotationX(upDownRotation) * Matrix.CreateRotationY(leftRightRotation);
            lookVector = Vector3.Transform(Vector3.Forward, rotationMatrix);
            cameraFinalTarget = Position + lookVector;
            Vector3 cameraRotatedUpVector = Vector3.Transform(Vector3.Up, rotationMatrix);
            viewMatrix = Matrix.CreateLookAt(Position, cameraFinalTarget, cameraRotatedUpVector);
        }

    which is called when the rotation variables are changed:

        public float LeftRightRotation
        {
            get { return leftRightRotation; }
            set { leftRightRotation = value; CalculateView(); }
        }

        public float UpDownRotation
        {
            get { return upDownRotation; }
            set { upDownRotation = value; CalculateView(); }
        }

    World class:

        public Block GetBlock(int x, int y, int z)
        {
            if (InBounds(x, y, z))
            {
                Vector3i regionalPosition = GetRegionalPosition(x, y, z);
                Vector3i region = GetRegionPosition(x, y, z);
                return regions[region.X, region.Y, region.Z]
                    .Blocks[regionalPosition.X, regionalPosition.Y, regionalPosition.Z];
            }
            return new Block(BlockType.none);
        }

        public Vector3i GetRegionPosition(int x, int y, int z)
        {
            int regionx = x == 0 ? 0 : x / Variables.REGION_SIZE_X;
            int regiony = y == 0 ? 0 : y / Variables.REGION_SIZE_Y;
            int regionz = z == 0 ? 0 : z / Variables.REGION_SIZE_Z;
            return new Vector3i(regionx, regiony, regionz);
        }

        public Vector3i GetRegionalPosition(int x, int y, int z)
        {
            int regionx = x == 0 ? 0 : x / Variables.REGION_SIZE_X;
            int X = x % Variables.REGION_SIZE_X;
            int regiony = y == 0 ? 0 : y / Variables.REGION_SIZE_Y;
            int Y = y % Variables.REGION_SIZE_Y;
            int regionz = z == 0 ? 0 : z / Variables.REGION_SIZE_Z;
            int Z = z % Variables.REGION_SIZE_Z;
            return new Vector3i(X, Y, Z);
        }

    Any ideas how to fix this problem? EDIT 1: graphic of the problem. EDIT 2: GetBlock, Vector3 version:

        public Block GetBlock(Vector3 position)
        {
            int x = (int)Math.Floor(position.X);
            int y = (int)Math.Floor(position.Y);
            int z = (int)Math.Ceiling(position.Z);
            Block block = GetBlock(x, y, z);
            return block;
        }

    Now, the thing is I tested the theory that the Z is always "off by one", and by ceiling the value it actually works as intended, although it could still be much more accurate (when you go down holes you can see through the sides, and I doubt it will work with negative positions). It also does not feel clean to floor the X and Y values and just ceiling the Z. I am surely still not doing something correctly.
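
    A sketch of the convention that usually removes this kind of off-by-one (shown in C++ for brevity; the same rule applies to the XNA code above with Math.Floor): decide that block (x, y, z) occupies the world-space box [x, x+1) on every axis, and derive block indices by flooring every axis, with the renderer using exactly the same convention. Needing Math.Ceiling on just one axis is typically a symptom of the renderer and the collision code disagreeing by half a block or a whole block on that axis, not something to keep.

        #include <cmath>

        struct BlockIndex { int x, y, z; };

        // Block containing a world-space position, assuming block (x, y, z) spans
        // [x, x+1) x [y, y+1) x [z, z+1) in world space.
        BlockIndex worldToBlock(float wx, float wy, float wz)
        {
            // std::floor, not a plain int cast: (int)-0.5f is 0, but the block is -1.
            return BlockIndex{ (int)std::floor(wx), (int)std::floor(wy), (int)std::floor(wz) };
        }

    The same care is needed in GetRegionPosition/GetRegionalPosition: integer division and modulo round toward zero in C#, so negative coordinates need a floored division (or an offset into positive space) or the region lookups will also be off by one.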

    Read the article

  • Procedural object generation and unique identification

    - by 2080
    My question relates to procedural content generation and data management of the emerging objects in a database. I assume a networked game with a server-client model. Unspecified objects in the game world are generated while the game is running with procedural algorithms (for example Perlin noise). The players (clients) can modify the properties of these objects, but have to notify the server of these changes. How could this communication address unique objects, so that both the server and the client know which object they are speaking of? Not only the inner properties of the objects can differ, but also visible ones, such as the position. When the player wants to select one of these objects, the game has to find out the ID. Does anyone know which methods or algorithms can accomplish that?
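
    One common answer (an assumption on my part, not from the post): because the objects are generated deterministically, the server and client can both derive the same ID locally by hashing the world seed together with the generation coordinates (and, if needed, an object-type tag), so no ID table ever has to be sent over the network. A sketch:

        #include <cstdint>

        // Stable 64-bit ID from the world seed and the object's generation coordinates.
        // FNV-1a style mixing; any hash both sides agree on works.
        uint64_t proceduralObjectId(uint64_t worldSeed, int32_t x, int32_t y, int32_t z)
        {
            uint64_t h = 0xcbf29ce484222325ULL ^ worldSeed;
            const uint64_t prime = 0x100000001b3ULL;
            const uint64_t parts[3] = { (uint64_t)(uint32_t)x,
                                        (uint64_t)(uint32_t)y,
                                        (uint64_t)(uint32_t)z };
            for (uint64_t p : parts) {
                h ^= p;
                h *= prime;
            }
            return h;
        }

    Player modifications then live in a server-side map keyed by this ID (an "overrides" layer on top of the procedural base), and selecting an object client-side just means regenerating or looking up its ID from the same inputs.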

    Read the article

  • Change collision action

    - by PatrickR
    I have collision detection and it's working fine. The problem is that whenever my "bird" hits a "cloud", the cloud disappears and I get some points. The same happens for the "sol" (sun), which it should, but it shouldn't happen with the clouds. How can this be changed? I've tried a lot, but can't seem to figure it out. Collision code:

        - (void)update:(ccTime)dt
        {
            bird.position = ccpAdd(bird.position, skyVelocity);

            NSMutableArray *projectilesToDelete = [[NSMutableArray alloc] init];

            for (CCSprite *bird in _projectiles) {
                bird.anchorPoint = ccp(0, 0);
                CGRect absoluteBox = CGRectMake(bird.position.x, bird.position.y,
                                                [bird boundingBox].size.width,
                                                [bird boundingBox].size.height);

                NSMutableArray *targetsToDelete = [[NSMutableArray alloc] init];

                for (CCSprite *cloudSprite in _targets) {
                    cloudSprite.anchorPoint = ccp(0, 0);
                    CGRect absoluteBox = CGRectMake(cloudSprite.position.x, cloudSprite.position.y,
                                                    [cloudSprite boundingBox].size.width,
                                                    [cloudSprite boundingBox].size.height);
                    if (CGRectIntersectsRect([bird boundingBox], [cloudSprite boundingBox])) {
                        [targetsToDelete addObject:cloudSprite];
                    }
                }

                for (CCSprite *solSprite in _targets) {
                    solSprite.anchorPoint = ccp(0, 0);
                    CGRect absoluteBox = CGRectMake(solSprite.position.x, solSprite.position.y,
                                                    [solSprite boundingBox].size.width,
                                                    [solSprite boundingBox].size.height);
                    if (CGRectIntersectsRect([bird boundingBox], [solSprite boundingBox])) {
                        [targetsToDelete addObject:solSprite];
                        score += 50/2;
                        [scoreLabel setString:[NSString stringWithFormat:@"%d", score]];
                    }
                }

                // WHEN THE CLOUD IS HIT BY THE BIRD
                for (CCSprite *cloudSprite in targetsToDelete) {
                    //[_targets removeObject:cloudSprite];
                    //[self removeChild:cloudSprite cleanup:YES];
                }

                // WHEN THE SUN IS HIT BY THE BIRD
                for (CCSprite *solSprite in targetsToDelete) {
                    [_targets removeObject:solSprite];
                    [self removeChild:solSprite cleanup:YES];
                }

                if (targetsToDelete.count > 0) {
                    [projectilesToDelete addObject:bird];
                }
                [targetsToDelete release];
            }

            // WHEN THE BIRD IS HIT BY ANYTHING ELSE
            for (CCSprite *bird in projectilesToDelete) {
                //[_projectiles removeObject:bird];
                //[self removeChild:bird cleanup:YES];
            }
            [projectilesToDelete release];
        }

    Read the article

  • Moving a 2D camera in the y direction

    - by Alex
    I'm developing a simple game for the iPhone and am struggling to work out the best way for the camera to follow the main character. The following picture highlights the three main components:

        Circle - the main character
        Green line - terrain
        Black background

    The terrain is simply made from an array of points (approx. 20 points per screen width). The terrain is moved in the x direction relative to the black background in order to keep the circle in the position shown. The distance to move the terrain is simply movex = circle.position.x - terrain.position.x, with a constant to fix the circle at some distance from the left of the screen. I am struggling to determine the best way to position the terrain in the y plane to keep the focus on the character. I want to move the terrain in the y direction smoothly and not fix it to the position of the circle, so the circle can move in the y plane. If I take the same approach as the x positioning, the character is fixed at a point on the screen and the terrain moves. I could sample some terrain points either side of the character and produce an average, but in my implementation this was not smooth. I thought another approach might be to create a camera 'line' that is a smooth version of the terrain line and make the camera follow this, but I'm not sure if this is the optimum solution. Any advice is much appreciated!
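
    A small sketch of the usual smoothing trick (names are mine): let the camera's y chase a target height, say the circle's y or a short average of nearby terrain points, with an exponential lerp, so it drifts toward the character but ignores high-frequency bumps.

        #include <cmath>

        // Call once per frame. 'smoothing' is roughly the fraction of the remaining
        // gap closed per second; the exp() form keeps it frame-rate independent.
        float updateCameraY(float cameraY, float targetY, float smoothing, float dt)
        {
            float t = 1.0f - std::exp(-smoothing * dt);
            return cameraY + (targetY - cameraY) * t;
        }

    This is effectively the "camera line" idea from the question: the camera follows a low-pass filtered version of the character or terrain height rather than the raw value, and the smoothing constant controls how lazy the follow is.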

    Read the article

  • MMORPG game balancing

    - by Gary Paluk
    I've seen a couple of examples of some game balancing techniques in books yet they are not comprehensive and not particularly aimed at MMORPGs but I'm looking for practical examples of game balancing techniques for MMORPGs. I am interested to know if anyone has documented the techniques used in popular games with proven success in this area. Ideally, any resource would cover most common types of stats and include layman mathematical models or techniques used to balance game mechanics found in advanced MMORPGs (I know it's a cliché, but WoW style) Any help would be great!

    Read the article

  • How do I change LOD in geometry?

    - by ChaosDev
    I'm looking for a simple LOD algorithm to modify geometry vertices and decrease frame time. I've created an octree, but now I want a vertex-modification algorithm for models or terrain, not to increase detail (I'll look at tessellation later) but to decrease it. I want something like this. Questions:

        1. Can the same algorithm be applied correctly to both models and terrain?
        2. Do the indices need to be modified?
        3. Must I use the octree, or is a simple distance check between the camera and the object enough for the desired effect?
        4. Is a new value of indexcount needed for the DrawIndexed function?

    Code:

        // m_LOD == 10 in the beginning
        // m_RawVerts - array of 3D vectors filled with values from the vertex buffer.
        void DecreaseLOD()
        {
            m_LOD--;
            if (m_LOD < 1) m_LOD = 1;
            RebuildGeometry();
        }

        void IncreaseLOD()
        {
            m_LOD++;
            if (m_LOD > 10) m_LOD = 10;
            RebuildGeometry();
        }

        void RebuildGeometry()
        {
            void* vertexRawData = new byte[m_VertexBufferSize];
            void* indexRawData = new DWORD[m_IndexCount];
            auto context = mp_D3D->mp_Context;

            D3D11_MAPPED_SUBRESOURCE data;
            ZeroMemory(&data, sizeof(D3D11_MAPPED_SUBRESOURCE));

            context->Map(mp_VertexBuffer->mp_buffer, 0, D3D11_MAP_READ, 0, &data);
            memcpy(vertexRawData, data.pData, m_VertexBufferSize);
            context->Unmap(mp_VertexBuffer->mp_buffer, 0);

            context->Map(mp_IndexBuffer->mp_buffer, 0, D3D11_MAP_READ, 0, &data);
            memcpy(indexRawData, data.pData, m_IndexBufferSize);
            context->Unmap(mp_IndexBuffer->mp_buffer, 0);

            DWORD* dwI = (DWORD*)indexRawData;
            int sz = (m_VertexStride / sizeof(float)); // size of vertex element

            // algorithm must be here.
            std::vector<Vector3d> vertices;
            int i = 0;
            for (int j = 0; j < m_VertexCount; j++)
            {
                float x1 = (((float*)vertexRawData)[0 + i]);
                float y1 = (((float*)vertexRawData)[1 + i]);
                float z1 = (((float*)vertexRawData)[2 + i]);
                Vector3d lv = Vector3d(x1, y1, z1);

                // my useless attempts
                if (j + m_LOD + 1 < m_RawVerts.size())
                {
                    float v1 = VECTORHELPER::Distance(m_RawVerts[dwI[j]], m_RawVerts[dwI[j + m_LOD]]);
                    float v2 = VECTORHELPER::Distance(m_RawVerts[dwI[j]], m_RawVerts[dwI[j + m_LOD + 1]]);
                    if (v1 > v2)
                        lv = m_RawVerts[dwI[j + 1]];
                    else if (v2 < v1)
                        lv = m_RawVerts[dwI[j + 2]];
                }

                (((float*)vertexRawData)[0 + i]) = lv.x;
                (((float*)vertexRawData)[1 + i]) = lv.y;
                (((float*)vertexRawData)[2 + i]) = lv.z;

                i += sz; // pass the other vertex format values through unchanged
            }

            for (int j = 0; j < m_IndexCount; j++)
            {
                // indices ?
            }

            // set vertices on the device
            UpdateVertexes(vertexRawData, mp_VertexBuffer->getSize());
            delete[] vertexRawData;
            delete[] indexRawData;
        }
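
    For the "octree or distance check?" part, the usual arrangement (an assumption about a typical engine, not the poster's code) is that LOD levels are precomputed once, each as its own index range or index buffer, and per frame only the level is chosen from the camera distance; the vertex buffer is not rewritten when the LOD changes, and DrawIndexed is simply called with that level's index count. A sketch:

        #include <cmath>
        #include <vector>

        struct LodLevel { unsigned indexCount; unsigned startIndex; float maxDistance; };

        // 'levels' sorted from most detailed (smallest maxDistance) to least detailed,
        // and assumed non-empty.
        const LodLevel& pickLod(const std::vector<LodLevel>& levels,
                                float objX, float objY, float objZ,
                                float camX, float camY, float camZ)
        {
            float dx = objX - camX, dy = objY - camY, dz = objZ - camZ;
            float dist = std::sqrt(dx * dx + dy * dy + dz * dz);
            for (const LodLevel& level : levels)
                if (dist <= level.maxDistance)
                    return level;   // feed level.indexCount / startIndex to DrawIndexed
            return levels.back();
        }

    Generating those reduced index sets is the hard part (edge-collapse style simplification for models, skipping rows/columns or chunked quadtrees for heightmap terrain), and the two cases are usually handled by different algorithms rather than one shared routine.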

    Read the article

  • How should I load level data in Java?

    - by Matthew G.
    I'm setting up my engine for a certain action/arcade game to have a set of commands that would look something like this:

        Set landscape to grass
        Create rocks at ...
        Create player at X, Y
        Set goal to "Get to point X Y"
        Spawn enemy at X, Y

    I'd then have each object knowing what it has to do and acting on its own. I've been thinking about how to store this data. External data files could be parsed by a level class, and certain objects could be spawned through that (see the sketch below). I could also create a base level class and extend it for each level, but that'd create a large number of classes. Another idea is to have one level parser class with a case for each level. This would be extremely silly and bulky, but I mention it because I found that I did this at 2 AM last night. I'm finally getting why I have to plan out my inheritance hierarchy, though. RIP project. I might be completely missing another option.
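
    A minimal sketch of the data-driven option (written in C++ to match the other sketches on this page; the structure maps one-to-one onto Java with BufferedReader or Scanner): one parser reads a plain-text level file of "command arguments" lines and dispatches to spawn functions, so each level is a data file rather than a class. The command names and spawn hooks here are made up for illustration.

        #include <fstream>
        #include <iostream>
        #include <sstream>
        #include <string>

        void loadLevel(const std::string& path)
        {
            std::ifstream file(path);
            std::string line;
            while (std::getline(file, line)) {
                std::istringstream words(line);
                std::string command;
                words >> command;

                if (command == "landscape") {          // e.g. "landscape grass"
                    std::string type; words >> type;
                    // setLandscape(type);
                } else if (command == "player") {      // e.g. "player 4 7"
                    float x, y; words >> x >> y;
                    // spawnPlayer(x, y);
                } else if (command == "enemy") {       // e.g. "enemy 10 3"
                    float x, y; words >> x >> y;
                    // spawnEnemy(x, y);
                } else if (!command.empty()) {
                    std::cerr << "Unknown level command: " << command << "\n";
                }
            }
        }

    With this approach there is exactly one level class, new levels need no new code, and the "goal" line becomes data the game state reads at load time.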

    Read the article

  • What purpose do armor points serve?

    - by Bane
    I have seen a mechanic which I call "armor points" in many games: Quake, Counter-Strike, etc. Generally, while the player has these armor points, he takes less damage. However, they act in a similar fashion to health points: you lose them by taking said damage. Why would you design such a feature? Is this just health 2.0, or am I missing something? To me, armor only makes sense in, for example, RPGs, where it is a constant that determines your resistance. But I don't see why it would need to be reducible during combat.
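
    For reference, the classic shooter rule (a common convention; the exact fractions vary by game) is that armor absorbs a fixed share of each hit until it runs out, so its value depends on how much incoming damage it ends up soaking. That conditional, decaying protection is what separates it from plain extra health. A sketch:

        struct PlayerVitals { int health; int armor; };

        // 'absorb' is the share of each hit the armor soaks, e.g. around 2/3 in
        // Quake-style games.
        void applyDamage(PlayerVitals& p, int damage, float absorb)
        {
            int soaked = (int)(damage * absorb);
            if (soaked > p.armor)        // armor can't soak more than it has left
                soaked = p.armor;
            p.armor  -= soaked;
            p.health -= (damage - soaked);
        }

    Design-wise this creates a second, separately replenished resource: stacking armor multiplies effective health only while it lasts, which gives armor pickups and item timing a strategic role that a single health pool would not.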

    Read the article

  • How do I keep a 3D model on the screen in OpenGL?

    - by NoobScratcher
    I'm trying to keep a 3D model on the screen by placing my glDrawElements calls inside the draw function, with the declarations at the top of the .cpp file. When I render the model, the model attaches itself to the current vertex buffer object. This is because my whole graphical user interface is in 2D quads, except the window frame. Is there a way to prevent this from happening, or are there common causes of this? Creating the file object:

        int index = IndexAssigner(1, 1);
        // make a file object and store the list and the index of that list in a C string
        ifstream file (list[index].c_str() );
        // Make another string
        //string line;
        points.push_back(Point());
        Point p;
        int face[4];

    Model rendering code:

        int numfloats = 4;
        float* point = reinterpret_cast<float*>(&points[0]);
        int num_bytes = numfloats * sizeof(float);
        cout << "Size Of Point" << sizeof(Point) << endl;

        GLuint vertexbuffer;
        glGenVertexArrays(1, &vao[3]);
        glGenBuffers(1, &vertexbuffer);
        glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
        glBufferData(GL_ARRAY_BUFFER, points.size()*sizeof(points), points.data(), GL_STATIC_DRAW);

        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, num_bytes, &points[0]);
        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(3, GL_FLOAT, points.size(), &points[0]);
        glEnableClientState(GL_INDEX_ARRAY);
        glIndexPointer(GL_FLOAT, faces.size(), faces.data());
        glEnableVertexAttribArray(0);

        glDrawElements(GL_QUADS, points.size(), GL_UNSIGNED_INT, points.data());
        glDrawElements(GL_QUADS, faces.size(), GL_UNSIGNED_INT, faces.data());
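
    A sketch of the usual way to stop a model from "attaching" to whatever buffer the 2D GUI left bound (assumed names, with GLEW standing in for whichever loader the project uses): give the model its own VAO with its own vertex and index buffers, record the attribute setup once, and bind only that VAO right before its draw call.

        #include <GL/glew.h>   // or whichever GL loader the project already uses

        GLuint modelVao = 0, modelVbo = 0, modelIbo = 0;

        void setupModel(const float* positions, GLsizeiptr posBytes,
                        const unsigned int* indices, GLsizeiptr idxBytes)
        {
            glGenVertexArrays(1, &modelVao);
            glBindVertexArray(modelVao);

            glGenBuffers(1, &modelVbo);
            glBindBuffer(GL_ARRAY_BUFFER, modelVbo);
            glBufferData(GL_ARRAY_BUFFER, posBytes, positions, GL_STATIC_DRAW);
            glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void*)0);
            glEnableVertexAttribArray(0);

            glGenBuffers(1, &modelIbo);
            glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, modelIbo);
            glBufferData(GL_ELEMENT_ARRAY_BUFFER, idxBytes, indices, GL_STATIC_DRAW);

            glBindVertexArray(0);          // the VAO now remembers this setup
        }

        void drawModel(GLsizei indexCount)
        {
            glBindVertexArray(modelVao);   // restores the buffers and attribute layout
            glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, (void*)0);
            glBindVertexArray(0);
        }

    Note that the snippet above mixes the old client-side glVertexPointer/glIndexPointer path with VBOs and passes CPU pointers to glDrawElements even though a buffer is bound; sticking to one path (VAO plus VBO plus an index buffer, with a byte offset instead of a pointer as the last glDrawElements argument) avoids exactly the cross-talk with the GUI quads described in the question.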

    Read the article

  • Calculate velocity of a bullet ricocheting off a circle

    - by SteveL
    I made a picture to demonstrate what I need: basically I have a bullet with a velocity and I want it to bounce at the correct angle after it hits a circle. Solved (see the accepted answer for an explanation):

        Vector.vector.set(bullet.vel);                 // -> v
        Vector.vector2.setDirection(pos, bullet.pos);  // -> n, normal from the circle's center to the bullet
        float dot = Vector.vector.dot(Vector.vector2); // -> dot product
        Vector.vector2.mul(dot).mul(2);
        Vector.vector.sub(Vector.vector2);
        Vector.vector.y = -Vector.vector.y;            // -> for some reason I had to invert the y
        bullet.vel.set(Vector.vector);
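
    For reference, what the code above implements is the standard reflection of the velocity v about the unit surface normal n (the normalized vector from the circle's centre to the impact point): r = v - 2(v.n)n. A minimal sketch with made-up names; n must be normalized, and the extra y-flip in the code above usually just compensates for a y-down screen coordinate convention.

        struct Vec2 { float x, y; };

        Vec2 reflect(Vec2 v, Vec2 n)   // n is assumed to be unit length
        {
            float d = v.x * n.x + v.y * n.y;          // v . n
            return Vec2{ v.x - 2.0f * d * n.x,
                         v.y - 2.0f * d * n.y };
        }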

    Read the article

  • How do I find the closest points (thereby forming a polygon) enclosing a particular point? (see image)

    - by nilspin
    I am working with a game engine, and my task is to add code for simulating the fracture of rigid meshes. Right now I'm only working on breaking a cube. I am using a Voronoi approach to make (realistic) fractured shards, and I am using the half-plane method to generate each Voronoi cell. The way I do this: for every seed point, I build the perpendicular bisector planes (the straight black lines in the image) against the rest of the seed points, and I calculate the intersections of all these planes, which gives me distinct points (all the orange dots). I've gotten this far. Out of all these calculated intersection points, I only need the ones that are closest to and enclose the seed point (the points encircled in red), and I need to discard all the rest. Information that I have: 1) the plane equations of all planes (defined by normalized normal vectors and their distance from the origin); 2) the points of intersection (that I've calculated). Can anybody help me figure out how to find the points encircled in red? Thanks.
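
    A sketch of the usual filter (types assumed): a candidate intersection point is a corner of the seed's Voronoi cell only if it lies on the seed's side of every bisector plane, within a small tolerance; everything else is discarded. This picks out exactly the "encircled in red" points.

        #include <vector>

        struct Vec3  { float x, y, z; };
        struct Plane { Vec3 n; float d; };   // points p on the plane satisfy dot(n, p) = d

        static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

        std::vector<Vec3> cellCorners(const Vec3& seed,
                                      const std::vector<Plane>& bisectors,
                                      const std::vector<Vec3>& candidates,
                                      float epsilon = 1e-4f)
        {
            std::vector<Vec3> corners;
            for (const Vec3& p : candidates) {
                bool inside = true;
                for (const Plane& pl : bisectors) {
                    // The seed itself defines which half-space counts as "inside".
                    float seedSide  = dot(pl.n, seed) - pl.d;
                    float pointSide = dot(pl.n, p)    - pl.d;
                    if (seedSide * pointSide < -epsilon) { inside = false; break; }
                }
                if (inside) corners.push_back(p);
            }
            return corners;
        }

    The surviving points are the cell's vertices; ordering them into faces then follows from which bisector planes generated each point.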

    Read the article

  • What is the best way to implement collision detection using the Bullet physics engine and a track generated from a curve?

    - by tigrou
    I am developing a small racing game where the track is generated from a curve. As said above, the track is generated, but not infinite. The track for one level fits in memory with no problem and will contain a reasonably small number of triangles. For collisions, I would like to use the Bullet physics engine and want to know the best way to handle collisions with the track efficiently. NOTE: the track will be stored as a static rigid body (mass = 0), and the player will be represented by a sphere shape for collisions. Here are some possibilities I have in mind:

        1. Create one rigid body, then put all triangles of the track (except non-collidable stuff) into it.
           Result: 1 body with many triangles (e.g. 30000 triangles). (A sketch of this option follows below.)
        2. Split the track into several sections (e.g. 10 sections), then for each section create a rigid body and put the corresponding triangles in it.
           Result: a small number of bodies, each with a relatively small number of triangles (e.g. 1500 triangles per section).
        3. Split the track into many sub-sections (e.g. 1200), where one sub-section = a very small step when generating the curve. Again, for each sub-section create a body and put its triangles in it.
           Result: many bodies with a very small number of triangles each (e.g. 20 triangles).
           Advantage: it would be possible to attach "extra data" to each sub-section that could be used when handling collisions.
        4. Same as 2, but only put sections N and N+1 in the physics engine (where N = the section the player is currently in). When the player reaches section N+1, unload section N and load section N+2, and so on.
           Issues: harder to implement, and problems arise if the player suddenly "jumps" from one section to another (e.g. the player flies off section N and falls onto section N+4 underneath: no collision is handled and the player falls into the void).
        5. Same as 4, but with many sub-sections.
           Issues: since sub-sections are very small, bodies would constantly be added to and removed from the physics engine at runtime, and the chance of the player accidentally skipping sections and falling into the void is higher than with 4.
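
    For a static track of this size, option 1 is usually sufficient in Bullet: a single btBvhTriangleMeshShape builds a bounding-volume hierarchy over the triangles, so per-contact cost does not scale with the whole mesh. A sketch with an assumed triangle layout:

        #include <btBulletDynamicsCommon.h>
        #include <vector>

        struct Tri { btVector3 a, b, c; };

        btRigidBody* createTrackBody(btDiscreteDynamicsWorld* world, const std::vector<Tri>& tris)
        {
            // The btTriangleMesh must stay alive for as long as the shape uses it.
            btTriangleMesh* mesh = new btTriangleMesh();
            for (const Tri& t : tris)
                mesh->addTriangle(t.a, t.b, t.c);

            btBvhTriangleMeshShape* shape =
                new btBvhTriangleMeshShape(mesh, /*useQuantizedAabbCompression=*/true);

            btDefaultMotionState* motion = new btDefaultMotionState(btTransform::getIdentity());
            btRigidBody::btRigidBodyConstructionInfo info(0.0f /*static*/, motion, shape);
            btRigidBody* body = new btRigidBody(info);

            world->addRigidBody(body);
            return body;
        }

    The per-section "extra data" from option 3 can often still be had without splitting bodies, for example by keeping a side table that maps triangle index ranges to track sections and consulting it from contact information; streaming sections in and out (options 4 and 5) only becomes worthwhile if the track ever stops fitting in memory.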

    Read the article

  • What does a game designer do? What skills do they need?

    - by xenoterracide
    I know someone who is thinking about getting into game design, and I wondered: what does the job of game designer entail? What tools do you have to learn to use? What unique skills do you need? What exactly would you do from day to day? I may be wording this a bit wrong because I'm not sure whether the college program is "become a game designer" or "learn game design", but I think the same questions apply either way.

    Read the article

  • Selling your iPhone games

    - by Artemix
    Hi. So, long story short: some days ago I published an iPhone game. I think the game wasn't that bad, to be honest, and still I got only 10 sales at $0.99. Are there any publishers, sponsors, or distributors to make your game "visible" on the App Store market, or is the only thing you need an amazing game and that's all? Somehow I think that even if you have an awesome game, if you don't do that "marketing magic" correctly you will not exist in the store. Now I'm making a second game, completely different, and I want to know how to do things right. If anyone knows something about this topic, let me know. Thanks in advance.

    Read the article

  • How do I design a game framework for fast reaction to user input?

    - by Miro
    I've played some games at around 30 fps and some of them had a low reaction time, roughly 0.1 s. I didn't know why. Now that I'm designing my framework for a cross-platform game, I know why: they were probably preparing the next frame while rendering the previous one.

        RENDER  1 | RENDER  2 | RENDER  3 | RENDER  4
        PREPARE 2 | PREPARE 3 | PREPARE 4 | PREPARE 5

    I see the first frame while the second frame is being rendered and the third frame is being prepared. If I react to the first frame at that time, the result appears in the fourth frame, so it takes 3/FPS seconds for results to appear. At 30 fps that would be 100 ms, which is quite bad. So I'm wondering: how should I design my framework so that it responds to user interaction quickly?
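
    A sketch of the loop structure that keeps input latency to one frame (not from the post; the callback names are placeholders): sample input immediately before the simulation step of the same frame, and don't let the CPU queue more than one prepared frame ahead of the renderer.

        #include <chrono>

        void runGameLoop(bool (*pollInputAndContinue)(), void (*update)(float dt), void (*render)())
        {
            using clock = std::chrono::steady_clock;
            auto previous = clock::now();

            bool running = true;
            while (running) {
                auto now = clock::now();
                float dt = std::chrono::duration<float>(now - previous).count();
                previous = now;

                running = pollInputAndContinue();  // sample input as late as possible...
                update(dt);                        // ...so this frame's simulation already uses it
                render();                          // the reaction shows up one present later, not three
            }
        }

    The multi-frame delay in the diagram comes from deep pipelining (preparing frame N+1 while N renders, plus driver and display buffering); capping the pre-rendered frame queue and polling input right before update keeps the "react on frame 1, see it on frame 4" case from happening.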

    Read the article

  • Marmalade SDK views navigation concepts

    - by Mina Samy
    I want to develop a simple multi-Activity Android game using the Marmalade SDK. I looked at the SDK samples and found that most of them are single-view (single Android Activity) apps. The navigation model in Android is a stack-like model where views are pushed and popped. I can't find out whether Marmalade has a similar model or uses a different approach. Would you please explain this to me and, if possible, point me to tutorials and samples for navigation in Marmalade? Thanks

    Read the article
