Search Results

Search found 25377 results on 1016 pages for 'development'.

Page 406/1016

  • Game Resource Generation

    - by Darthg8r
    I am currently building a game that has a "City" entity. These cities generate and consume resources, such as food, variably over a period of time. I need to be able to query the server often to find out exactly how much food a city has at any given point, and these queries can happen multiple times per minute. There could also be 400,000 cities to track at a given time. How would you handle tracking these resources? Would you do it in real time, keeping an instance of the city in memory on the server with some sort of snapshot-in-time of the resources, then computing the growth/consumption from that snapshot time for subsequent queries? Would you work exclusively with a database, using a similar snapshotting scheme? Maybe a mixture of the two, caching recently queried cities in memory for a period of time? There is also a lot of other data that each city needs to track: a player can queue units to build in a barracks, and the armies available in the city will need to be updated as units complete. I'm interested in everyone's input on where/when/how you'd manage the real-time data.
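
    One way to read the snapshot idea described above: store an amount, a net rate, and a timestamp per resource, and derive the current value on demand instead of ticking 400,000 cities. A minimal Java sketch (the names and the assumption of a linear rate between events are mine):

        // Hedged sketch: resources change linearly between rate-changing events,
        // so a query is O(1) arithmetic rather than a simulation tick.
        class CityResources {
            private double foodAtSnapshot; // amount at the last snapshot
            private double foodPerSecond;  // net production minus consumption
            private long snapshotMillis;   // when the snapshot was taken

            CityResources(double food, double rate, long nowMillis) {
                foodAtSnapshot = food;
                foodPerSecond = rate;
                snapshotMillis = nowMillis;
            }

            // Answer a query without touching stored state.
            double foodAt(long nowMillis) {
                double elapsedSeconds = (nowMillis - snapshotMillis) / 1000.0;
                return foodAtSnapshot + foodPerSecond * elapsedSeconds;
            }

            // Re-snapshot only when the rate changes (farm built, army queued...).
            void changeRate(double newRate, long nowMillis) {
                foodAtSnapshot = foodAt(nowMillis);
                foodPerSecond = newRate;
                snapshotMillis = nowMillis;
            }
        }

    Stored this way, the snapshot row also serializes naturally to a database, so the in-memory copy becomes a cache rather than the source of truth.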


  • Moving camera, or camera with discrete "screens"?

    - by Jacob Millward
    I'm making a game with a friend, but we're having trouble deciding on a camera style. The basic idea for the game is a randomly generated 2-dimensional world with settlements in it. These settlements have access to different resources, and it is the job of the player to create bridges, ladders, and links between the villages so they can trade. The player advances personally by getting better gear, fighting monsters, and looking for materials in the world, in order to craft and trade them at the settlements. My friend wants to use an old-style camera, where the world is split into a discrete number of screens that the player moves between, similar to early Zelda dungeons or Knytt Stories. I disagree: I want a standard camera that follows the player around, as I feel the split-screen style limits the game. Can anyone argue the case either way? We've hit a massive roadblock here and can't seem to get past it.


  • Timing Calculations for OpenGL ES 2.0 draw calls

    - by Arun AC
    I am drawing a cube in OpenGL ES 2.0 on Linux, and I am calculating the time taken for each frame using the function below:

        #define NANO             1000000000
        #define NANO_TO_MICRO(x) ((x)/1000)

        uint64_t getTick()
        {
            struct timespec stCT;
            clock_gettime(CLOCK_MONOTONIC, &stCT);
            uint64_t iCurrTimeNano = (uint64_t)NANO * stCT.tv_sec + stCT.tv_nsec; // in nanoseconds
            uint64_t iCurrTimeMicro = NANO_TO_MICRO(iCurrTimeNano);               // in microseconds
            return iCurrTimeMicro;
        }

    I am running my code for 100 frames with a simple x-axis rotation, and I am getting around 200 to 220 microseconds per frame. Does that mean I am getting around 1/220 microseconds = 4545 FPS? Is my GPU really that fast? I strongly doubt this result. What went wrong in the code? Regards, Arun AC


  • How to show a minimap in a 3d world

    - by Bubblewrap
    Got a really typical use case here. I have a large map made up of hexagons, and at any given time only a small section of the map is visible. To provide an overview of the complete map, I want to show a small 2D representation of the map in a corner of the screen. What is the recommended approach for this in libgdx? Keep in mind that the minimap must be updated when the currently visible section changes and when the map is updated. I've found SpriteBatch, but the warning label on it made me think twice: "A SpriteBatch is a pretty heavy object so you should only ever have one in your program." I'm not sure I'm supposed to use the one SpriteBatch that I can have on the minimap, and I'm also not sure how to interpret "heavy" in this context. Another thing to possibly keep in mind is that the minimap will probably be part of a larger UI... is there any way to integrate these two?
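
    For what it's worth, "heavy" refers mainly to the vertex buffers a SpriteBatch allocates, not to the cost of using it for several passes per frame, so the one shared batch can draw the minimap too: point it at a second, screen-space camera before the minimap pass. A rough libgdx-style sketch (how the small map texture itself is produced and updated is left out, and the names are my assumptions):

        import com.badlogic.gdx.Gdx;
        import com.badlogic.gdx.graphics.OrthographicCamera;
        import com.badlogic.gdx.graphics.Texture;
        import com.badlogic.gdx.graphics.g2d.SpriteBatch;

        public class MinimapRenderer {
            private final SpriteBatch batch;        // the one shared SpriteBatch
            private final OrthographicCamera uiCam; // screen-space camera for UI

            public MinimapRenderer(SpriteBatch sharedBatch) {
                batch = sharedBatch;
                uiCam = new OrthographicCamera();
                uiCam.setToOrtho(false, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
            }

            // Call after the world pass, with a small pre-rendered map texture.
            public void render(Texture mapTexture) {
                uiCam.update();
                batch.setProjectionMatrix(uiCam.combined); // switch to screen space
                batch.begin();
                batch.draw(mapTexture, 10f, 10f, 160f, 120f); // corner rectangle
                batch.end();
            }
        }

    The same screen-space pass is the natural home for the rest of the UI, which answers the integration question: the minimap is just one more quad drawn with the UI camera.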


  • Why did the old 3D games have "jittery" graphics?

    - by dreta
    I've been playing MediEvil lately, and it got me wondering: what causes some of the old 3D games to have "flowing" graphics when moving? It's present in games like Final Fantasy VII and MediEvil, and I remember Dungeon Keeper 2 having the same thing in zoom mode; however, Quake 2, for example, didn't have this "issue", and it's just as old. The resolution doesn't seem to be the problem; everything is rendered perfectly fine when you stand still. So is the game refreshing slowly, or does it have something to do with buffering?


  • How to create an executable JAR

    - by Siddharth
    When I try to create a jar file, I get the following error; please, can someone help me out of this?

        Exception in thread "LWJGL Application" com.badlogic.gdx.utils.GdxRuntimeException: Couldn't load file: img/black_ring.png
            at com.badlogic.gdx.graphics.Pixmap.<init>(Pixmap.java:137)
            at com.badlogic.gdx.graphics.glutils.FileTextureData.prepare(FileTextureData.java:55)
            at com.badlogic.gdx.graphics.Texture.load(Texture.java:175)
            at com.badlogic.gdx.graphics.Texture.create(Texture.java:159)
            at com.badlogic.gdx.graphics.Texture.<init>(Texture.java:133)
            at com.badlogic.gdx.graphics.Texture.<init>(Texture.java:122)
            at com.badlogic.runningball.UserBall.<init>(UserBall.java:19)
            at com.badlogic.runningball.GameScreen.<init>(GameScreen.java:25)
            at com.badlogic.runningball.RunningBall.create(RunningBall.java:12)
            at com.badlogic.gdx.backends.lwjgl.LwjglApplication.mainLoop(LwjglApplication.java:126)
            at com.badlogic.gdx.backends.lwjgl.LwjglApplication$1.run(LwjglApplication.java:113)
        Caused by: com.badlogic.gdx.utils.GdxRuntimeException: File not found: img/black_ring.png (Internal)
            at com.badlogic.gdx.files.FileHandle.read(FileHandle.java:108)
            at com.badlogic.gdx.files.FileHandle.length(FileHandle.java:364)
            at com.badlogic.gdx.files.FileHandle.readBytes(FileHandle.java:156)
            at com.badlogic.gdx.graphics.Pixmap.<init>(Pixmap.java:134)
            ... 10 more


  • Android Live Testing

    - by Matthew Dockerty
    I am making a game for Android, and in it I am using sensors, which are not available in the emulator. At the moment I connect my device, transfer the apk, and then install it to test, but that is a pain to do, and I have gotten to the stage where I need to start logging values for debugging. I have gone into the run configurations of my app and set it to prompt me to pick a device, but my device is never in the list when it is connected to my PC and I try to run the app. How am I supposed to set this up to work properly? Thanks for the help.


  • How to draw a global day/night curve

    - by Lumis
    I see many applications which have a world-clock map, and I would like to make my own to enhance some of my mobile apps. I wonder if anybody has any knowledge of where to start, and how to draw the curved shadow representing dawn and sunset on the globe. See the example: http://aa.usno.navy.mil/imagery/earth/map?year=2012&month=6&day=19&hour=14&minute=47 I think that this curve goes up and down over the year and creates the arctic day/night, etc. Perhaps there is some acceptable approximation formula, without the need to load data for each hour and each global parallel and meridian...
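
    For reference, there is a well-known first-order approximation (my sketch; it ignores the equation of time and atmospheric refraction, so expect errors of a degree or so): the Sun's declination on day N of the year is roughly delta = -23.44 deg * cos(360 deg * (N + 10) / 365), and a point at latitude phi lies on the day/night terminator when tan(phi) = -cos(H) / tan(delta), where H is the point's hour angle from the subsolar longitude. That yields one latitude per longitude, i.e. exactly the curve to draw:

        // Hedged sketch: approximate terminator latitude per longitude,
        // good enough for a small world-clock map, not for an almanac.
        public class Terminator {
            /** Approximate solar declination in degrees for a day of year (1..365). */
            static double declinationDeg(int dayOfYear) {
                return -23.44 * Math.cos(Math.toRadians(360.0 * (dayOfYear + 10) / 365.0));
            }

            /** Subsolar longitude in degrees from UTC hours (0..24); the Sun is
             *  taken to be overhead at longitude 0 at 12:00 UTC. */
            static double subsolarLonDeg(double utcHours) {
                return -15.0 * (utcHours - 12.0);
            }

            /** Terminator latitude in degrees at the given longitude. Near the
             *  equinoxes tan(declination) approaches 0 and the curve degenerates
             *  into a meridian, so callers should clamp the result. */
            static double terminatorLatDeg(double lonDeg, int dayOfYear, double utcHours) {
                double decl = Math.toRadians(declinationDeg(dayOfYear));
                double hourAngle = Math.toRadians(lonDeg - subsolarLonDeg(utcHours));
                return Math.toDegrees(Math.atan(-Math.cos(hourAngle) / Math.tan(decl)));
            }
        }

    Sampling terminatorLatDeg once per map column and shading the night side of the curve reproduces the up-and-down arcs in the linked image, including the arctic day/night behaviour near the solstices.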


  • Will a polled event system cause lag for a server?

    - by Milo
    I'm using a library called ENet, a reliable UDP library. The way it works is a polled event system like this:

        ENetEvent event;

        /* Wait up to 1000 milliseconds for an event. */
        while (enet_host_service (client, &event, 1000) > 0)
        {
            switch (event.type)
            {
            case ENET_EVENT_TYPE_CONNECT:
                printf ("A new client connected from %x:%u.\n",
                        event.peer->address.host, event.peer->address.port);
                /* Store any relevant client information here. */
                event.peer->data = "Client information";
                break;
            case ENET_EVENT_TYPE_RECEIVE:
                printf ("A packet of length %u containing %s was received from %s on channel %u.\n",
                        event.packet->dataLength, event.packet->data,
                        event.peer->data, event.channelID);
                /* Clean up the packet now that we're done using it. */
                enet_packet_destroy (event.packet);
                break;
            case ENET_EVENT_TYPE_DISCONNECT:
                printf ("%s disconnected.\n", event.peer->data);
                /* Reset the peer's client information. */
                event.peer->data = NULL;
            }
        }

    It waits up to 1000 milliseconds for an event. If I'm hosting, say, 75 event-driven card games and a lobby on the same thread as this code, will it cause any problems? If my understanding is correct, the process will simply sleep until there is an event; when there is one, it will process the event and then come back here, where potentially 5 or so events have queued up since, so enet_host_service would return right away and not cause lag. I have been advised not to use multiple threads; will that be alright like this? Thanks


  • Is there a maximum delay a UDP packet can have?

    - by Jens Nolte
    I am currently implementing a real-time network protocol for a multiplayer game using UDP. I am not having any technical difficulties, but since I always have to account for late UDP packets, I am wondering just how late they can arrive. I have researched the topic and have not found any mention of it, so I assume there is no technical limitation, but I wonder whether common network/internet architecture (or hardware) imposes an effective limit on how late a UDP packet can be delivered.


  • UV Atlas Generation and Seam Removal

    - by P. Avery
    I'm generating light maps for scene mesh objects using DirectX's UV atlas tool (D3DXUVAtlasCreate()). I've succeeded in generating an atlas; however, when I try to render the mesh object using the atlas, the seams are visible on the mesh. Below are images of a lightmap generated for a cube. Here is the code I use to generate a uv atlas for a cube:

        struct sVertexPosNormTex
        {
            D3DXVECTOR3 vPos, vNorm;
            D3DXVECTOR2 vUV;

            sVertexPosNormTex() {}
            sVertexPosNormTex( D3DXVECTOR3 v, D3DXVECTOR3 n, D3DXVECTOR2 uv )
            {
                vPos = v;
                vNorm = n;
                vUV = uv;
            }
            ~sVertexPosNormTex() {}
        };

        // create a light map texture to fill programmatically
        hr = D3DXCreateTexture( pd3dDevice, 128, 128, 1, 0, D3DFMT_A8R8G8B8,
                                D3DPOOL_MANAGED, &pLightmap );
        if( FAILED( hr ) )
        {
            DebugStringDX( "Main", "Failed to D3DXCreateTexture( lightmap )", __LINE__, hr );
            return hr;
        }

        // get the zero level surface from the texture
        IDirect3DSurface9 *pS = NULL;
        pLightmap->GetSurfaceLevel( 0, &pS );

        // clear surface
        pd3dDevice->ColorFill( pS, NULL, D3DCOLOR_XRGB( 0, 0, 0 ) );

        // load a sample mesh
        DWORD dwcMaterials = 0;
        LPD3DXBUFFER pMaterialBuffer = NULL;
        V_RETURN( D3DXLoadMeshFromX( L"cube3.x", D3DXMESH_MANAGED, pd3dDevice,
                                     &pAdjacency, &pMaterialBuffer, NULL,
                                     &dwcMaterials, &g_pMesh ) );

        // generate adjacency
        DWORD *pdwAdjacency = new DWORD[ 3 * g_pMesh->GetNumFaces() ];
        g_pMesh->GenerateAdjacency( 1e-6f, pdwAdjacency );

        // create light map coordinates
        LPD3DXMESH pMesh = NULL;
        LPD3DXBUFFER pFacePartitioning = NULL, pVertexRemapArray = NULL;
        FLOAT resultStretch = 0;
        UINT numCharts = 0;
        hr = D3DXUVAtlasCreate( g_pMesh, 0, 0, 128, 128, 3.5f, 0, pdwAdjacency,
                                NULL, NULL, NULL, NULL, NULL, 0,
                                &pMesh, &pFacePartitioning, &pVertexRemapArray,
                                &resultStretch, &numCharts );
        if( SUCCEEDED( hr ) )
        {
            // release and set mesh
            SAFE_RELEASE( g_pMesh );
            g_pMesh = pMesh;

            // write mesh to file
            hr = D3DXSaveMeshToX( L"cube4.x", g_pMesh, 0,
                                  ( const D3DXMATERIAL* )pMaterialBuffer->GetBufferPointer(),
                                  NULL, dwcMaterials, D3DXF_FILEFORMAT_TEXT );
            if( FAILED( hr ) )
            {
                DebugStringDX( "Main", "Failed to D3DXSaveMeshToX() at OnD3D9CreateDevice()", __LINE__, hr );
            }

            // fill the light map
            hr = BuildLightmap( pS, g_pMesh );
            if( FAILED( hr ) )
            {
                DebugStringDX( "Main", "Failed to BuildLightmap()", __LINE__, hr );
            }
        }
        else
        {
            DebugStringDX( "Main", "Failed to D3DXUVAtlasCreate() at OnD3D9CreateDevice()", __LINE__, hr );
        }

        SAFE_RELEASE( pS );
        SAFE_DELETE_ARRAY( pdwAdjacency );
        SAFE_RELEASE( pFacePartitioning );
        SAFE_RELEASE( pVertexRemapArray );
        SAFE_RELEASE( pMaterialBuffer );

    Here is the code to fill the lightmap texture:

        HRESULT BuildLightmap( IDirect3DSurface9 *pS, LPD3DXMESH pMesh )
        {
            HRESULT hr = S_OK;

            // validate lightmap texture surface and mesh
            if( !pS || !pMesh )
                return E_POINTER;

            // lock the mesh vertex buffer
            sVertexPosNormTex *pV = NULL;
            pMesh->LockVertexBuffer( D3DLOCK_READONLY, ( void** )&pV );

            // lock the mesh index buffer
            WORD *pI = NULL;
            pMesh->LockIndexBuffer( D3DLOCK_READONLY, ( void** )&pI );

            // get the lightmap texture surface description
            D3DSURFACE_DESC desc;
            pS->GetDesc( &desc );

            // lock the surface rect to fill with color data
            D3DLOCKED_RECT rct;
            hr = pS->LockRect( &rct, NULL, 0 );
            if( FAILED( hr ) )
            {
                DebugStringDX( "main.cpp:", "Failed to IDirect3DTexture9::LockRect()", __LINE__, hr );
                return hr;
            }

            // iterate the pixels of the lightmap texture;
            // check each pixel to see if it lies between the uv coordinates of a cube face
            BYTE *pBuffer = ( BYTE* )rct.pBits;
            for( UINT y = 0; y < desc.Height; ++y )
            {
                BYTE* pBufferRow = ( BYTE* )pBuffer;
                for( UINT x = 0; x < desc.Width * 4; x += 4 )
                {
                    // determine the pixel's uv coordinate
                    D3DXVECTOR2 p( ( ( float )x / 4.0f ) / ( float )desc.Width + 0.5f / 128.0f,
                                   y / ( float )desc.Height + 0.5f / 128.0f );

                    // for each face of the mesh,
                    // check whether the pixel lies within the face's uv coordinates
                    for( UINT i = 0; i < 3 * pMesh->GetNumFaces(); i += 3 )
                    {
                        sVertexPosNormTex v[ 3 ];
                        v[ 0 ] = pV[ pI[ i + 0 ] ];
                        v[ 1 ] = pV[ pI[ i + 1 ] ];
                        v[ 2 ] = pV[ pI[ i + 2 ] ];

                        if( TexcoordIsWithinBounds( v[ 0 ].vUV, v[ 1 ].vUV, v[ 2 ].vUV, p ) )
                        {
                            // the pixel lies b/t the uv coordinates of a cube face;
                            // light contribution functions aren't needed yet
                            //D3DXVECTOR3 vPos = TexcoordToPos( v[ 0 ].vPos, v[ 1 ].vPos, v[ 2 ].vPos,
                            //                                  v[ 0 ].vUV, v[ 1 ].vUV, v[ 2 ].vUV, p );
                            //D3DXVECTOR3 vNormal = v[ 0 ].vNorm;

                            // set the color of this pixel red ( for demo )
                            BYTE ba[] = { 0, 0, 255, 255, };
                            //ComputeContribution( vPos, vNormal, g_sLight, ba );

                            // copy the byte array into the light map texture
                            memcpy( ( void* )&pBufferRow[ x ], ( void* )ba, 4 * sizeof( BYTE ) );
                        }
                    }
                }
                // go to next line of the texture
                pBuffer += rct.Pitch;
            }

            // unlock the surface rect
            pS->UnlockRect();

            // unlock mesh vertex and index buffers
            pMesh->UnlockIndexBuffer();
            pMesh->UnlockVertexBuffer();

            // write the surface to file
            hr = D3DXSaveSurfaceToFile( L"LightMap.jpg", D3DXIFF_JPG, pS, NULL, NULL );
            if( FAILED( hr ) )
                DebugStringDX( "Main.cpp", "Failed to D3DXSaveSurfaceToFile()", __LINE__, hr );

            return hr;
        }

        bool TexcoordIsWithinBounds( const D3DXVECTOR2 &t0, const D3DXVECTOR2 &t1,
                                     const D3DXVECTOR2 &t2, const D3DXVECTOR2 &p )
        {
            // compute vectors
            D3DXVECTOR2 v0 = t1 - t0,
                        v1 = t2 - t0,
                        v2 = p - t0;

            float f00 = D3DXVec2Dot( &v0, &v0 );
            float f01 = D3DXVec2Dot( &v0, &v1 );
            float f02 = D3DXVec2Dot( &v0, &v2 );
            float f11 = D3DXVec2Dot( &v1, &v1 );
            float f12 = D3DXVec2Dot( &v1, &v2 );

            // compute barycentric coordinates
            float invDenom = 1 / ( f00 * f11 - f01 * f01 );
            float fU = ( f11 * f02 - f01 * f12 ) * invDenom;
            float fV = ( f00 * f12 - f01 * f02 ) * invDenom;

            // check if point is in triangle
            if( ( fU >= 0 ) && ( fV >= 0 ) && ( fU + fV < 1 ) )
                return true;

            return false;
        }

    [Screenshot and lightmap images omitted.]

    I believe the problem comes from the difference between the lightmap uv coordinates and the pixel center coordinates. For example, here are the lightmap uv coordinates (generated by D3DXUVAtlasCreate()) for a specific face (tri) within the mesh; keep in mind that I'm using the mesh uv coordinates to write the pixels of the texture:

        v[ 0 ].uv = D3DXVECTOR2( 0.003581, 0.295631 );
        v[ 1 ].uv = D3DXVECTOR2( 0.003581, 0.003581 );
        v[ 2 ].uv = D3DXVECTOR2( 0.295631, 0.003581 );

    The lightmap texture size is 128 x 128 pixels. The upper-left pixel center coordinates are:

        float halfPixel = 0.5f / 128.0f; // = 0.00390625
        D3DXVECTOR2 pixelCenter = D3DXVECTOR2( halfPixel, halfPixel );

    Will the mapping and sampling of the lightmap texture require that an offset be taken into account, or that the uv coordinates be snapped to the pixel centers? Any ideas on the best way to approach this situation would be appreciated. What are the common practices?
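
    For reference, one relevant piece of Direct3D 9 texture mapping (my note, not from the post): texel (x, y) of a W x H texture is sampled at its center, i.e. at

        u = (x + 0.5) / W,    v = (y + 0.5) / H

    so chart borders that run exactly along texel edges get averaged with whatever lies in the neighbouring texel by bilinear filtering. The usual practice is a combination of gutter space between charts (the fGutter parameter of D3DXUVAtlasCreate, 3.5f in the call above) and dilating each chart's border colors out into the gutter, rather than snapping the uv coordinates themselves.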


  • Drawing simple geometric figures with DrawUserPrimitives?

    - by Navy Seal
    I'm trying to draw a simple triangle based on an array of vertices. I've been searching for a tutorial, and I found a simple example on Riemer's, but I couldn't get it to work; I think it was made for XNA 3, and it seems there were some changes in XNA 4? Using this example: http://www.riemers.net/eng/Tutorials/XNA/Csharp/Series1/The_first_triangle.php I get this error:

        Additional information: The current vertex declaration does not include all the elements required by the current vertex shader. TextureCoordinate0 is missing.

    I'm not a native English speaker, so I'm having some trouble understanding everything. For what I understand, the error is confusing, because I'm trying to draw a triangle that is color-based, not texture-based, and it shouldn't need a texture. Also, I saw some articles about dynamic shadows and lights, and I would like to know if this is the kind of code used to do them, with some tweaks like culling, because I'm wondering if this kind of code is heavy on performance in real time.


  • Send less Server Data with "AFK"

    - by Oliver Schöning
    I am working on a 2D (real-time) multiplayer game with Construct 2 and a Socket.IO JavaScript server. Right now the code does not include the array for each player:

        var io = require("socket.io").listen(80);
        var x = 10;

        io.sockets.on("connection", function (socket) {
            socket.on("message", function (data) {
                x = x + 1;
            });
        });

        setInterval(function () {
            io.sockets.emit("message", 'Pos,' + x);
        }, 100);

    I noticed a very annoying problem with my server today. It sends my X coordinate every 100 milliseconds. The problem was that when I went into another browser tab, the browser stopped the game from running, and when I went back, I think the game had to run through all the queued packages: my offline debugging button still worked immediately, but the online button only responded after some seconds. So I changed my code so that it would only send out an update when it received player input:

        var io = require("socket.io").listen(80);
        var x = 10;

        io.sockets.on("connection", function (socket) {
            socket.on("message", function (data) {
                x = x + 1;
                io.sockets.emit("message", 'Pos,' + x);
            });
        });

    Now it updated immediately, even when I had been inactive on the browser tab for a long time, confirming my suspicion that it had to get through all the data. Please confirm! It would be insane to only send information on client input in a real-time game, but how would I write an AFK function? I would think it is easier to run an AFK boolean loop on the server. Here is what I need help with:

        playerArray[Me]
        if ( "Not given any input for X amount of seconds" ) {
            "Don't send data"
        } else {
            "Send data"
        }


  • How do I generate terrain like that of Scorched Earth?

    - by alex
    Hi, I'm a web developer and I am keen to start writing my own games. For familiarity, I've chosen JavaScript and the canvas element for now. I want to generate some terrain like that in Scorched Earth. My first attempt made me realise I couldn't just randomise the y value; there had to be some sanity in the peaks and troughs. I have Googled around a bit, but either I can't find something simple enough for me or I am using the wrong keywords. Can you please show me what sort of algorithm I would use to generate something like the example, keeping in mind that I am completely new to games programming (since making Breakout in 2003 with Visual Basic, anyway)?
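
    One classic fit for this kind of 1D terrain is midpoint displacement: pick random heights for the two ends, recursively displace each segment's midpoint by a random offset, and shrink the offset range at every level so the peaks and troughs stay sane. A hedged sketch (in Java rather than JavaScript, and the parameters are my guesses):

        import java.util.Random;

        // Midpoint displacement for a 1D heightmap of (2^n + 1) samples.
        public class Terrain {
            public static double[] generate(int n, double roughness, long seed) {
                int size = (1 << n) + 1; // e.g. n = 9 gives 513 columns
                double[] h = new double[size];
                Random rng = new Random(seed);

                h[0] = rng.nextDouble();        // random endpoint heights in [0, 1)
                h[size - 1] = rng.nextDouble();

                double range = 0.5;             // max displacement at this level
                for (int step = size - 1; step > 1; step /= 2) {
                    for (int i = 0; i + step < size; i += step) {
                        int mid = i + step / 2;
                        double avg = (h[i] + h[i + step]) / 2.0;
                        h[mid] = avg + (rng.nextDouble() * 2.0 - 1.0) * range;
                    }
                    range *= roughness;         // smaller bumps at finer scales
                }
                return h; // scale by the canvas height, then draw column fills
            }
        }

    With roughness around 0.5 the hills stay rolling; values closer to 1.0 give a craggier look. The same loop translates almost line for line to JavaScript over a plain array.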


  • Pixel alignment algorithm

    - by user42325
    I have a set of square blocks that I want to draw in a window. I am sure the coordinate calculation is correct, but on the screen some squares' edges overlap with others and some do not. I remember that the problem is caused by the accuracy of pixels, and that there is a specific topic related to this kind of problem in 2D image rendering, but I don't remember what exactly it is or how to solve it. Look at this screenshot: each block should have a fixed-width margin, but in the image the vertical white lines have different widths, though the horizontal lines look fine.
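
    If the cause is the usual one (my assumption: each block's float position and size are rounded independently, so the errors land in the gaps), the standard cure is to round every edge from the same exact expression, so neighbouring blocks agree on where an edge is. A Java sketch of the idea:

        // Hedged sketch: keep the margin a fixed integer and derive every
        // block edge from the exact cell position before rounding, so the
        // rounding error goes into block widths (+-1 px) instead of margins.
        class BlockLayout {
            static void layoutBlocks(int count, float cell /* block + margin */, float margin) {
                int marginPx = Math.round(margin);      // identical for every gap
                for (int i = 0; i < count; i++) {
                    int left  = Math.round(i * cell);   // exact position, rounded once
                    int next  = Math.round((i + 1) * cell);
                    int width = next - left - marginPx; // absorbs the +-1 px jitter
                    // drawRect(left, top, width, height); // margins now all equal
                }
            }
        }

    The general topic to search for is pixel snapping: round to the pixel grid once, at draw time, instead of accumulating already-rounded values.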


  • How to link subprograms to a main program's game loop?

    - by Jim
    I recently discovered Crobot, which is (briefly) a game where each player codes a virtual robot in a pseudo-C language. Each robot is then put in an arena, where it fights against the other robots. A robot's source code has this shape:

        /* Beginning file robot.r */
        main()
        {
            while (1)
            {
                /* Do whatever you want */
                ...
                move();
                ...
                fire();
            }
        }
        /* End file robot.r */

    You can see that:

        - The code is totally independent of any library/include
        - Some predefined functions are available (move, fire, etc.)
        - The program has its own game loop and, consequently, is not called every frame

    My question is, roughly: how does it work? It seems that each robot's code is compiled by the main program and then used in a way I cannot understand. I thought it could spawn a thread for each robot, but I don't have any proof of this, and it seems a bit complicated to achieve. Any idea how it could work, someone?
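
    A common design for this kind of game (my assumption about how such systems are typically built, not a claim about Crobot's actual source): each robot program is compiled to bytecode for a small virtual machine, and the host runs every robot's VM for a fixed instruction budget per game tick. Each robot then "owns" its infinite loop without any OS threads, and built-ins like move() and fire() are simply VM instructions that write into arena state. A Java sketch:

        import java.util.List;

        // Hypothetical robot VM: step() executes one bytecode instruction and
        // returns false once the program halts.
        interface RobotVm {
            boolean step();
        }

        class Arena {
            private static final int BUDGET = 500; // instructions per robot per tick
            private final List<RobotVm> robots;

            Arena(List<RobotVm> robots) {
                this.robots = robots;
            }

            // One game tick: round-robin, each robot gets the same time slice,
            // so an infinite while(1) loop just keeps consuming its budget.
            void tick() {
                for (RobotVm robot : robots) {
                    for (int i = 0; i < BUDGET && robot.step(); i++) {
                        // the VM advances its own program counter inside step()
                    }
                }
                // ...then resolve movement, shots, and collisions for this tick.
            }
        }

    This would also explain why the robot code needs no includes: the compiler only has to know the handful of built-ins the VM exposes.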


  • Using unordered_multimap as entity and component storage

    - by natebot13
    The Setup

    I've made a few games (more like animations) using the object-oriented method, with base classes for objects that extend them, and objects that extend those, and found I couldn't wrap my head around expanding that system to larger game ideas. So I did some research and discovered the entity-component system of designing games. I really like the idea, and I thoroughly understood the usefulness of it after reading Byte54's perfect answer here: Role of systems in entity systems architecture.

    With that said, I have decided to create my current game idea using the described entity-component system. Having basic knowledge of C++ and SFML, I would like to implement the backbone of this entity-component system using an unordered_multimap, without classes for the entities themselves. Here's the idea: an unordered_multimap stores entity IDs as the lookup term, while the value is an inherited Component object. Example:

        ------------------
        | ID | Component  |
        ------------------
        | 0  | Movable    |
        | 0  | Accelable  |
        | 0  | Renderable |
        | 1  | Movable    |
        | 1  | Renderable |
        | 2  | Renderable |
        ------------------

    So, according to this map of objects, the entity with ID 0 has three components: Movable, Accelable, and Renderable. These component objects store the entity-specific data, such as the location, the acceleration, and render flags. The entity is simply an ID, with the components attached to that ID describing its attributes.

    The Problem

    I want to store the component objects within the map, allowing the map to have full ownership of the components. The problem I'm having is that I don't quite understand enough about pointers, shared pointers, and references to get that set up. How can I go about initializing these components, with their various member variables, within the unordered_multimap? Can the base component class take on the member variables of its child classes when the map is defined as unordered_multimap<int, component>?

    Requirements

    I need a system to be able to grab an entity, with all of its attached components, and access members of the components in order to do the necessary calculations and reassignments for position, velocity, etc.

    Need a clarification? Post a comment with your concerns and I will gladly edit or comment back! Thanks in advance! natebot13


  • How does the AlphaBlend BlendState work in XNA when accumulating light into a RenderTarget?

    - by cubrman
    I am using a deferred rendering engine from Catalin Zima's tutorial. His lighting shader returns the color of the light in the rgb channels and the specular component in the alpha channel. Here is how light gets accumulated:

        Game.GraphicsDevice.SetRenderTarget(LightRT);
        Game.GraphicsDevice.Clear(Color.Transparent);
        Game.GraphicsDevice.BlendState = BlendState.AlphaBlend;
        // Continuously draw 3d spheres with the lighting pixel shader.
        ...
        Game.GraphicsDevice.BlendState = BlendState.Opaque;

    MSDN states that the AlphaBlend field of the BlendState class uses the following formula for alpha blending:

        (source x Blend.SourceAlpha) + (destination x Blend.InvSourceAlpha)

    where "source" is the color of the pixel returned by the shader and "destination" is the color of the pixel in the render target. My question is: why are my colors accumulated correctly in the light render target even when the new pixels' alphas equal zero? As a quick sanity check I ran the following code in the light's pixel shader:

        float specularLight = 0;
        float4 light4 = attenuation * lightIntensity * float4(diffuseLight.rgb, specularLight);
        if (light4.a == 0)
            light4 = 0;
        return light4;

    This prevents lighting from being accumulated and, subsequently, drawn on the screen. But when I do the following:

        float specularLight = 0;
        float4 light4 = attenuation * lightIntensity * float4(diffuseLight.rgb, specularLight);
        return light4;

    the light is accumulated and drawn exactly where it needs to be. What am I missing? According to the formula above, (source x 0) + (destination x 1) should equal destination, so the LightRT render target must not change when I draw light spheres into it! It feels like the GPU is using additive blending instead:

        (source x Blend.One) + (destination x Blend.One)


  • How do I implement collision detection with a sprite walking up a rocky-terrain hill?

    - by detectivecalcite
    I'm working in SDL and have bounding rectangles for collisions set up for each frame of the sprite's animation. However, I recently stumbled upon the issue of collisions for characters walking up and down hills/slopes with irregularly curved or rocky terrain. What's a good way to do collisions for that type of situation? Per-pixel? Loading up the points of the incline and doing player-line collision checks? Should I use bounding rectangles in general, or circle collision detection?
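
    For walkable ground specifically, one lightweight alternative to full per-pixel collision (a sketch under the assumption that the terrain can be baked into a "highest solid pixel per column" table) is to sample the ground height under the sprite's feet and snap the sprite to it; that follows arbitrary rocky profiles without any line-segment math:

        // Hedged sketch: the level is preprocessed into one surface height per
        // x column (e.g. the topmost solid pixel of the terrain image).
        class Ground {
            private final int[] surfaceY; // surfaceY[x] = ground y at column x

            Ground(int[] surfaceY) {
                this.surfaceY = surfaceY;
            }

            int heightAt(int x) {
                x = Math.max(0, Math.min(surfaceY.length - 1, x));
                return surfaceY[x];
            }

            // Resolve a sprite whose feet are at (footX, footY), y growing down:
            // never let the feet sink below the terrain surface.
            int resolveFootY(int footX, int footY) {
                return Math.min(footY, heightAt(footX));
            }
        }

    Bounding rectangles (or a circle for the feet) then remain perfectly adequate for everything that isn't terrain, such as enemies and projectiles.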


  • Why does the location of my vehicle spawner change when I open a matinee?

    - by Gareth Jones
    I'm doing work with InterActors and vehicle spawners in Unreal Tournament 3's editor, and I have it set up like so (the walkway it's on is the InterActor): [screenshot omitted] However, if I go into Kismet and open the matinee that handles the InterActor, this happens: [screenshot omitted] It does look to me like the editor is moving it out of the way so I can see the InterActor (which would be very clever), because only the image of the vehicle moves, not the gizmo, nor does the vehicle spawn in that location in game. Is this the case?


  • Do Apple and Google ask for a share if custom payment is done in a free app?

    - by user1590354
    I have a multiplatform game (web/iOS/Android) in the making. In the free version the core game is still fully playable, but people who choose to pay will get more social features (and no ads, of course). I was thinking that, rather than having a free and a paid version for every platform, I might release the apps for free, and if the users want more, they have to register and pay a one-time fee (through a payment gateway or PayPal). The extra content would then be available in all the clients they have access to. Theoretically, this means better value for the players and less maintenance and headache for me (obviously I have to handle all the payment troubles myself). Does this fit into the business model of Apple/Google? Or will they still claim their share of the registration fee?


  • XNA - Moving Background Calculations

    - by Jesse Emond
    Hi, my question is relatively hard to explain (for me, at least), so I'll go one step at a time; just tell me in the comments if anything is unclear. I'm making a "Defend Your Castle"-type 2D game, where two players own a castle and create units that move horizontally to try to destroy the opponent's base. Here's a screenshot of the game: [screenshot omitted] The distance between both castles is much bigger in a real game, though; bigger than the screen's width, actually. Because the distance is bigger than the screen's width, I had to implement a simple 2D camera: Camera2D, which only holds a Location Vector2 (and I always make sure this camera stays within the field area). Then I just move all the game elements (castles, units, health bars) by that location, so that if a unit is at (5, 0) and the camera's location is (5, 0), then the unit's position is moved 5 units to the left, making it (0, 0) on the screen. At first, I simply used a static background with mountains and clouds (yeah, those are supposed to be mountains and clouds). Obviously, this looked awful: when you moved the camera, the background stayed immobile. Instead, I'd like to make a scrolling background, but rather than making a background with the same width as the distance between the castles, I'd like to make one that is a little bit smaller (but still bigger than the screen's width). I thought this would create an effect of distance with the background (but it might just look awful, too). Here's the background I'm testing with: [image omitted] I tried different ways, but none of them seems to work. I tried this:

        float backgroundFieldRatio = BackgroundTexture.Width / fieldWidth; // ratio between the background and the field
        float backgroundPositionX = -cam.Location.X * backgroundFieldRatio; // move the background to the left

    When I run this with fieldWidth = 1600 and BackgroundTexture.Width = 1500 and look at the rightmost area, the background is offset to the left by too big an amount, and we can see the black clear color behind it, as you can see here: [screenshot omitted] I hope I explained properly what I'm trying to achieve. Thank you for your time. Note: I didn't know what to look for on Google, so I thought I'd ask here.
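
    For reference, a standard parallax mapping (my derivation, under the assumption that the background's left edge should line up when the camera is at the far left of the field and its right edge when the camera is at the far right): the camera can travel fieldWidth - screenWidth pixels while the background may only travel backgroundWidth - screenWidth, so the two travel ranges, not the two total widths, give the ratio. A sketch:

        // Hedged sketch: background scrolls slower than the camera yet spans
        // the whole field; assumes wBg > wScreen and wField > wScreen.
        class Parallax {
            static float backgroundX(float camX, float wBg, float wField, float wScreen) {
                float ratio = (wBg - wScreen) / (wField - wScreen); // < 1: parallax
                return -camX * ratio; // edges align at both ends of the field
            }
        }

    With the question's numbers and, say, an 800-pixel screen, the ratio becomes (1500 - 800) / (1600 - 800) = 0.875 instead of 1500 / 1600 = 0.9375, which is exactly the overshoot that exposes the clear color on the right.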


  • Push or Pull Input Data In the Game Logic?

    - by Qua
    In the process of preparing my game for networking, I'm adding a layer of separation between the physical input (mouse/keyboard) and the actual game engine/logic. All input that has any relation to the game logic is wrapped inside action objects such as BuildBuildingAction. I was thinking of having an action-processing layer that would determine what to do with the input. This layer could then be set up either to pass the actions locally to the game engine or to send them via sockets to the network server, depending on whether the game is single- or multiplayer. In network games it makes sense that the player's actions are sent to the server, but should the game logic be pulling (polling?) the data through some sort of interface, or should the action-processing layer be adding the actions to an input queue in the game-logic code?
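
    Either way, the seam between the layers can be as small as a thread-safe queue: the action-processing layer pushes into it (from local input or from the socket reader), and the game logic drains it once per tick, never knowing where the actions came from. A minimal Java sketch (the names are mine):

        import java.util.concurrent.ConcurrentLinkedQueue;

        interface GameAction {
            void apply(); // e.g. BuildBuildingAction mutates the game state here
        }

        class ActionQueue {
            private final ConcurrentLinkedQueue<GameAction> pending =
                    new ConcurrentLinkedQueue<>();

            // Producer side: called by the action-processing layer, the only
            // place that knows whether the source is local or the network.
            void push(GameAction action) {
                pending.add(action);
            }

            // Consumer side: called at the start of each game-logic tick.
            void drain() {
                GameAction action;
                while ((action = pending.poll()) != null) {
                    action.apply();
                }
            }
        }

    This is the "push into an input queue" variant; the game logic still controls exactly when actions take effect, which is what a deterministic simulation usually wants.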


  • Achieving more fluent movement

    - by Robin92
    I'm working on my first OpenGL 2D game and I've just locked the framerate. However, the way objects move is far from satisfying: they tend to lag, which is shown in this video. I've been thinking about how more fluent animation can be achieved, and started getting segmentation faults due to accessing the same object from two different threads. I've tried the following thread layout:

        Thread 1: drawing, creating new objects
        Thread 2: moving the player, moving objects, deleting objects

    Currently my application uses this layout:

        Thread 1: drawing, creating new objects, moving objects, deleting objects
        Thread 2: moving the player

    Any ideas would be appreciated. EDIT: I've tried increasing the FPS limit, but the lag is noticeable even at 200 fps.
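
    For what it's worth, stutter at a locked framerate is more often a timing problem than a throughput problem, and the common single-threaded cure (which also removes the shared-object segfaults) is a fixed-timestep update with render interpolation, in the style of the well-known "Fix Your Timestep" loop. A sketch with my own names:

        // Single-threaded loop: simulate in fixed 10 ms steps, render as often
        // as possible, and blend the last two states for smooth motion.
        public class GameLoop {
            static final double DT = 0.010; // seconds per simulation step

            public static void run() {
                double accumulator = 0.0;
                long previous = System.nanoTime();

                while (true) {
                    long now = System.nanoTime();
                    accumulator += (now - previous) / 1_000_000_000.0;
                    previous = now;

                    while (accumulator >= DT) { // catch up in whole steps
                        update(DT);             // move player and objects here
                        accumulator -= DT;
                    }
                    double alpha = accumulator / DT; // 0..1 between two states
                    render(alpha);                   // draw blended positions
                }
            }

            static void update(double dt) { /* all game-state mutation */ }
            static void render(double alpha) { /* read-only drawing */ }
        }

    Because updating and drawing share one thread, objects can be created and deleted without locks, and the interpolation hides the step boundaries that read as lag.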


  • Efficient mapping layout in 2D side-scroller, and collisions between character and the world

    - by Jack
    I haven't touched Visual Studio for a couple of months now, but I was playing a game from the '90s today and had an epiphany: I was looking for something I didn't need, and I wasn't using what I knew correctly. One of those realizations was about collision, so let me tell you a bit about the project I was working on. The project's graphics look like Mario or Dangerous Dave, etc.; you get the idea, old-school pixels. Anyway, I remember trying to think of something other than an AABB for the character's shape, but I couldn't come up with anything; perhaps I could get a suggestion for this? Another thing is the world: I don't want it to be just a linear world, I want mountains, etc. My idea is to use triangles, but I have no idea yet what to do if I want just part of a cube, say 3/4 or 2/4 or whatever; hard-coding such things seems inefficient. P.S. I am not looking at the precision level offered by Box2D. Actually, I remember trying to implement it at first, but I failed, as my understanding of C++ wasn't advanced enough, as mentioned below. P.P.S. I am programming in C++, and I haven't done it for a couple of months now. I have no means of testing either, as my PC is broken down, and this one can barely run games from the late '90s, not to speak of a compiler or a program with inefficient resource management... I am also not an expert (obviously); I don't even know if I can consider myself an average programmer. In short, I am simply curious about my thoughts and my past experience from programming the game. I may come back to it when my PC is fixed; I'm already keeping a note about these things.

