Search Results

Search found 32375 results on 1295 pages for 'dnn module development'.

  • OpenGL flickering near the edges

    - by Daniel
    I am trying to simulate particles moving around the scene, with OpenCL for computation and OpenGL for rendering with GLUT. There is no OpenCL-OpenGL interop yet, so the drawing is done in the older fixed-pipeline way. Whenever circles get close to the edges, they start to flicker. The drawing should place part of the circle at the top of the scene and part at the bottom. The effect is the following: the balls you see at the bottom should appear partly at the bottom and partly at the top (wrapping around the scene, so to say), but they constantly flicker. The code for drawing them is:

        void Scene::drawCircle(GLuint index){
            glMatrixMode(GL_MODELVIEW);
            glLoadIdentity();
            glTranslatef(pos.at(2*index), pos.at(2*index+1), 0.0f);
            glBegin(GL_TRIANGLE_FAN);
            GLfloat incr = (2.0 * M_PI) / (GLfloat) slices;
            glColor3f(0.8f, 0.255f, 0.26f);
            glVertex2f(0.0f, 0.0f);
            glColor3f(1.0f, 0.0f, 0.0f);
            for(GLint i = 0; i <= slices; ++i){
                GLfloat x = radius * sin((GLfloat) i * incr);
                GLfloat y = radius * cos((GLfloat) i * incr);
                glVertex2f(x, y);
            }
            glEnd();
        }

    If it helps, this is the reshape method:

        void Scene::reshape(GLint width, GLint height){
            if(0 == height)
                height = 1; // prevent division by zero
            glViewport(0, 0, width, height);
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            gluOrtho2D(xmin, xmax, ymin, ymax);
            std::cout << xmin << " " << xmax << " " << ymin << " " << ymax << std::endl;
        }
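
    A minimal sketch of one common fix, assuming a world spanning [xmin, xmax] x [ymin, ymax] as in the reshape code above: draw each circle once at its own position and again offset by the world size whenever it overlaps an edge, so both halves are rendered explicitly rather than left to clipping. drawCircleAt is a hypothetical helper that runs the GL_TRIANGLE_FAN code above at an explicit position.

        void drawCircleAt(GLfloat x, GLfloat y, GLfloat radius); // assumed helper

        void drawCircleWrapped(GLfloat cx, GLfloat cy, GLfloat radius,
                               GLfloat xmin, GLfloat xmax,
                               GLfloat ymin, GLfloat ymax)
        {
            const GLfloat w = xmax - xmin;
            const GLfloat h = ymax - ymin;
            for (int dx = -1; dx <= 1; ++dx) {
                for (int dy = -1; dy <= 1; ++dy) {
                    GLfloat x = cx + dx * w;
                    GLfloat y = cy + dy * h;
                    // Skip the copies that cannot touch the visible region.
                    if (x + radius < xmin || x - radius > xmax ||
                        y + radius < ymin || y - radius > ymax)
                        continue;
                    drawCircleAt(x, y, radius);
                }
            }
        }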

  • How can I solve this SAT edge case?

    - by ssb
    I have an SAT implementation that basically works, and the fact that it works is what's giving me a few headaches. There are some situations where using the SAT doesn't quite give me my intended result. One of these involves movement across multiple collision objects: if I have several collision boxes lined up next to each other to create something like a wall or a floor, movement along that surface while constantly applying force into it sometimes causes hang-ups, i.e. the player stops moving. This illustration shows what I mean: the two boxes on the bottom represent a floor, and the box on top, in the middle, represents what my player is doing. There are several squares lined up as world obstacles to create a kind of wall, and if I move to the left across this surface while holding the down key, the issue arises. It only happens at the exact dividing point between two blocks, and only when moving to the left.

    At any rate, I think I know why it happens, but I don't know how to solve it. When I update my player movement I consider which directions are pressed, naturally, so if down is pressed I add the speed to the Y component, and so on. But due to the way my SAT is implemented, when the penetration into the shape is the same on both axes, it just goes with the smallest axis it finds first, and it checks collisions against objects in the order they were created, because it iterates over the list of collidable objects in a foreach loop. This all adds up to the following effect: if I'm moving to the left over a series of boxes while holding down, it resolves me back to the right out of the first box and then up out of the box to the right of it, and this continues as long as the penetration is the same. The odd part is that this doesn't happen every time, which I attribute to some oddity in multiplying velocity by the game time, causing minor discrepancies between the penetration depths. Ultimately it keeps resolving me to the right and up, but this is technically expected behavior.

    All the solutions I can think of only address the symptoms of this problem and not the actual cause, such as not using many blocks to create walls or shapes, which is an option I'd like to keep open. I could also change which axis my algorithm defaults to, but that would just cause problems when moving up or down along walls. What can I do to fix this?
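
    One common workaround, sketched below under the assumption of a tile-grid world: before resolving along an axis, check whether the pushout would act across an edge that two solid tiles share. Such an internal edge can never be a real contact face, so it is skipped and the other axis is used. isSolid is a hypothetical tile query; only the horizontal case is shown, and the vertical case is symmetric.

        #include <cmath>

        bool isSolid(int tx, int ty); // hypothetical: is the tile at (tx, ty) solid?

        struct Pushout { float x, y; };

        Pushout pickResolutionAxis(float penX, float penY, int tx, int ty)
        {
            // A horizontal pushout toward a neighbouring solid tile crosses
            // an internal edge, so it can never be the correct resolution.
            bool leftBlocked  = penX < 0 && isSolid(tx - 1, ty);
            bool rightBlocked = penX > 0 && isSolid(tx + 1, ty);
            if (leftBlocked || rightBlocked || std::fabs(penY) < std::fabs(penX))
                return Pushout{0.0f, penY}; // resolve vertically
            return Pushout{penX, 0.0f};     // resolve horizontally
        }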

  • Transformation matrix that maps a window

    - by gbhall
    I'm currently learning OpenGL at uni, and they give us questions to help us learn (these are not worth anything); however, I'm stuck on this one question and would have to travel over an hour and a half to uni for an answer. How do I do this question? Please include as many steps as you can; I want to be able to follow exactly how it is done. Find the transformation that maps a window whose lower-left corner is at (1,1) and upper-right corner is at (3,5) onto:

    1. the entire device screen, whose dimension is (600, 500);
    2. a viewport with lower-left corner at (100,100) and upper-right corner at (400,400).

    Edit: Damn, sorry, I should have added that I am meant to find the matrix, so no code.
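
    For reference, a worked sketch of the standard derivation (conventions vary between courses, so treat this as one possibility): translate the window's lower-left corner to the origin, scale by the ratio of extents, then translate to the viewport's lower-left corner. In homogeneous 2D coordinates:

        \[
        M = T(v_{\min})\,S(s_x, s_y)\,T(-w_{\min}),\qquad
        s_x = \frac{v_{x\max} - v_{x\min}}{w_{x\max} - w_{x\min}},\quad
        s_y = \frac{v_{y\max} - v_{y\min}}{w_{y\max} - w_{y\min}}
        \]

    For the window (1,1)-(3,5), case 1 (screen (0,0)-(600,500)) gives s_x = 600/2 = 300 and s_y = 500/4 = 125, while case 2 (viewport (100,100)-(400,400)) gives s_x = 300/2 = 150 and s_y = 300/4 = 75:

        \[
        M_1 = \begin{pmatrix} 300 & 0 & -300 \\ 0 & 125 & -125 \\ 0 & 0 & 1 \end{pmatrix},
        \qquad
        M_2 = \begin{pmatrix} 150 & 0 & -50 \\ 0 & 75 & 25 \\ 0 & 0 & 1 \end{pmatrix}
        \]

    As a check, M_1 maps (1,1) to (0,0) and (3,5) to (600,500); M_2 maps (1,1) to (100,100) and (3,5) to (400,400).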

  • Multi Pass Blend

    - by Kirk Patrick
    I am seeking the simplest working example of a two-pass HLSL pixel shader. It can do anything, really, but the main idea is to perform "ping-ponging": take the output of the first pass and then send it through a second pass. In my example I want to draw to the R channel, then draw to the G channel, and produce a simple Venn diagram in the shader, but I need to detect overlap. I can currently detect one circle or the other, but not the overlap. There are red and green circles overlapping, and I want to put a dynamic texture map in the overlap region; I can currently put it in either one, but not both. Below is how it looks in the shader:

        Texture2D shaderTexture;
        SamplerState SampleType;

        //////////////
        // TYPEDEFS //
        //////////////
        struct PixelInputType
        {
            float4 position : SV_POSITION;
            float2 tex0 : TEXCOORD0;
            float2 tex1 : TEXCOORD1;
            float4 color : COLOR;
        };

        ////////////////////////////////////////////////////////////////////////////////
        // Pixel Shader
        ////////////////////////////////////////////////////////////////////////////////
        float4 main(PixelInputType input) : SV_TARGET
        {
            float4 textureColor0;
            float4 textureColor1;

            // Sample the pixel color from the texture using the sampler at this texture coordinate location.
            textureColor0 = shaderTexture.Sample(SampleType, input.tex0);
            textureColor1 = shaderTexture.Sample(SampleType, input.tex1);

            if (input.color[0] == 1.0f && input.color[1] == 1.0f) // requires multi-pass
                textureColor0 = textureColor1;

            return textureColor0;
        }

    Here is the calling code (that needs to be modified):

        m_d3dContext->IASetVertexBuffers(0, 2, vbs, strides, offsets);
        m_d3dContext->IASetIndexBuffer(m_indexBuffer.Get(), DXGI_FORMAT_R32_UINT, 0);
        m_d3dContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
        m_d3dContext->IASetInputLayout(m_inputLayout.Get());
        m_d3dContext->VSSetShader(m_vertexShader.Get(), nullptr, 0);
        m_d3dContext->VSSetConstantBuffers(0, 1, m_constantBuffer.GetAddressOf());
        m_d3dContext->PSSetShader(m_pixelShader.Get(), nullptr, 0);
        m_d3dContext->PSSetShaderResources(0, 1, m_SRV.GetAddressOf());
        m_d3dContext->PSSetSamplers(0, 1, m_QuadsTexSamplerState.GetAddressOf());
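
    A sketch of the usual two-pass "ping-pong" flow in D3D11, assuming m_offscreenRTV and m_offscreenSRV are views created over the same off-screen texture (hypothetical members; creation code omitted): pass 1 draws the first circle into the off-screen target, and pass 2 binds that result as a shader input while drawing the second circle, so the shader can detect overlap by sampling it.

        ID3D11ShaderResourceView* nullSRV = nullptr;
        const float clearColor[4] = { 0.0f, 0.0f, 0.0f, 0.0f };

        // Pass 1: render the red circle into the off-screen target.
        m_d3dContext->OMSetRenderTargets(1, m_offscreenRTV.GetAddressOf(), nullptr);
        m_d3dContext->ClearRenderTargetView(m_offscreenRTV.Get(), clearColor);
        m_d3dContext->PSSetShader(m_firstPassPS.Get(), nullptr, 0);
        m_d3dContext->DrawIndexed(m_indexCount, 0, 0);

        // Pass 2: bind pass 1's result as an input and render the green circle
        // into the back buffer; the pixel shader samples the texture to find
        // texels where red was already written, i.e. the overlap region.
        m_d3dContext->OMSetRenderTargets(1, m_backBufferRTV.GetAddressOf(), nullptr);
        m_d3dContext->PSSetShaderResources(0, 1, m_offscreenSRV.GetAddressOf());
        m_d3dContext->PSSetShader(m_secondPassPS.Get(), nullptr, 0);
        m_d3dContext->DrawIndexed(m_indexCount, 0, 0);

        // Unbind the SRV so the texture can be a render target again next frame.
        m_d3dContext->PSSetShaderResources(0, 1, &nullSRV);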

  • UV Atlas Generation and Seam Removal

    - by P. Avery
    I'm generating light maps for scene mesh objects using DirectX's UV Atlas tool ( D3DXUVAtlasCreate() ). I've succeeded in generating an atlas; however, when I try to render the mesh object using the atlas, the seams are visible on the mesh. Below are images of a lightmap generated for a cube. Here is the code I use to generate a uv atlas for a cube:

        struct sVertexPosNormTex
        {
            D3DXVECTOR3 vPos, vNorm;
            D3DXVECTOR2 vUV;
            sVertexPosNormTex() {}
            sVertexPosNormTex( D3DXVECTOR3 v, D3DXVECTOR3 n, D3DXVECTOR2 uv )
            {
                vPos = v;
                vNorm = n;
                vUV = uv;
            }
            ~sVertexPosNormTex() {}
        };

        // create a light map texture to fill programmatically
        hr = D3DXCreateTexture( pd3dDevice, 128, 128, 1, 0, D3DFMT_A8R8G8B8,
                                D3DPOOL_MANAGED, &pLightmap );
        if( FAILED( hr ) )
        {
            DebugStringDX( "Main", "Failed to D3DXCreateTexture( lightmap )", __LINE__, hr );
            return hr;
        }

        // get the zero level surface from the texture
        IDirect3DSurface9 *pS = NULL;
        pLightmap->GetSurfaceLevel( 0, &pS );

        // clear surface
        pd3dDevice->ColorFill( pS, NULL, D3DCOLOR_XRGB( 0, 0, 0 ) );

        // load a sample mesh
        DWORD dwcMaterials = 0;
        LPD3DXBUFFER pMaterialBuffer = NULL;
        V_RETURN( D3DXLoadMeshFromX( L"cube3.x", D3DXMESH_MANAGED, pd3dDevice,
                                     &pAdjacency, &pMaterialBuffer, NULL,
                                     &dwcMaterials, &g_pMesh ) );

        // generate adjacency
        DWORD *pdwAdjacency = new DWORD[ 3 * g_pMesh->GetNumFaces() ];
        g_pMesh->GenerateAdjacency( 1e-6f, pdwAdjacency );

        // create light map coordinates
        LPD3DXMESH pMesh = NULL;
        LPD3DXBUFFER pFacePartitioning = NULL, pVertexRemapArray = NULL;
        FLOAT resultStretch = 0;
        UINT numCharts = 0;
        hr = D3DXUVAtlasCreate( g_pMesh, 0, 0, 128, 128, 3.5f, 0, pdwAdjacency,
                                NULL, NULL, NULL, NULL, NULL, 0,
                                &pMesh, &pFacePartitioning, &pVertexRemapArray,
                                &resultStretch, &numCharts );
        if( SUCCEEDED( hr ) )
        {
            // release and set mesh
            SAFE_RELEASE( g_pMesh );
            g_pMesh = pMesh;

            // write mesh to file
            hr = D3DXSaveMeshToX( L"cube4.x", g_pMesh, 0,
                                  ( const D3DXMATERIAL* )pMaterialBuffer->GetBufferPointer(),
                                  NULL, dwcMaterials, D3DXF_FILEFORMAT_TEXT );
            if( FAILED( hr ) )
                DebugStringDX( "Main", "Failed to D3DXSaveMeshToX() at OnD3D9CreateDevice()", __LINE__, hr );

            // fill the light map
            hr = BuildLightmap( pS, g_pMesh );
            if( FAILED( hr ) )
                DebugStringDX( "Main", "Failed to BuildLightmap()", __LINE__, hr );
        }
        else
        {
            DebugStringDX( "Main", "Failed to D3DXUVAtlasCreate() at OnD3D9CreateDevice()", __LINE__, hr );
        }

        SAFE_RELEASE( pS );
        SAFE_DELETE_ARRAY( pdwAdjacency );
        SAFE_RELEASE( pFacePartitioning );
        SAFE_RELEASE( pVertexRemapArray );
        SAFE_RELEASE( pMaterialBuffer );

    Here is the code to fill the lightmap texture:

        HRESULT BuildLightmap( IDirect3DSurface9 *pS, LPD3DXMESH pMesh )
        {
            HRESULT hr = S_OK;

            // validate lightmap texture surface and mesh
            if( !pS || !pMesh )
                return E_POINTER;

            // lock the mesh vertex buffer
            sVertexPosNormTex *pV = NULL;
            pMesh->LockVertexBuffer( D3DLOCK_READONLY, ( void** )&pV );

            // lock the mesh index buffer
            WORD *pI = NULL;
            pMesh->LockIndexBuffer( D3DLOCK_READONLY, ( void** )&pI );

            // get the lightmap texture surface description
            D3DSURFACE_DESC desc;
            pS->GetDesc( &desc );

            // lock the surface rect to fill with color data
            D3DLOCKED_RECT rct;
            hr = pS->LockRect( &rct, NULL, 0 );
            if( FAILED( hr ) )
            {
                DebugStringDX( "main.cpp:", "Failed to IDirect3DTexture9::LockRect()", __LINE__, hr );
                return hr;
            }

            // iterate the pixels of the lightmap texture and check each pixel
            // to see if it lies between the uv coordinates of a cube face
            BYTE *pBuffer = ( BYTE* )rct.pBits;
            for( UINT y = 0; y < desc.Height; ++y )
            {
                BYTE* pBufferRow = ( BYTE* )pBuffer;
                for( UINT x = 0; x < desc.Width * 4; x += 4 )
                {
                    // determine the pixel's uv coordinate
                    D3DXVECTOR2 p( ( ( float )x / 4.0f ) / ( float )desc.Width + 0.5f / 128.0f,
                                   y / ( float )desc.Height + 0.5f / 128.0f );

                    // for each face of the mesh, check whether the pixel
                    // lies within the face's uv coordinates
                    for( UINT i = 0; i < 3 * pMesh->GetNumFaces(); i += 3 )
                    {
                        sVertexPosNormTex v[ 3 ];
                        v[ 0 ] = pV[ pI[ i + 0 ] ];
                        v[ 1 ] = pV[ pI[ i + 1 ] ];
                        v[ 2 ] = pV[ pI[ i + 2 ] ];
                        if( TexcoordIsWithinBounds( v[ 0 ].vUV, v[ 1 ].vUV, v[ 2 ].vUV, p ) )
                        {
                            // the pixel lies b/t the uv coordinates of a cube face;
                            // light contribution functions aren't needed yet
                            //D3DXVECTOR3 vPos = TexcoordToPos( v[ 0 ].vPos, v[ 1 ].vPos, v[ 2 ].vPos,
                            //                                  v[ 0 ].vUV, v[ 1 ].vUV, v[ 2 ].vUV, p );
                            //D3DXVECTOR3 vNormal = v[ 0 ].vNorm;

                            // set the color of this pixel red (for demo)
                            BYTE ba[] = { 0, 0, 255, 255, };
                            //ComputeContribution( vPos, vNormal, g_sLight, ba );

                            // copy the byte array into the light map texture
                            memcpy( ( void* )&pBufferRow[ x ], ( void* )ba, 4 * sizeof( BYTE ) );
                        }
                    }
                }
                // go to the next line of the texture
                pBuffer += rct.Pitch;
            }

            // unlock the surface rect
            pS->UnlockRect();

            // unlock mesh vertex and index buffers
            pMesh->UnlockIndexBuffer();
            pMesh->UnlockVertexBuffer();

            // write the surface to file
            hr = D3DXSaveSurfaceToFile( L"LightMap.jpg", D3DXIFF_JPG, pS, NULL, NULL );
            if( FAILED( hr ) )
                DebugStringDX( "Main.cpp", "Failed to D3DXSaveSurfaceToFile()", __LINE__, hr );

            return hr;
        }

        bool TexcoordIsWithinBounds( const D3DXVECTOR2 &t0, const D3DXVECTOR2 &t1,
                                     const D3DXVECTOR2 &t2, const D3DXVECTOR2 &p )
        {
            // compute vectors
            D3DXVECTOR2 v0 = t1 - t0, v1 = t2 - t0, v2 = p - t0;
            float f00 = D3DXVec2Dot( &v0, &v0 );
            float f01 = D3DXVec2Dot( &v0, &v1 );
            float f02 = D3DXVec2Dot( &v0, &v2 );
            float f11 = D3DXVec2Dot( &v1, &v1 );
            float f12 = D3DXVec2Dot( &v1, &v2 );

            // compute barycentric coordinates
            float invDenom = 1 / ( f00 * f11 - f01 * f01 );
            float fU = ( f11 * f02 - f01 * f12 ) * invDenom;
            float fV = ( f00 * f12 - f01 * f02 ) * invDenom;

            // check if the point is in the triangle
            return ( fU >= 0 ) && ( fV >= 0 ) && ( fU + fV < 1 );
        }

    I believe the problem comes from the difference between the lightmap uv coordinates and the pixel center coordinates. For example, here are the lightmap uv coordinates (generated by D3DXUVAtlasCreate()) for a specific face (tri) within the mesh; keep in mind that I'm using the mesh uv coordinates to write the pixels for the texture:

        v[ 0 ].uv = D3DXVECTOR2( 0.003581, 0.295631 );
        v[ 1 ].uv = D3DXVECTOR2( 0.003581, 0.003581 );
        v[ 2 ].uv = D3DXVECTOR2( 0.295631, 0.003581 );

    The lightmap texture size is 128 x 128 pixels. The upper-left pixel center coordinates are:

        float halfPixel = 0.5f / 128; // = 0.00390625
        D3DXVECTOR2 pixelCenter = D3DXVECTOR2( halfPixel, halfPixel );

    Will the mapping and sampling of the lightmap texture require that an offset be taken into account, or that the uv coordinates be snapped to the pixel centers? Any ideas on the best way to approach this situation would be appreciated. What are the common practices?
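
    On the half-pixel question: under the usual texel-center convention (a sketch; D3D9-style mapping assumed), texel (x, y) in a W x H map covers the UV square [x/W, (x+1)/W) x [y/H, (y+1)/H) with its center at ((x + 0.5)/W, (y + 0.5)/H), so the point-in-triangle test should use texel centers, as the code above already does. The remaining seams typically come from bilinear sampling pulling in unwritten (black) texels at chart borders; the usual cure is a gutter, i.e. dilating each chart's border colors outward a few texels after rasterizing.

        // Hypothetical helper, illustrating the texel-center convention.
        D3DXVECTOR2 TexelCenterUV( UINT x, UINT y, UINT width, UINT height )
        {
            return D3DXVECTOR2( ( x + 0.5f ) / ( float )width,
                                ( y + 0.5f ) / ( float )height );
        }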

  • Why can we recognize game engines?

    - by Bart van Heukelom
    About many games you can say "oh, that's the Unreal engine for sure", "this was made by upgrading GTA 4", etc. We can often recognize the engine used for a game just by looking at its graphics (disregarding menus and such). I'm wondering: why is this? All game engines use the same 3D rendering technology that we all use, and different games usually have a distinct art style, so what's left to recognize?

  • How can I make an MMORPG appeal to casual players?

    - by Philipp
    I believe that there is a significant market of players who would enjoy the exploration and interaction aspects of MMORPGs, but simply don't have the time for the endless grinding marathons that are part of the average MMORPG. MMORPGs are all about interaction between players. But when different players have different amounts of time to invest in a game, those with less time to spend will soon lag behind their power-leveling friends and won't be able to interact with them anymore. One way to solve this would be to limit the progress a player can achieve per day, so that it simply doesn't make sense to play more than one or two hours a day. But even the busiest casual players sometimes like to spend a whole Sunday afternoon playing a video game. Just stopping them after two hours would be really frustrating. It also creates pressure to use up the daily progress limit every day, because otherwise the player would feel like they are wasting something; this pressure would be detrimental for casual gamers. What else could be done to level the playing field between those players who play 40+ hours a week and those who can't play more than 10?

  • Recommended method for making custom maps for a 2D game?

    - by Qasim
    I am planning on making a 2D game, but unlike my last personal projects I want this one to have enhanced graphics, with custom-designed levels. My previous 2D platformers were tile-based, and I made a map editor to create levels for them. However, I am wondering about the best way to implement custom-designed maps: say, some grass that is a little higher than the rest, flowers here and there, cool drawings and structures along the way, etc., instead of just the same old tiles over and over again. I keep thinking about it but just can't grasp how to implement it. I have seen it done in other games and am interested in how they accomplish it, but I can't get my hands on any source code. :(
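
    A sketch of one common approach (an assumption, not taken from any particular game): keep the tile grid for collision and the ground, and layer freely placed decorations on top of it, so levels stop looking like the same tiles repeated.

        #include <string>
        #include <vector>

        struct Decoration {
            std::string sprite;   // e.g. "flower_3.png" (hypothetical asset name)
            float x, y;           // free-floating position, not snapped to the grid
            float scale;          // lets grass vary slightly in height
        };

        struct MapLayer {
            std::vector<int> tiles;            // tile ids, width * height entries
        };

        struct LevelMap {
            int width = 0, height = 0;
            MapLayer background, foreground;   // ordinary tile layers
            std::vector<Decoration> props;     // hand-placed art drawn on top
        };

    The map editor then gets a second mode that places Decoration entries at arbitrary positions instead of snapping them to the grid.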

  • Rotation based on x coordinate and x velocity?

    - by Lewis
    -

        -(void) accelerometer:(UIAccelerometer *)accelerometer didAccelerate:(UIAcceleration *)acceleration
        {
            float deceleration = 0.3f, sensitivity = 8.0f, maxVelocity = 150;

            // adjust velocity based on current accelerometer acceleration
            playerVelocity.x = playerVelocity.x * deceleration + acceleration.x * sensitivity;

            // we must limit the maximum velocity of the player sprite, in both directions (positive & negative values)
            playerVelocity.x = fmaxf(fminf(playerVelocity.x, maxVelocity), -maxVelocity);
        }

    Hi, I want to rotate my sprite based on the velocity and accelerometer input. My sprite can move along the X axis like so:

        <--------- sprite -----------

    But it always faces forwards. If it is moving left, I want it to point slightly to the left, with the degree of tilt judged from the velocity; this should also work for the right. I tried using atan, but as the y velocity and position are always the same, the function returns 0, which doesn't rotate the sprite at all. Any ideas? Regards, Lewis.
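
    One simple approach, sketched with assumed constants: since the vertical velocity never changes, atan is the wrong tool; instead, tilt the sprite proportionally to the horizontal velocity, clamped to a maximum angle.

        float tiltDegrees(float velocityX, float maxVelocity, float maxTiltDegrees)
        {
            float t = velocityX / maxVelocity;   // normalized to -1 .. 1
            if (t > 1.0f)  t = 1.0f;
            if (t < -1.0f) t = -1.0f;
            return -t * maxTiltDegrees;          // sign may need flipping for your engine
        }

        // e.g. sprite.rotation = tiltDegrees(playerVelocity.x, 150.0f, 25.0f);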

  • How can I specify interleaved vertex attributes and vertex indices?

    - by freefallr
    I'm writing a generic ShaderProgram class that compiles a set of Shader objects, passes args to the shader (like vertex position, vertex normal, tex coords, etc.), then links the shader components into a shader program for use with glDrawArrays. My vertex data already exists in a VertexBufferObject that uses the following data structure to create a vertex buffer:

        class CustomVertex
        {
        public:
            float m_Position[3];  // x, y, z       : offset 0,               size = 3*sizeof(float)
            float m_TexCoords[2]; // u, v          : offset 3*sizeof(float), size = 2*sizeof(float)
            float m_Normal[3];    // nx, ny, nz
            float colour[4];      // r, g, b, a
            float padding[20];    // padded for performance
        };

    I've already written a working VertexBufferObject class that creates a vertex buffer object from an array of CustomVertex objects. This array is interleaved. It renders successfully with the following code:

        void VertexBufferObject::Draw()
        {
            if( ! m_bInitialized )
                return;

            glBindBuffer( GL_ARRAY_BUFFER, m_nVboId );
            glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, m_nVboIdIndex );

            glEnableClientState( GL_VERTEX_ARRAY );
            glEnableClientState( GL_TEXTURE_COORD_ARRAY );
            glEnableClientState( GL_NORMAL_ARRAY );
            glEnableClientState( GL_COLOR_ARRAY );

            glVertexPointer( 3, GL_FLOAT, sizeof(CustomVertex), ((char*)NULL + 0) );
            glTexCoordPointer( 3, GL_FLOAT, sizeof(CustomVertex), ((char*)NULL + 12) );
            glNormalPointer( GL_FLOAT, sizeof(CustomVertex), ((char*)NULL + 20) );
            glColorPointer( 3, GL_FLOAT, sizeof(CustomVertex), ((char*)NULL + 32) );

            glDrawElements( GL_TRIANGLES, m_nNumIndices, GL_UNSIGNED_INT, ((char*)NULL + 0) );

            glDisableClientState( GL_VERTEX_ARRAY );
            glDisableClientState( GL_TEXTURE_COORD_ARRAY );
            glDisableClientState( GL_NORMAL_ARRAY );
            glDisableClientState( GL_COLOR_ARRAY );

            glBindBuffer( GL_ARRAY_BUFFER, 0 );
            glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, 0 );
        }

    Back to the Vertex Array Object, though. My code for creating the Vertex Array Object is as follows. This is performed before the ShaderProgram runtime linking stage, and no glErrors are reported after its steps:

        // Specify the shader arg locations (e.g. their order in the shader code)
        for( int n = 0; n < vShaderArgs.size(); n++ )
            glBindAttribLocation( m_nProgramId, n, vShaderArgs[n].sFieldName.c_str() );

        // Create and bind to a vertex array object, which stores the relationship
        // between the buffer and the input attributes
        glGenVertexArrays( 1, &m_nVaoHandle );
        glBindVertexArray( m_nVaoHandle );

        // Enable the vertex attribute array (we're using an interleaved array, since it's faster)
        glBindBuffer( GL_ARRAY_BUFFER, vShaderArgs[0].nVboId );
        glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, vShaderArgs[0].nVboIndexId );

        // vertex data
        for( int n = 0; n < vShaderArgs.size(); n++ )
        {
            glEnableVertexAttribArray( n );
            glVertexAttribPointer( n, vShaderArgs[n].nFieldSize, GL_FLOAT, GL_FALSE,
                                   vShaderArgs[n].nStride,
                                   (GLubyte*)NULL + vShaderArgs[n].nFieldOffset );
            AppLog::Ref().OutputGlErrors();
        }

    This doesn't render correctly at all. I get a pattern of white specks on screen, in the shape of the terrain rectangle, but there are no regular lines etc. Here's the code I use for rendering:

        void ShaderProgram::Draw()
        {
            using namespace AntiMatter;
            if( ! m_nShaderProgramId || ! m_nVaoHandle )
            {
                AppLog::Ref().LogMsg( "ShaderProgram::Draw() Couldn't draw object, as initialization of ShaderProgram is incomplete" );
                return;
            }

            glUseProgram( m_nShaderProgramId );
            glBindVertexArray( m_nVaoHandle );
            glDrawArrays( GL_TRIANGLES, 0, m_nNumTris );
            glBindVertexArray( 0 );
            glUseProgram( 0 );
        }

    Can anyone see errors or omissions in either the VAO creation code or the rendering code? Thanks!
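
    One likely culprit, noted as an assumption based on the setup code above: an element (index) buffer is bound while building the VAO, but glDrawArrays ignores it and interprets its second and third arguments as a raw vertex range, not a triangle count. An indexed mesh would instead be drawn like this (assuming the index count is available to the ShaderProgram):

        glUseProgram( m_nShaderProgramId );
        glBindVertexArray( m_nVaoHandle );
        // The GL_ELEMENT_ARRAY_BUFFER binding is part of the VAO state,
        // so the indices bound during setup are used automatically.
        glDrawElements( GL_TRIANGLES, m_nNumIndices, GL_UNSIGNED_INT, (void*)0 );
        glBindVertexArray( 0 );
        glUseProgram( 0 );

    It is also worth checking that vShaderArgs[n].nFieldOffset matches the CustomVertex layout (0, 12, 20, 32) and that nStride is sizeof(CustomVertex) for every attribute.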

  • Android - Rendering HUD View to SurfaceView

    - by Jon
    I have developed a relatively simple game in Android, to get my head around it all, and on the back of it developed a crude game engine (in the loosest sense!). I use a SurfaceView and canvas (no OpenGL; I'll cross that bridge another time). I have implemented a game HUD, title screens, etc. by overlaying standard Android view widgets on top of my SurfaceView. This all works reasonably well, maintaining an acceptable frame rate, but it is a simple game with not a lot happening on or off screen. What I am wondering now is whether one could draw all my views to the one SurfaceView, all controlled by the main game thread, and whether one would get any advantage by doing so. At the moment I have handlers flinging messages around and runOnUiThread calls here, there and everywhere. Quite cumbersome. Any thoughts on this would be much appreciated (before I perhaps waste time trying to do it!)

  • Are there existing FOSS component-based frameworks?

    - by Tesserex
    The component-based game programming paradigm is becoming much more popular. I was wondering: are there any projects out there that offer a reusable component framework? In any language; I guess I don't care about that. It's not for my own project, I'm just curious. Specifically, I mean: are there projects that include a base Entity class, a base Component class, and maybe some standard components? It would then be much easier to start a game if you didn't want to reinvent the wheel, or if you wanted, say, a GraphicsComponent that does sprites with Direct3D but figure it's already been done a dozen times. A quick Googling turns up Rusher. Has anyone heard of this / does anyone use it? If there are no popular ones, then why not? Is it too difficult to make something like this reusable, so that it needs heavy customization? In my own implementation I found a lot of boilerplate that could be shoved into a framework.
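
    For a sense of how small the reusable core can be, here is a minimal sketch of the base classes such a framework tends to provide (illustrative only, not taken from Rusher or any other project):

        #include <memory>
        #include <typeindex>
        #include <typeinfo>
        #include <unordered_map>
        #include <utility>

        struct Component {
            virtual ~Component() = default;
            virtual void update(float dt) {}
        };

        class Entity {
        public:
            template <typename T, typename... Args>
            T& add(Args&&... args) {
                auto c = std::make_unique<T>(std::forward<Args>(args)...);
                T& ref = *c;
                components_[std::type_index(typeid(T))] = std::move(c);
                return ref;
            }

            template <typename T>
            T* get() {
                auto it = components_.find(std::type_index(typeid(T)));
                return it == components_.end() ? nullptr
                                               : static_cast<T*>(it->second.get());
            }

            void update(float dt) {
                for (auto& [type, c] : components_) c->update(dt);
            }

        private:
            std::unordered_map<std::type_index,
                               std::unique_ptr<Component>> components_;
        };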

  • How to expose game data in the game without a singleton?

    - by zardon
    I'm quite new to cocos2d and games programming, and am currently writing a game that is in the prototype stage. Everything is going okay, but I've realized a potentially big problem and I am not sure how to solve it. I am using a singleton to store a bunch of arrays for everything: a global list of planets, a global list of troops, a global list of products, etc. And only now I'm realizing that all of this will be in memory, and this is the wrong way to do it. I am not storing files or anything on disk just yet, with the exception of a save/load state, which is a capture of everything. My game makes use of a map which allows you to select a planet; it then gives you a breakdown of that planet's troops and resources. Let's use this scenario: my game has 20 planets, on each of which you can have 20 troops. Straight away that's an array of 400! This does not count the NPCs, which add another 10 per planet: 20 x 10 = 200. So now we have 600, all in arrays inside a singleton. This is obviously very bad, and very wrong, especially as the game scales in the amount of data. But I need to expose pretty much everything, especially on the map page, and I am not sure how else to do it. I've been told that I can use a controller for the map page which has the information I need for each planet, and other controllers for other items that need global display. I've also thought about storing each planet's data in a save file, using initWithCoder, but then there could be a boatload of files on the user's device. I really don't want to use a database, mainly because I would need to translate NSObjects and non-NSObjects like CGRects and CGPoints and colors into/from SQL. I am open to other ideas on how to store and read game data so as to avoid a singleton that stores everything, everywhere. Thanks for your time.
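
    A sketch of one alternative (an assumed design, not the only one): keep the data in a single plain GameWorld object owned by the app delegate or top-level scene, and hand references to whichever screens need it, instead of reaching into a global singleton.

        #include <cstddef>
        #include <vector>

        struct Planet { /* troops, resources, ... */ };

        struct GameWorld {
            std::vector<Planet> planets;   // saved/loaded as one unit
        };

        class MapScreen {
        public:
            explicit MapScreen(GameWorld& world) : world_(world) {}
            void showPlanet(std::size_t index) { /* read world_.planets[index] */ }
        private:
            GameWorld& world_;             // borrowed, not global
        };

    The memory use is the same either way (600 small records is tiny); what changes is that ownership and access become explicit, which makes save/load and testing simpler.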

  • How do I choose the cells to put an entity in, in a uniform grid used for broad-phase collision detection?

    - by nathan
    I'm trying to implement the broad phase of my collision detection algorithm. My game is an arcade game with a lot of moving entities of roughly equal size in an open space. Given those specifications, I decided to use a uniform grid for space partitioning. The problem I have right now is how to efficiently choose which cells an entity should be added to. At the moment I'm doing something like this:

        for (int x = 0; x < gridSize; x++) {
            for (int y = 0; y < gridSize; y++) {
                GridCell cell = grid[x][y];
                cell.clear(); // remove the previously added entities
                for (int i = 0; i < entities.size(); i++) {
                    Entity e = entities.get(i);
                    if (cell.isEntityOverlap(e)) {
                        cell.add(e);
                    }
                }
            }
        }

    isEntityOverlap is a simple method I added to my GridCell class:

        public boolean isEntityOverlap(Shape s) {
            return cellArea.intersects(s);
        }

    where cellArea is a Rectangle:

        cellArea = new Rectangle(x, y, CollisionGrid.CELL_SIZE, CollisionGrid.CELL_SIZE);

    It works, but it's damn slow. What would be a fast way to know all the cells an entity overlaps? Note: by "it works" I mean the entities are contained in the correct cells over time, after movements, etc.
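
    The usual fix, sketched under the assumption of square cells of size cellSize: invert the loop. Instead of asking every cell about every entity, compute from the entity's bounding box exactly which cells it touches, which turns the per-frame cost from O(cells x entities) into O(entities).

        #include <algorithm>

        struct CellRange { int x0, y0, x1, y1; };

        CellRange cellsForAabb(float minX, float minY, float maxX, float maxY,
                               float cellSize, int gridSize)
        {
            auto clampIdx = [&](int i) {
                return std::max(0, std::min(gridSize - 1, i));
            };
            return CellRange{
                clampIdx(static_cast<int>(minX / cellSize)),
                clampIdx(static_cast<int>(minY / cellSize)),
                clampIdx(static_cast<int>(maxX / cellSize)),
                clampIdx(static_cast<int>(maxY / cellSize)),
            };
        }

        // Then, for each entity: loop x0..x1, y0..y1 and add it to those
        // cells only. Clearing all cells once per frame stays as it is.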

  • Math major as a viable degree

    - by Zak O'Keefe
    While I realize there are many topics about CS vs. software engineering vs. game school programs, I haven't found anything relating to whether a pure math degree (with a CS minor and electives) would also be a viable program. By this I mean: would a math major with a CS minor be at a competitive disadvantage compared to someone from a pure CS program? This relates specifically to game engine programming, more on the graphics side. Background (for those who care): I'm currently a math major, CS minor at school, looking to land a career doing graphics engine programming. Admittedly, I love math, and if at all possible I'd like to stay in my current program, as long as it doesn't put me at a competitive disadvantage when trying to land a job post-graduation. That being said, I'm strong in the traditional C/C++ languages, have strong concurrent programming skills, and currently produce self-made games for iOS. In the eyes of an employer, how badly is the math major hurting me? I just want to get some advice from people already in the field!

  • Setting Anchor Point

    - by Siddharth
    I want to set an anchor point for a sprite, like cocos2d does in its implementation. I haven't found anything like that in AndEngine, so please provide guidance on it. I want to move the sprite on touch, so I use the following code, but it does not work for me:

        super.setPosition(pX - this.getWidthScaled() / 2, pY - this.getHeightScaled() / 2);

    When I touch the corner of the image, the sprite immediately jumps so that its center is under my finger, because of the above code. I want the touch to stay at the desired position on the sprite and drag it from there. This is where an anchor point would be useful, but I haven't found anything like it in AndEngine.
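
    A sketch of emulating an anchor point by hand (an assumed approach): record where inside the sprite the touch landed when the drag starts, then preserve that grab offset while dragging, instead of recentring the sprite on the finger every move.

        struct Grab { float offsetX, offsetY; };

        Grab beginDrag(float touchX, float touchY, float spriteX, float spriteY)
        {
            return Grab{ touchX - spriteX, touchY - spriteY };
        }

        void drag(const Grab& g, float touchX, float touchY,
                  float& spriteX, float& spriteY)
        {
            spriteX = touchX - g.offsetX;   // the sprite keeps its original grip point
            spriteY = touchY - g.offsetY;
        }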

  • Need ideas for an algorithm to draw irregular blotchy shapes

    - by Yttermayn
    I'm looking to draw irregular shapes on an x,y grid, and I'd like to come up with a simple, fast method if possible. My only idea so far is to draw a bunch of circles of random sizes very near each other, but at random distances apart from a more or less central coordinate, then fill in any blank spaces. I realize this is a clunky, inelegant method; hopefully it gives you a rough idea of the kinds of rounded, random, blotchy shapes I'm shooting for. Please suggest methods to accomplish this; I'm not so much interested in code. I can noodle that part out myself. Thanks!
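
    One cheap method along the lines you describe, sketched as an assumption: perturb a circle's radius per angle with a few random sine harmonics, then treat every point whose distance from the centre is under that wobbly radius as inside the blob. It stays a single closed shape, with no gaps to fill.

        #include <cmath>
        #include <cstdlib>

        struct Blob {
            float baseRadius;
            float amp[3], phase[3];   // random harmonic amplitudes and phases
        };

        Blob makeBlob(float baseRadius)
        {
            Blob b{};
            b.baseRadius = baseRadius;
            for (int k = 0; k < 3; ++k) {
                b.amp[k]   = baseRadius * 0.15f * (std::rand() / (float)RAND_MAX);
                b.phase[k] = 6.2831853f * (std::rand() / (float)RAND_MAX);
            }
            return b;
        }

        bool insideBlob(const Blob& b, float dx, float dy)  // dx, dy relative to centre
        {
            float angle = std::atan2(dy, dx);
            float r = b.baseRadius;
            for (int k = 0; k < 3; ++k)
                r += b.amp[k] * std::sin((k + 2) * angle + b.phase[k]);
            return dx * dx + dy * dy <= r * r;
        }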

  • Seek Steering Behavior with Target Direction for Group of Fighters

    - by SebastianStehle
    I am implementing steering algorithms with group management for spaceships (fighters). I select a leader and assign target positions to the other spaceships based on the leader's target position and an offset. This works well, but when my spaceships arrive they all face different directions. I want them to end up looking in the same direction (target - start). I also want to combine this behavior with a minimum turning radius based on speed. The only idea I have is to calculate a path for each spaceship through a point placed before the target position, so the ships have some time left to turn into the right orientation. But I don't know if this is a good idea; I guess there will be a lot of rare cases where it causes problems. So the question is whether anybody knows how to solve this problem and has some simple code or pseudocode for me, or at least a good explanation.
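
    One sketch of exactly that idea, with the approach distance chosen by rule of thumb (a heuristic, not a guaranteed solution): give each ship an intermediate approach point some way behind its formation slot, along the desired facing direction. The final leg is then flown straight, so every ship crosses its slot already aligned with (target - start), and the backoff distance can respect the speed-dependent turning radius.

        #include <cmath>

        struct Vec2 { float x, y; };

        Vec2 approachPoint(Vec2 slot, Vec2 desiredDir /* normalized */,
                           float speed, float maxTurnRate /* rad/s */)
        {
            // Minimum turning radius grows with speed; back off at least
            // a couple of radii so the turn completes before the slot.
            float turnRadius = speed / maxTurnRate;
            float backoff = 2.0f * turnRadius;
            return Vec2{ slot.x - desiredDir.x * backoff,
                         slot.y - desiredDir.y * backoff };
        }

        // Seek the approach point first; once within roughly one turn radius
        // of it, switch to seeking the slot itself.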

  • A*, tile costs and heuristics: how to approach them

    - by Kevin Toet
    I'm doing exercises in tile games and AI to improve my programming. I've written a highly unoptimised pathfinder that does the trick, and a simple tile class. The first problem I ran into was that the heuristic was rounded to ints, which resulted in very straight paths. Resorting to a Euclidean heuristic seemed to fix it, as opposed to using the Manhattan approach. The second problem I ran into was when I tried adding tile costs. I was hoping to use the values of the flags that I set on the tiles, but those values were too small to make the pathfinder consider them a serious obstacle, so I increased them; but that breaks the flags in a certain way, and no paths were found anymore. So my questions, before posting the code, are:

    1. What am I doing wrong that the Manhattan heuristic isn't working?
    2. What ways can I store the tile costs? I was hoping to (ab)use the enum flags for this.
    3. The pathfinder isn't considering the chance that no path is available; how do I check for this?

    Any code optimisations are welcome, as I'd love to improve my coding.

        public static List<Tile> FindPath( Tile startTile, Tile endTile, Tile[,] map )
        {
            return FindPath( startTile, endTile, map, TileFlags.WALKABLE );
        }

        public static List<Tile> FindPath( Tile startTile, Tile endTile, Tile[,] map, TileFlags acceptedFlags )
        {
            List<Tile> open = new List<Tile>();
            List<Tile> closed = new List<Tile>();
            open.Add( startTile );

            Tile tileToCheck;
            do
            {
                tileToCheck = open[0];
                closed.Add( tileToCheck );
                open.Remove( tileToCheck );

                for( int i = 0; i < tileToCheck.neighbors.Count; i++ )
                {
                    Tile tile = tileToCheck.neighbors[ i ];

                    // has the node been processed
                    if( !closed.Contains( tile ) && ( tile.flags & acceptedFlags ) != 0 )
                    {
                        // not in the open list?
                        if( !open.Contains( tile ) )
                        {
                            // set G
                            int G = 10;
                            G += tileToCheck.G;

                            // set parent
                            tile.parentX = tileToCheck.x;
                            tile.parentY = tileToCheck.y;
                            tile.G = G;

                            //tile.H = Math.Abs( endTile.x - tile.x ) + Math.Abs( endTile.y - tile.y ) * 10; // TODO omg wtf and other incredible stories
                            tile.H = Vector2.Distance( new Vector2( tile.x, tile.y ), new Vector2( endTile.x, endTile.y ) );
                            tile.Cost = tile.G + tile.H + (int)tile.flags;

                            open.Add( tile );
                        }
                        // update the cost if it is lower
                        else
                        {
                            int G = 10; // cost of going to non-diagonal tiles
                            G += map[ tile.parentX, tile.parentY ].G;

                            // If this path is shorter (G cost is lower) then change
                            // the parent cell, G cost and F cost.
                            if( G < tile.G )
                            {
                                tile.parentX = tileToCheck.x; // change the square's parent
                                tile.parentY = tileToCheck.y;
                                tile.G = G;                   // change the G cost
                                tile.Cost = tile.G + tile.H + (int)tile.flags; // add terrain cost
                            }
                        }
                    }
                }

                // sort costs
                open = open.OrderBy( o => o.Cost ).ToList();
            } while( tileToCheck != endTile );

            closed.Reverse();

            List<Tile> validRoute = new List<Tile>();
            Tile currentTile = closed[ 0 ];
            validRoute.Add( currentTile );
            do
            {
                // look up the parent of the current cell
                currentTile = map[ currentTile.parentX, currentTile.parentY ];
                currentTile.renderer.material.color = Color.green;

                // add tile to list
                validRoute.Add( currentTile );
            } while( currentTile != startTile );
            validRoute.Reverse();

            return validRoute;
        }

    And my Tile class:

        [Flags]
        public enum TileFlags : int
        {
            NONE = 0,
            DIRT = 1,
            STONE = 2,
            WATER = 4,
            BUILDING = 8,
            // handy
            WALKABLE = DIRT | STONE | NONE,
            endofenum
        }

        public class Tile : MonoBehaviour
        {
            // tile properties
            public int x, y;
            public TileFlags flags = TileFlags.DIRT;
            public Transform cachedTransform;

            // A* properties
            public int parentX, parentY;
            public int G;
            public float Cost;
            public float H;
            public List<Tile> neighbors = new List<Tile>();

            void Awake()
            {
                cachedTransform = transform;
            }
        }
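
    On questions 2 and 3, here is a sketch in generic C++ of the two usual fixes (this Tile is a hypothetical minimal stand-in for the one above): terrain cost is stored as its own field and added into G, the path cost so far, rather than abusing the flag bits inside F; and the loop is driven by the open set becoming empty, which is exactly the no-path condition.

        #include <algorithm>
        #include <queue>
        #include <unordered_set>
        #include <vector>

        struct Tile {
            std::vector<Tile*> neighbors;
            Tile* parent = nullptr;
            int g = 0;
            float h = 0.0f;        // heuristic, assumed filled in beforehand
            float f = 0.0f;
            int terrainCost = 0;   // separate field; the flags keep their meaning
            bool walkable = true;
        };

        std::vector<Tile*> findPath(Tile* start, Tile* goal)
        {
            auto cmp = [](const Tile* a, const Tile* b) { return a->f > b->f; };
            std::priority_queue<Tile*, std::vector<Tile*>, decltype(cmp)> open(cmp);
            std::unordered_set<Tile*> closed, seen;
            open.push(start);
            seen.insert(start);
            while (!open.empty()) {              // open set empty => no path
                Tile* current = open.top();
                open.pop();
                if (closed.count(current))
                    continue;                    // stale duplicate entry
                if (current == goal) {
                    std::vector<Tile*> path;
                    for (Tile* t = goal; t != nullptr; t = t->parent)
                        path.push_back(t);
                    std::reverse(path.begin(), path.end());
                    return path;
                }
                closed.insert(current);
                for (Tile* n : current->neighbors) {
                    if (!n->walkable || closed.count(n))
                        continue;
                    int g = current->g + 10 + n->terrainCost;  // cost lives in G
                    if (!seen.count(n) || g < n->g) {
                        n->parent = current;
                        n->g = g;
                        n->f = g + n->h;         // F = G + H
                        open.push(n);            // duplicates get skipped above
                        seen.insert(n);
                    }
                }
            }
            return {};                           // explicit "no path" result
        }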

  • Is there a maximum delay a UDP packet can have?

    - by Jens Nolte
    I am currently implementing a real-time network protocol for a multiplayer game using UDP. I am not having any technical difficulties, but since I always have to care about late UDP packets, I am wondering just how late they can arrive. I have researched the topic and have not found any mention of it, so I assume there is no technical limitation, but I wonder whether common network/internet architecture (or hardware) imposes an effective limit on how late a UDP packet can be delivered.
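
    In practice, real-time protocols defend against lateness rather than relying on a network-level bound. A minimal sketch, assuming the packets carry a sequence number: drop anything that is not strictly newer than the newest state already applied, using wraparound-safe serial arithmetic.

        #include <cstdint>

        struct Receiver {
            uint32_t newestSeq = 0;

            bool accept(uint32_t seq)
            {
                // Wraparound-safe "is seq newer?" test (serial number
                // arithmetic in the style of RFC 1982).
                if (static_cast<int32_t>(seq - newestSeq) <= 0)
                    return false;   // late or duplicate: discard
                newestSeq = seq;
                return true;
            }
        };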

  • Is there a simpler way to create a borderless window with XNA 4.0?

    - by Cypher
    When looking into making my XNA game's window border-less, I found no properties or methods under Game.Window that would provide this, but I did find a window handle to the form. I was able to accomplish what I wanted by doing this:

        IntPtr hWnd = this.Window.Handle;
        var control = System.Windows.Forms.Control.FromHandle( hWnd );
        var form = control.FindForm();
        form.FormBorderStyle = System.Windows.Forms.FormBorderStyle.None;

    I don't know why, but this feels like a dirty hack. Is there a built-in way to do this in XNA that I'm missing?

  • How do I apply skeletal animation from a .x (Direct X) file?

    - by Byte56
    Using the .x format to export a model from Blender, I can load a mesh, armature and animation. I have no problems generating the mesh and viewing models in game. Additionally, I have animations and the armature properly loaded into appropriate data structures. My problem is properly applying the animation to the models. I have the framework for applying the models and the code for selecting animations and stepping through frames. From what I understand, the AnimationKeys inside the AnimationSet supply the transformations that take the bind pose to the pose in the animated frame. As a small example:

        Animation {
            {Armature_001_Bone}
            AnimationKey {
                2;      // Position
                121;    // number of frames
                0;3;  0.000000, 0.000000, 0.000000;;,
                1;3;  0.000000, 0.000000, 0.005524;;,
                2;3;  0.000000, 0.000000, 0.022217;;,
                ...
            }
            AnimationKey {
                0;      // Quaternion Rotation
                121;
                0;4; -0.707107, 0.707107, 0.000000, 0.000000;;,
                1;4; -0.697332, 0.697332, 0.015710, 0.015710;;,
                2;4; -0.684805, 0.684805, 0.035442, 0.035442;;,
                ...
            }
            AnimationKey {
                1;      // Scale
                121;
                0;3;  1.000000, 1.000000, 1.000000;;,
                1;3;  1.000000, 1.000000, 1.000000;;,
                2;3;  1.000000, 1.000000, 1.000000;;,
                ...
            }
        }

    So, to apply frame 2, I would take the position, rotation and scale from frame 2, create a transformation matrix from them (call it TransformA) and apply that matrix to the vertices controlled by Armature_001_Bone at their weights. I'd stuff TransformA into my shader and transform the vertex, something like:

        vertexPos = vertexPos * bones[ int(bfs_BoneIndices.x) ] * bfs_BoneWeights.x;

    where bfs_BoneIndices and bfs_BoneWeights are values specific to the current vertex. When loading the mesh vertices, I transform them by the rootTransform and the meshTransform; this ensures they're oriented and scaled correctly for viewing the bind pose. The problem is that when I create that transformation matrix (using the position, rotation and scale from the animation), it doesn't properly transform the vertex. There's likely more to it than just using the animation data. I also tried applying the bone transform hierarchies; still no dice. Basically I end up with twisted models. It should also be noted that I'm working in OpenGL, so any matrix transposes that might need to be applied should be considered. What data do I need, and how do I combine it, for applying .x animations to models?
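
    For what it's worth, here is a sketch of the usual skinning-matrix assembly (assumptions: column-major matrices as in OpenGL, parents stored before children; with row-major D3D-style matrices the multiplication order flips). The matrix uploaded per bone is not the local keyframe transform alone, but the bone's accumulated world transform times its inverse bind pose ("offset") matrix, which .x files store in the SkinWeights blocks.

        #include <cstddef>
        #include <vector>

        struct Matrix4 { float m[16]; };   // column-major 4x4

        Matrix4 operator*(const Matrix4& a, const Matrix4& b)
        {
            Matrix4 r{};
            for (int c = 0; c < 4; ++c)
                for (int row = 0; row < 4; ++row)
                    for (int k = 0; k < 4; ++k)
                        r.m[c * 4 + row] += a.m[k * 4 + row] * b.m[c * 4 + k];
            return r;
        }

        struct Bone {
            int parent;      // -1 for the root
            Matrix4 offset;  // inverse bind pose, from SkinWeights
            Matrix4 local;   // scale * rotation * translation from this frame's keys
        };

        // world and skin are assumed pre-sized to bones.size().
        void buildSkinningMatrices(const std::vector<Bone>& bones,
                                   std::vector<Matrix4>& world,
                                   std::vector<Matrix4>& skin)
        {
            for (std::size_t i = 0; i < bones.size(); ++i) {
                world[i] = (bones[i].parent < 0)
                         ? bones[i].local
                         : world[bones[i].parent] * bones[i].local;
                skin[i] = world[i] * bones[i].offset;   // what the shader receives
            }
        }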

  • Game World Design [on hold]

    - by GameDev
    I have one doubt about game world development. I want to make a kind of platform game mixed with an RPG (side-scrolling). What's the best way to draw the world:

    - draw everything up front, then use the camera to move around the world, or
    - draw just what is visible, and draw the new stuff as the player moves?

    I'm new at this and haven't had any course on it, so any help is appreciated. Thanks! :) P.S. Any recommendations for learning game concepts, like world-drawing theory, gameplay, etc.? (Not code; I want 2D, and I only see books for 3D stuff.)
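
    A sketch of the usual middle ground (assuming a tile-based side-scroller): keep the entire level in memory as data, but each frame draw only the slice the camera can see, so neither drawing everything nor regenerating content is needed.

        #include <algorithm>

        void drawVisibleTiles(int cameraX, int screenWidth, int tileSize,
                              int levelWidthTiles)
        {
            int first = std::max(0, cameraX / tileSize);
            int last  = std::min(levelWidthTiles - 1,
                                 (cameraX + screenWidth) / tileSize + 1);
            for (int tx = first; tx <= last; ++tx) {
                // drawTileColumn(tx, cameraX);  // hypothetical renderer call
            }
        }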

  • Creating a newspaper that affects the game's economy?

    - by zardon
    I am writing a game in Objective-C/cocos2d where a newspaper is a central part of what controls, or rather affects, the game's world economy, as well as what a city might do (such as increase X or reduce Y). The newspaper is a bit like a Chance card in Monopoly: it has an effect on something. My question is: what is the best way to write a newspaper that has both random and specific effects within the game? Would the best strategy be to write out all the things a newspaper can affect as a PLIST of headlines (with placeholders)? I think Tiny Tower uses a PLIST of events and randomly picks one, but I'm not sure how it actually parses it, because certain events do different things. But then how do I parse all the scenarios that a newspaper can deliver? A big switch statement seems very long and complicated, and I am wondering if there is a simpler way to handle this kind of thing. Related to this: there might be no news on a given day, and I'm not sure what the newspaper should display then; should it just display the last headline? So, in summary: 1) a newspaper generates a headline, which affects different things, such as the world economy, prices, and how a city reacts; 2) I need the newspaper to generate headlines (although there may be days with no headlines at all), but I am not sure how to parse them without a big switch statement. Thanks in advance.
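
    One data-driven alternative to a big switch, sketched with assumed fields: let each headline template carry its own effect as a closure (or, in the PLIST version, a small dictionary of quantity adjustments), so applying an event is one generic call and new headlines are just new table entries.

        #include <functional>
        #include <string>
        #include <vector>

        struct Economy { float priceLevel = 1.0f; float cityGrowth = 1.0f; };

        struct NewsEvent {
            std::string headline;                 // may contain placeholders
            std::function<void(Economy&)> apply;  // the event's entire effect
        };

        std::vector<NewsEvent> makeEventTable()
        {
            return {
                { "Fuel shortage hits %CITY%",
                  [](Economy& e) { e.priceLevel *= 1.10f; } },
                { "Trade fair boosts %CITY%",
                  [](Economy& e) { e.cityGrowth *= 1.05f; e.priceLevel *= 0.98f; } },
            };
        }

        // Each day, either pick a random entry and call event.apply(economy),
        // or publish nothing and leave yesterday's front page up.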

  • Game editor integration with the engine?

    - by Daniel
    What I am trying to figure out is the most effective way to integrate the editor (level, effects, model, etc.) with the engine. The first thing I thought of was to make the game engine(*) extremely modular. For example, take game states: you could have multiple game states that all have their own update() and draw() methods, among others, and each game state class would inherit from a base GameState class. This allows for a more modular approach, and a useful one at that. Now, would the most efficient approach be to implement the editor along with the modular engine, or to create two different designs for the game and the editor? I thought to take the game-state example and extend it to window states, and it could well be used for a lot more systems. Is there a better implementation of this design (game states) for use in other systems in the engine? *: Now I know the term "game engine" is sorta irrelevant, and misused in many situations. What I am referring to as the "game engine" is, for short, the combination of the systems that the game must interact with. Also, this is more of a theory/design question than an implementation one. Even though the two mix, I'd rather get a general idea of how the editor is built in an efficient way while still using the same engine code as the game. Thanks, Daniel. P.S. If you need more clarification or extra bits, just leave a comment.
