Search Results

Search found 31839 results on 1274 pages for 'plugin development'.


  • How do I create a camera?

    - by Morphex
    I am trying to create a generic camera class for a game engine that works for different types of cameras (orbital, 6DoF, FPS), but I have no idea how to go about it. I have read about quaternions and matrices, but I do not understand how to implement them. In particular, it seems you need "Up", "Forward" and "Right" vectors, a quaternion for rotations, and View and Projection matrices. For example, an FPS camera only rotates around the world Y axis and the camera's own Right axis; the 6DoF camera always rotates around its own axes; and the orbital camera just translates out to a set distance and always looks at a fixed target point. The concepts are there; implementing them is what is not trivial for me. SharpDX already has matrices and quaternions implemented, but I don't know how to use them to create a camera. Can anyone point out what I am missing or what I got wrong? I would really appreciate a tutorial, a piece of code, or just a plain explanation of the concepts.
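
    Not an answer from the original thread, just a minimal FPS-style sketch of the idea using SharpDX's Matrix and Vector3 types (the class and field names are made up; an orbital camera would instead derive Position from a target point, a distance and two angles):

        // Minimal FPS camera: position plus yaw (around world Y) and pitch (around the camera's Right axis).
        public class FpsCamera
        {
            public Vector3 Position;
            public float Yaw;    // radians, rotation around the world Y axis
            public float Pitch;  // radians, rotation around the camera's Right axis

            public Matrix View
            {
                get
                {
                    // Build the orientation from yaw/pitch, then derive Forward and Up from it.
                    Matrix rotation = Matrix.RotationYawPitchRoll(Yaw, Pitch, 0f);
                    Vector3 forward = Vector3.TransformCoordinate(Vector3.UnitZ, rotation);
                    Vector3 up = Vector3.TransformCoordinate(Vector3.UnitY, rotation);
                    return Matrix.LookAtLH(Position, Position + forward, up);
                }
            }

            public Matrix Projection(float aspectRatio)
            {
                return Matrix.PerspectiveFovLH((float)Math.PI / 4f, aspectRatio, 0.1f, 1000f);
            }
        }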

    Read the article

  • Different ways to pass Textures into HLSL shaders

    - by codymanix
    The GraphicsDevice class of XNA 4 has the properties Textures and VertexTextures. What is the exact difference? I don't really understand what MSDN tells me about this. I usually use effect parameters to pass textures to my HLSL shaders. What are the differences between these methods, and which is faster? My scenario: I am working on a Minecraft-like game, which means lots of separate DrawPrimitives calls and frequent texture changes, since I have lots of different block types. Since I use an octree to organize the world, I cannot easily sort by texture.
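
    For reference, a sketch of the two usual ways a texture reaches a pixel shader in XNA 4, plus what VertexTextures is for (the register index and parameter name are illustrative assumptions):

        // 1) Assign directly to a sampler slot on the device; pairs with a shader sampler
        //    declared against that register, e.g. sampler TextureSampler : register(s0);
        graphicsDevice.Textures[0] = blockTexture;

        // 2) Set an effect parameter; XNA binds it to the sampler when the pass is applied.
        effect.Parameters["BlockTexture"].SetValue(blockTexture);
        effect.CurrentTechnique.Passes[0].Apply();

        // VertexTextures is the separate set of sampler slots visible to the vertex shader
        // (vertex texture fetch, e.g. heightmap displacement); it is not used for pixel shading.
        graphicsDevice.VertexTextures[0] = heightmapTexture;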

    Read the article

  • Adding 2D vector movement with rotation applied

    - by Michael Zehnich
    I am trying to apply a slight sine wave movement to objects that float around the screen to make them a little more interesting. I would like the objects to oscillate from side to side, not front to back (so the oscillation does not affect their forward velocity). After reading various threads and tutorials, I have come to the conclusion that I need to create and add vectors, but I simply cannot come up with a solution that works. This is where I'm at right now, in the object's update method (updated based on comments):

        Vector2 oldPosition = new Vector2(spritePos.X, spritePos.Y);
        // note: newPosition is initially set in the constructor to spritePos.X/Y
        Vector2 direction = newPosition - oldPosition;
        Vector2 perpendicular = new Vector2(direction.Y, -direction.X);
        perpendicular.Normalize();
        sinePosAng += 0.1f;
        perpendicular.X += 2.5f * (float)Math.Sin(sinePosAng);
        spritePos.X += velocity * (float)Math.Cos(radians);
        spritePos.Y += velocity * (float)Math.Sin(radians);
        spritePos += perpendicular;
        newPosition = spritePos;
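
    For comparison, one common way to get a pure side-to-side wobble is to advance a straight-line "base" position along the heading and then add a perpendicular offset of amplitude * sin(phase) when placing the sprite, instead of accumulating the perpendicular into the position itself. A sketch (basePos, amplitude and frequency are made-up names/tuning values):

        // Straight-line motion along the heading given by `radians`; forward speed is untouched.
        basePos.X += velocity * (float)Math.Cos(radians);
        basePos.Y += velocity * (float)Math.Sin(radians);

        // Unit vector perpendicular to the heading (the heading rotated by 90 degrees).
        Vector2 side = new Vector2(-(float)Math.Sin(radians), (float)Math.Cos(radians));

        // The drawn position swings around the base path.
        sinePosAng += frequency;                                                  // e.g. 0.1f per update
        spritePos = basePos + side * (amplitude * (float)Math.Sin(sinePosAng));   // e.g. amplitude = 2.5f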

    Read the article

  • How to mimic the same fixed-size horizon as in this racing game?

    - by Aybe
    I am trying to replicate the same horizon (buildings and sky) as in the image below. As you can see, the player has advanced in a straight line, yet the horizon is still the same size. This is my attempt using 3D; it's okay when the player is on the start line, but not so great once the player has advanced as far as in image no. 2. The last image is an overview of where the horizon buildings and sky are located. Obviously this won't achieve the effect when one gets close to it, so I've tried to scale the horizon up on all axes, but then the buildings look too small depending on where you look at them from. How can one mimic such rendering?
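
    Not from the question, but the classic way to get a fixed-size horizon is to draw the backdrop with a view matrix whose translation has been removed, so it rotates with the camera but never gets any closer. A sketch in XNA-style C# (DrawHorizon/DrawScene are placeholders for your own draw calls):

        // Copy the normal view matrix and strip its translation, so the horizon
        // geometry behaves as if it were infinitely far away.
        Matrix horizonView = view;
        horizonView.Translation = Vector3.Zero;

        DrawHorizon(horizonView, projection);   // usually with depth writes disabled
        DrawScene(view, projection);            // everything else uses the real view matrix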

    Read the article

  • CW/CCW Rotation of a Vector

    - by user23132
    Consider that I have a vector A, and after an arbitrary rotation I get vector B. I want to apply this same rotation to other vectors as well, but I'm having problems doing that. My idea is to calculate the vector C perpendicular to the plane of A and B (by calculating A x B). This vector C is the axis I'll need to rotate around. To find the angle I used the dot product between A and B; the acos of the dot product returns the smallest angle between A and B, the angle ang. The rotation I need is then: rotate ang degrees around the C axis. The problem is that I don't know whether this rotation is CW or CCW, since the acos of the dot product gives no information about the sign of the angle. There's a 2D trick (A.x * B.y - A.y * B.x) that tells you whether vector A is to the left or right of vector B, but I don't know how to do this in 3D space. Can anyone help me?
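
    For what it's worth, a sketch of the axis-angle construction using System.Numerics types. If the axis is taken as C = A x B itself, rotating by +ang about C (right-hand rule) already turns A towards B, so no separate CW/CCW test is needed; the 3D analogue of the 2D left/right test is the sign of dot(A x B, N) for some chosen reference normal N.

        // Builds the rotation that takes direction a onto direction b; apply it to any
        // other vector with Vector3.Transform(v, q). Parallel/opposite inputs need care.
        static Quaternion RotationBetween(Vector3 a, Vector3 b)
        {
            a = Vector3.Normalize(a);
            b = Vector3.Normalize(b);

            Vector3 axis = Vector3.Cross(a, b);        // perpendicular to the plane of a and b
            if (axis.LengthSquared() < 1e-12f)
                return Quaternion.Identity;            // a and b (anti)parallel: pick a special-case axis

            float d = Vector3.Dot(a, b);
            if (d > 1f) d = 1f; else if (d < -1f) d = -1f;
            float angle = (float)Math.Acos(d);         // smallest angle between a and b

            return Quaternion.CreateFromAxisAngle(Vector3.Normalize(axis), angle);
        }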

    Read the article

  • SFML 2.0 Too Many Variables in Class Preventing Draw To Screen

    - by Josh
    This is a very strange phenomenon to me. I have a class definition for a game, but when I add another variable to the class, the draw method does not print everything to the screen. It will be easier to understand by showing the code and output. Code for the good draw output:

        class board
        {
        protected:
            RectangleShape rect;
            int top, left;
            int i, j;
            int rowSelect, columnSelect;
            CircleShape circleArr[4][10];
            CircleShape codeArr[4];
            CircleShape keyArr[4][10];
            //int pegPresent[4];
        public:
            board(void);
            void draw(RenderWindow& Window);
            int mouseOver(RenderWindow& Window);
            void placePeg(RenderWindow& Window, int pegSelect);
        };

    Screen: (screenshot not included)

    Code for the missing draw:

        class board
        {
        protected:
            RectangleShape rect;
            int top, left;
            int i, j;
            int rowSelect, columnSelect;
            CircleShape circleArr[4][10];
            CircleShape codeArr[4];
            CircleShape keyArr[4][10];
            int pegPresent[4];
        public:
            board(void);
            void draw(RenderWindow& Window);
            int mouseOver(RenderWindow& Window);
            void placePeg(RenderWindow& Window, int pegSelect);
        };

    Screen: (screenshot not included)

    As you can see, all I do is un-comment the protected array, and most of the pegs are gone from the right-hand side. I have checked and made sure that I didn't accidentally create a variable with that name already; I haven't used it anywhere. Why does it not draw the remaining pegs as it should? My only thought is that maybe I am declaring too many variables for the class, but that doesn't really make sense to me. Any thoughts and help are greatly appreciated.

    Read the article

  • My frustum culling is culling from the wrong point [SOLVED]

    - by Xbetas
    I'm having problems with my frustum having the wrong origin: it follows the rotation of my camera but not its position. In my camera class I'm generating a view matrix:

        void Camera::Update()
        {
            UpdateViewMatrix();
            glMatrixMode(GL_MODELVIEW);
            //glLoadIdentity();
            glLoadMatrixf(GetViewMatrix().m);
        }

    Then I extract the planes using the projection matrix and modelview matrix:

        void UpdateFrustum()
        {
            Matrix4x4 projection, model, clip;
            glGetFloatv(GL_PROJECTION_MATRIX, projection.m);
            glGetFloatv(GL_MODELVIEW_MATRIX, model.m);
            clip = model * projection;

            m_Planes[RIGHT][0] = clip.m[ 3] - clip.m[ 0];
            m_Planes[RIGHT][1] = clip.m[ 7] - clip.m[ 4];
            m_Planes[RIGHT][2] = clip.m[11] - clip.m[ 8];
            m_Planes[RIGHT][3] = clip.m[15] - clip.m[12];
            NormalizePlane(RIGHT);

            m_Planes[LEFT][0] = clip.m[ 3] + clip.m[ 0];
            m_Planes[LEFT][1] = clip.m[ 7] + clip.m[ 4];
            m_Planes[LEFT][2] = clip.m[11] + clip.m[ 8];
            m_Planes[LEFT][3] = clip.m[15] + clip.m[12];
            NormalizePlane(LEFT);

            m_Planes[BOTTOM][0] = clip.m[ 3] + clip.m[ 1];
            m_Planes[BOTTOM][1] = clip.m[ 7] + clip.m[ 5];
            m_Planes[BOTTOM][2] = clip.m[11] + clip.m[ 9];
            m_Planes[BOTTOM][3] = clip.m[15] + clip.m[13];
            NormalizePlane(BOTTOM);

            m_Planes[TOP][0] = clip.m[ 3] - clip.m[ 1];
            m_Planes[TOP][1] = clip.m[ 7] - clip.m[ 5];
            m_Planes[TOP][2] = clip.m[11] - clip.m[ 9];
            m_Planes[TOP][3] = clip.m[15] - clip.m[13];
            NormalizePlane(TOP);

            m_Planes[NEAR][0] = clip.m[ 3] + clip.m[ 2];
            m_Planes[NEAR][1] = clip.m[ 7] + clip.m[ 6];
            m_Planes[NEAR][2] = clip.m[11] + clip.m[10];
            m_Planes[NEAR][3] = clip.m[15] + clip.m[14];
            NormalizePlane(NEAR);

            m_Planes[FAR][0] = clip.m[ 3] - clip.m[ 2];
            m_Planes[FAR][1] = clip.m[ 7] - clip.m[ 6];
            m_Planes[FAR][2] = clip.m[11] - clip.m[10];
            m_Planes[FAR][3] = clip.m[15] - clip.m[14];
            NormalizePlane(FAR);
        }

        void NormalizePlane(int side)
        {
            float length = 1.0/(float)sqrt(m_Planes[side][0] * m_Planes[side][0] +
                                           m_Planes[side][1] * m_Planes[side][1] +
                                           m_Planes[side][2] * m_Planes[side][2]);
            m_Planes[side][0] *= length;
            m_Planes[side][1] *= length;
            m_Planes[side][2] *= length;
            m_Planes[side][3] *= length;
        }

    And check against it with:

        bool PointInFrustum(float x, float y, float z)
        {
            for(int i = 0; i < 6; i++)
            {
                if( m_Planes[i][0] * x + m_Planes[i][1] * y + m_Planes[i][2] * z + m_Planes[i][3] <= 0 )
                    return false;
            }
            return true;
        }

    Then I render using:

        camera->Update();
        UpdateFrustum();
        int numCulled = 0;
        for(int i = 0; i < (int)meshes.size(); i++)
        {
            if(!PointInFrustum(meshCenter.x, meshCenter.y, meshCenter.z))
            {
                meshes[i]->SetDraw(false);
                numCulled++;
            }
            else
                meshes[i]->SetDraw(true);
        }

    The matrices look like this (camera is at (5, 0, 0)):

        ModelView  [0,0,0.99,0] [0,1,0,0] [-0.99,0,0,0] [0,0,-5,1]
        Projection [0.814,0,0,0] [0,1.303,0,0] [0,0,-1,0] [0,0,-0.02,0]
        Clip       [0,0,-1,-0.999] [0,1.30,0,0] [-0.814,0,0,0] [0,0,4.98,4.99]

    What am I doing wrong?

    Read the article

  • Does SFML render graphics outside the window?

    - by ThePlan
    While working on a tile-based map I figured it would be a good idea to only render what the player sees in the game window, but then it occurred to me that SFML might already be optimized enough to know when it doesn't have to render those things. Let's say I draw a 30x30-tile map (a medium one) but the player only sees part of it. Would SFML automatically hide what the player doesn't see, or should I hide it myself?
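
    Whatever SFML clips internally, the usual tile-map optimisation is to compute which tiles the current view actually covers and only draw those. The index math is engine-agnostic; a sketch in C# (the camera/tile variables are assumed names):

        // Visible tile range for an axis-aligned view, clamped to the map bounds.
        int firstX = Math.Max(0, (int)(cameraLeft / tileSize));
        int firstY = Math.Max(0, (int)(cameraTop  / tileSize));
        int lastX  = Math.Min(mapWidth  - 1, (int)((cameraLeft + viewWidth)  / tileSize));
        int lastY  = Math.Min(mapHeight - 1, (int)((cameraTop  + viewHeight) / tileSize));

        for (int y = firstY; y <= lastY; y++)
            for (int x = firstX; x <= lastX; x++)
                DrawTile(x, y);   // placeholder for whatever draws a single tile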

    Read the article

  • Optimal sprite size for rotations

    - by Panda Pajama
    I am making a sprite-based game, and I have a bunch of images that I get at a ridiculously large resolution; I scale them to the desired sprite size (for example 64x64 pixels) before converting them to a game resource, so when I draw my sprite inside the game I don't have to scale it. However, if I rotate this small sprite inside the game (engine-agnostically), some destination pixels get interpolated and the sprite looks smudged. This is of course dependent on the rotation angle as well as the interpolation algorithm, but regardless, there is not enough data to correctly sample a specific destination pixel. So there are two solutions I can think of. The first is to use the original huge image, rotate it to the desired angles, then downscale all the resulting variations and put them in an atlas. This has the advantage of being quite simple to implement, but naively consumes twice as much sprite space for each rotation (each rotation must be inscribed in a circle whose diameter is the diagonal of the original sprite's rectangle, and whose area is twice that of the original rectangle, supposing square sprites). It also has the disadvantage of only offering a predefined set of rotations, which may or may not be okay depending on the game. The other choice would be to store a larger image and rotate and downscale while rendering, which leads to my question: what is the optimal size for this source sprite? Optimal meaning that a larger image would have no effect on the resulting image. This definitely depends on the target sprite size and on the desired rotations, with "no data loss" meaning differences below 1/256, the minimum representable color difference. I am looking for a theoretical, general answer to this problem, because trying a bunch of sizes may be okay, but it is far from optimal.
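
    As a worked example of the atlas-space claim above: for a square sprite of side s, any rotation fits inside a circle of diameter s * sqrt(2) (the sprite's diagonal), so each pre-rotated frame needs an atlas cell of about (s * sqrt(2))^2 = 2 * s^2, twice the original area. For s = 64 that is roughly a 91x91 cell per stored rotation instead of 64x64.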

    Read the article

  • How to properly do weapon cool-down reload timer in multi-player laggy environment?

    - by John Murdoch
    I want to handle weapon cool-down timers in a fair and predictable way, both on the client and on the server.

    Situation:

      - Multiple clients connected to a server, which is doing hit detection / physics.
      - Clients have different latency for their connections to the server, ranging from 50ms to 500ms.
      - They want to shoot weapons with fairly long reload/cool-down times (assume exactly 10 seconds).
      - It is important that they get to shoot these weapons close to the cool-down time; if some clients manage to shoot sooner than others (either because they are "early" or the others are "late") they gain a significant advantage.
      - I need to show the time remaining for reload on the player's screen.
      - Clients can have clocks which are flat-out wrong (bad timezones, etc.).

    What I'm currently doing to deal with latency:

      - The client collects server-side state in a history, tagged with server timestamps.
      - The client assesses its time difference with server time: behindServerTimeNs = (behindServerTimeNs + (System.nanoTime() - receivedState.getServerTimeNs())) / 2
      - The client renders all state received from the server 200 ms behind its current time, adjusted by what it believes its time difference with server time is (whether due to wrong clocks or lag). If it has server states on both sides of that calculated time, it (mostly LERP) interpolates between them; if not, it (LERP) extrapolates.
      - No other client-side prediction of movement (e.g. to make the vehicle seem more responsive) is done so far, but maybe it will be added later.

    So how do I properly add weapon reload timers? My first idea would be for the server to send each player the time when his reload will be done with each world-state update; the client then adjusts it for the clock difference and can thus estimate when the reload will be finished in client time (perhaps also accounting for the latency of the shoot message from client to server?), and if the user mashes the "shoot" button after (or perhaps even slightly before?) that time, it sends the shoot event. The server gets the shoot event and treats the time of the shot as the server time when it was received. It would then discard the event if it is nowhere near reload time, execute it immediately if it is past reload time, and hold it for a few physics cycles until reload is done in case it was received a bit early. It does all seem a bit convoluted, and I'm wondering whether it will work (e.g. whether players with lower ping will get better reload rates), and whether there are more elegant solutions to this problem.
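
    As a point of comparison, a minimal sketch of the server-authoritative check described above; all names are illustrative and times are server-clock nanoseconds:

        const long CooldownNs   = 10_000_000_000;   // 10 s reload
        const long EarlyGraceNs = 100_000_000;      // accept shots up to 100 ms early and queue them

        void OnShootRequest(Player p, long serverNowNs)
        {
            if (serverNowNs >= p.NextShotReadyNs)
            {
                FireWeapon(p);                                // on time (or late): fire immediately
                p.NextShotReadyNs = serverNowNs + CooldownNs;
            }
            else if (serverNowNs >= p.NextShotReadyNs - EarlyGraceNs)
            {
                p.PendingShot = true;                         // slightly early: hold until ready
            }
            // else: nowhere near reload time, discard the request.

            // With every state update, send p.NextShotReadyNs to the client so it can show
            // (NextShotReadyNs - estimatedServerNowNs) as the remaining reload time.
        }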

    Read the article

  • Drawing large 2D sidescroller level terrain

    - by Yar
    I'm a relatively good programmer, but now that it comes to adding some basic levels to my 2D game I'm kinda stuck. What I want to do: an acceptable, large (8000 x 1000 pixels) "green hills" test level for my game. What is the best way for me to do this? It doesn't have to look great, it just shouldn't look like it was made in MS Paint with the line and paint-bucket tools. Basically it should just be mud with grass on top of it, shaped into some form of hills. But how should I draw it? I can't just take out the pencil tool and start drawing it pixel by pixel, can I?

    Read the article

  • Any ideas on reducing lag in terrain generation?

    - by l5p4ngl312
    Ok, so here's the deal. I've written an isometric engine that generates terrain based on camera values using 2D Perlin noise. I planned on doing 3D, but first I need to work out the lag issues I'm having. I will try to explain how I am doing this so that maybe someone can spot where I am going wrong; I know it should not be this laggy. There is the abstract class Block, which right now just contains render(). BlockGrass, etc. extend this class and each has code in the render function to create a textured quad at the given position. Then there is the class Chunk, which has the functions Generate() and setBlocksInArea(). Generate() uses 2D Perlin noise to make a height map and stores the heights in a 2D array. It stores the positions of each block it generates in blockarray[x][y][z]. The chunks are 8x8x128. In the main game class there is a 3D array called blocksInArea. The blocks in this array are what gets rendered. When a chunk generates, it adds its blocks to this array at the correct index. It is like this so chunks can be saved to the hard drive (even though they aren't yet) while still allowing rendering optimizations that you wouldn't have if you rendered each chunk separately. Here's where the laggy part comes in: when the camera moves to a new chunk, a row of chunks is generated on the end of the axis the camera moved along, but the other chunks still have to be moved up/down in the blocksInArea (render) array. It does this by calculating the new position in the array and doing Chunk.setBlocksInArea():

        for(int x = 0; x < 8; x++){
            for(int y = 0; y < 8; y++){
                nx = x + (coordX - camCoordX) * 8;
                ny = y + (coordY - camCoordY) * 8;
                for(int z = 0; z < height[x][y]; z++){
                    blockarray[x][y][z] = Game.blocksInArea[nx][ny][z];
                }
            }
        }

    My reasoning was that this would be much faster than doing the Perlin noise all over again, but there are still little spikes of lag when you move between chunks. Edit: would it be possible to create a 3-dimensional array list so that shifting chunks within the array would not be necessary?

    Read the article

  • Minecraft style XNA game collision?

    - by Levi
    I've been trying to get this working for ages now. I can detect if there's a solid block at any place on the map and I can check how far something is inside of it, but I don't understand how to fix the collision. I've tried loads of ways, and all of them end up with the player getting stuck, glitching around, or responding incorrectly, and I really have no idea how to go about this :/.

        int Chnk = Utility.GetChunkFromPosition(origin);
        if (Chnk == -1) return;
        Vector3 Pos = Utility.GetCubeVectorFromPosition(origin);

        if (GlobalWorld.LoadedChunks[Chnk].Blocks[(byte)Pos.X, (byte)Pos.Y, (byte)Pos.Z] != 0)
        {
            isInIllegalState = true;
            if (velocity.Y < 0f) velocity.Y = 0f;
        }

        while (isInIllegalState)
        {
            if (GlobalWorld.LoadedChunks[Chnk].Blocks[(byte)Pos.X, (byte)origin.Y, (byte)Pos.Z] != 0)
                origin.Y = (int)(origin.Y + 1);
            else
                isInIllegalState = false;
        }

        if (origin.Y < Chunk.YSize - 2 && GlobalWorld.LoadedChunks[Chnk].Blocks[(byte)Pos.X, (byte)(origin.Y + playerHeight.Y), (byte)Pos.Z] != 0)
        {
            velocity.Y = 0f;
            //Acceleration.Y = 0f;
            origin.Y = (int)origin.Y;// -0.5f;
        }

        for (int x = -1; x <= 1; x += 2)
        {
            for (int z = -1; z <= 1; z += 2)
            {
                Vector3 CornerPosition = new Vector3(boundingSize * x, 0, boundingSize * z);
                bool CorrectX = false;
                bool CorrectZ = false;
                Vector3 RoundedOrigin = Utility.RoundVector(origin);
                Vector3 RoundedCorner = Utility.RoundVector(origin + CornerPosition);
                byte BlockAdjacent = Utility.GetCubeFromPosition(origin + CornerPosition);
                if (BlockAdjacent == 0) continue;

                if (RoundedCorner.X != RoundedOrigin.X && RoundedCorner.Z != RoundedOrigin.Z)
                {
                    CorrectX = true;
                    CorrectZ = true;
                }
                if (RoundedCorner.Z != RoundedOrigin.Z && RoundedCorner.X == RoundedOrigin.X)
                    CorrectZ = true;
                if (RoundedCorner.X != RoundedOrigin.X && RoundedCorner.Z == RoundedOrigin.Z)
                    CorrectX = true;

                if (CorrectX && CornerPosition.X > 0)
                {
                    if (origin.X > 0f) origin.X = (int)(origin.X + 1) - boundingSize;
                    else origin.X = (int)origin.X - boundingSize;
                }
                else if (CorrectX && CornerPosition.X < 0)
                {
                    if (origin.X > 0f) origin.X = (int)(origin.X) + boundingSize;
                    else origin.X = (int)(origin.X - 1) + boundingSize;
                }

                if (CorrectZ && CornerPosition.Z > 0)
                {
                    if (origin.Z > 0f) origin.Z = (int)(origin.Z + 1) - boundingSize;
                    else origin.Z = (int)origin.Z - boundingSize;
                }
                else if (CorrectZ && CornerPosition.Z < 0)
                {
                    if (origin.Z > 0f) origin.Z = (int)(origin.Z) + boundingSize;
                    else origin.Z = (int)(origin.Z - 1) + boundingSize;
                }
            }
        }

    Read the article

  • Algorithm to shoot at a target in a 3d game

    - by Sebastian Bugiu
    For those of you who remember Descent: FreeSpace, it had a nice feature to help you aim at the enemy when shooting non-homing missiles or lasers: it showed a crosshair in front of the ship you were chasing, telling you where to shoot in order to hit the moving target. I tried using the answer from http://stackoverflow.com/questions/4107403/ai-algorithm-to-shoot-at-a-target-in-a-2d-game?lq=1 but it's for 2D, so I tried adapting it. I first decomposed the calculation: I solve the intersection point in the XoZ plane and save the x and z coordinates, then solve the intersection point in the XoY plane and add the y coordinate to a final xyz, which I then transform to clip space and put a texture at those coordinates. But of course it doesn't work as it should, or else I wouldn't have posted the question. From what I notice, after finding x in the XoZ plane and then in the XoY plane, the x is not the same, so something must be wrong.

        float a = ENG_Math.sqr(targetVelocity.x) + ENG_Math.sqr(targetVelocity.y) - ENG_Math.sqr(projectileSpeed);
        float b = 2.0f * (targetVelocity.x * targetPos.x + targetVelocity.y * targetPos.y);
        float c = ENG_Math.sqr(targetPos.x) + ENG_Math.sqr(targetPos.y);
        ENG_Math.solveQuadraticEquation(a, b, c, collisionTime);

    The first time, targetVelocity.y is actually targetVelocity.z (the same for targetPos); the second time it really is targetVelocity.y. The final position after XoZ is

        crossPosition.set(minTime * finalEntityVelocity.x + finalTargetPos4D.x, 0.0f,
                          minTime * finalEntityVelocity.z + finalTargetPos4D.z);

    and after XoY

        crossPosition.y = minTime * finalEntityVelocity.y + finalTargetPos4D.y;

    Is my approach of separating the problem into 2 planes any good, or is there a whole different approach for 3D? (sqr() is square, not sqrt, just to avoid confusion.)
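
    Not from the post, but for reference: the 2D derivation carries over to 3D directly if you keep full 3D vectors, so no plane-splitting is needed. With relative target position P, target velocity V and projectile speed s, requiring |P + V*t| = s*t gives a single quadratic in t; a sketch using System.Numerics (names are illustrative):

        // Solve (V.V - s^2) t^2 + 2 (P.V) t + P.P = 0 for the smallest positive t,
        // then aim at P + V t (relative to the shooter).
        static bool AimPoint(Vector3 p, Vector3 v, float speed, out Vector3 aim)
        {
            float a = Vector3.Dot(v, v) - speed * speed;
            float b = 2f * Vector3.Dot(p, v);
            float c = Vector3.Dot(p, p);
            aim = Vector3.Zero;

            float t;
            if (Math.Abs(a) < 1e-6f)
            {
                if (Math.Abs(b) < 1e-6f) return false;
                t = -c / b;                              // degenerate case: equation is linear
            }
            else
            {
                float disc = b * b - 4f * a * c;
                if (disc < 0f) return false;             // target can never be hit at this speed
                float root = (float)Math.Sqrt(disc);
                float t1 = (-b - root) / (2f * a);
                float t2 = (-b + root) / (2f * a);
                t = Math.Min(t1, t2);
                if (t < 0f) t = Math.Max(t1, t2);        // keep the smallest positive time of impact
            }
            if (t < 0f) return false;

            aim = p + v * t;                             // where to point the crosshair
            return true;
        }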

    Read the article

  • OpenGL textures trigger error 1281 if SFML is not called

    - by user3714670
    I am using SOIL to apply textures to VBOs. Without textures I could change the background and display black (default color) VBOs easily, but now with textures OpenGL is giving an error 1281, the background is black and some textures are not applied; the first texture IS applied, though (nothing else is working). The strange thing is: if I create a dummy texture with SFML in the same program, all the other textures do work. So I guess there is something I forgot in the texture creation/application; if someone could enlighten me. Here is the code I use to load textures (once loaded, a texture is kept in memory); it mostly comes from the SOIL example:

        texture = SOIL_load_OGL_single_cubemap(
            filename,
            SOIL_DDS_CUBEMAP_FACE_ORDER,
            SOIL_LOAD_AUTO,
            SOIL_CREATE_NEW_ID,
            SOIL_FLAG_POWER_OF_TWO | SOIL_FLAG_MIPMAPS | SOIL_FLAG_DDS_LOAD_DIRECT );

        if( texture > 0 )
        {
            glEnable( GL_TEXTURE_CUBE_MAP );
            glEnable( GL_TEXTURE_GEN_S );
            glEnable( GL_TEXTURE_GEN_T );
            glEnable( GL_TEXTURE_GEN_R );
            glTexGeni( GL_S, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP );
            glTexGeni( GL_T, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP );
            glTexGeni( GL_R, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP );
            glBindTexture( GL_TEXTURE_CUBE_MAP, texture );
            std::cout << "the loaded single cube map ID was " << texture << std::endl;
        }
        else
        {
            std::cout << "Attempting to load as a HDR texture" << std::endl;
            texture = SOIL_load_OGL_HDR_texture(
                filename,
                SOIL_HDR_RGBdivA2,
                0,
                SOIL_CREATE_NEW_ID,
                SOIL_FLAG_POWER_OF_TWO | SOIL_FLAG_MIPMAPS );

            if( texture < 1 )
            {
                std::cout << "Attempting to load as a simple 2D texture" << std::endl;
                texture = SOIL_load_OGL_texture(
                    filename,
                    SOIL_LOAD_AUTO,
                    SOIL_CREATE_NEW_ID,
                    SOIL_FLAG_POWER_OF_TWO | SOIL_FLAG_MIPMAPS | SOIL_FLAG_DDS_LOAD_DIRECT );
            }

            if( texture > 0 )
            {
                // enable texturing
                glEnable( GL_TEXTURE_2D );
                // bind an OpenGL texture ID
                glBindTexture( GL_TEXTURE_2D, texture );
                std::cout << "the loaded texture ID was " << texture << std::endl;
            }
            else
            {
                glDisable( GL_TEXTURE_2D );
                std::cout << "Texture loading failed: '" << SOIL_last_result() << "'" << std::endl;
            }
        }

    And here is how I apply it when drawing:

        GLuint TextureID = glGetUniformLocation(shaderProgram, "myTextureSampler");
        if(!TextureID)
            cout << "TextureID not found ..." << endl;
        // glEnableVertexAttribArray(TextureID);

        glActiveTexture(GL_TEXTURE0);
        if(SFML)
            sf::Texture::bind(sfml_texture);
        else
        {
            glBindTexture (GL_TEXTURE_2D, texture);
            // glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 1024, 768, 0, GL_RGB, GL_UNSIGNED_BYTE, &texture);
        }
        glUniform1i(TextureID, 0);

    I am not sure that SOIL is suited to my program, as I want something as simple as possible (I used SFML's texture object, which was the best, but I can't anymore), but if I can get it to work it would be great.

    Read the article

  • Procedural content (settlement) generation

    - by instancedName
    I have, let's say, something like a homework assignment to do. Roughly speaking, I need to write an algorithm (pseudo-code is not necessary, just an in-depth description) for a procedure that would generate settlements, their environment, and the people to populate them, as part of some larger world-generation procedure. The genre of game is not specified; it could be any genre (RPG, strategy, colony simulation, etc.) where interacting with a large and extensive world is central to the game. The procedure should be called once per settlement. At the time of calling, the world-generation procedure makes geography, culture and history input available. The output should be a map of the village and its immediate area, plus various potential additional information like myths, history, demographic facts, etc. A bonus would be quests and similar stuff, but that's not really my focus at the moment. I will leave the quality of the output for later, when I actually dig a little deeper into this topic. I am free to change parameters as long as I have a strong explanation for doing so. The setting of the game is undetermined, so I am free to use anything I like. OK, so my actual question is: can anyone who has some experience in this field of game design recommend some good literature, or point me in the direction I should look/read/study? I'm a somewhat experienced game programmer, but I've never been into game design until now, so any help will be great. I want to do this assignment as well as I can. As for the deadline, it's not strictly set, but let's say I don't want it to take longer than a few weeks, one month at worst.

    Read the article

  • OpenGL 2D Rasterization Sub-Pixel Translations

    - by Armin Ronacher
    I have a tile-based 2D engine where the projection matrix is an orthographic view of the world without any scaling applied; thus one texture pixel is drawn on screen at exactly the same size. That all works well and looks nice, but if the camera makes a sub-pixel movement, small lines appear between the tiles. I can tell you in advance what does not fix the problem: GL_NEAREST texture interpolation, and GL_CLAMP_TO_EDGE. What does “fix” the problem is anchoring the camera to the nearest pixel instead of doing a sub-pixel translation. I can live with that, but then the camera movement becomes jerky. Any ideas how to fix the problem without resorting to the rounding trick I currently use?

    Read the article

  • Dynamically load images inside jar

    - by Rahat Ahmed
    I'm using Slick2D for a game, and while it runs fine in Eclipse, I'm trying to figure out how to make it work when exported to a runnable .jar. I have it set up so that I load every image located in the res/ directory. Here's the code:

        /**
         * Loads all .png images located in source folders.
         * @throws SlickException
         */
        public static void init() throws SlickException {
            loadedImages = new HashMap<>();
            try {
                URI uri = new URI(ResourceLoader.getResource("res").toString());
                File[] files = new File(uri).listFiles(new FilenameFilter() {
                    @Override
                    public boolean accept(File dir, String name) {
                        if (name.endsWith(".png"))
                            return true;
                        return false;
                    }
                });
                System.out.println("Naming filenames now.");
                for (File f : files) {
                    System.out.println(f.getName());
                    FileInputStream fis = new FileInputStream(f);
                    Image image = new Image(fis, f.getName(), false);
                    loadedImages.put(f.getName(), image);
                }
            } catch (URISyntaxException | FileNotFoundException e) {
                System.err.println("UNABLE TO LOAD IMAGES FROM RES FOLDER!");
                e.printStackTrace();
            }
            font = new AngelCodeFont("res/bitmapfont.fnt", Art.get("bitmapfont.png"));
        }

    Now the obvious problem is the line URI uri = new URI(ResourceLoader.getResource("res").toString()); If I pack the res folder into the .jar, there will not be a res folder on the filesystem. How can I iterate through all the images inside the compiled .jar itself, or what is a better system to automatically load all images?

    Read the article

  • Stylecop 4.5.20.0 is available

    - by TATWORTH
    StyleCop 4.5.20.0 is available at http://stylecop.codeplex.com/releases/view/62209. This is the StyleCop 4.5 RC8. "This release includes the very latest StyleCop for ReSharper plugin and will automatically uninstall previous versions of StyleCop. This updated release contains around 200 bug fixes since the 4.4 RTW release and includes 5 new rules. Support for the async CTP is also added.

      - SA1125 - UseShorthandForNullableTypes
      - SA1411 - AttributeConstructorMustNotUseUnnecessaryParenthesis
      - SA1517 - CodeMustNotContainBlankLinesAtStartOfFile
      - SA1518 - CodeMustNotContainBlankLinesAtEndOfFile
      - SA1649 - FileHeaderFileNameDocumentationMustMatchTypeName"

    StyleCop / ReSharper 5.0 integration continues to improve. If you have not yet used them in your C# development, I urge you to try them out. I have found StyleCop 4.5 RC8 (and its RC predecessors) to be stable. Making changes to the code to make it StyleCop-compliant is now very much easier.

    Read the article

  • JavaScript 3D space ship rotation

    - by user36202
    I am working with a fairly low-level JavaScript 3D API (not Three.js) which uses Euler angles for rotation. In most cases, Euler angles work quite well for things like aligning buildings, operating a hovercraft, or looking around in first person. However, in space there is no up or down. I want to control the ship's roll, pitch, and yaw. To do that, some people would use a local coordinate system or a permanent matrix or quaternion to represent the ship's orientation. However, since I am not most people and am using a library that deals exclusively in Euler angles, I will be using relative angles to describe how to rotate the ship in space and then getting back the resulting non-relative Euler angles. For you math nerds, that means I need to convert 3 Euler angles (with Y being the vertical axis, X representing the pitch, and Z representing a roll which is unaffected by the other angles; left-handed system) into a 3x3 orientation matrix, do something fancy with the matrix, and convert it back into the 3 Euler angles. Euler to matrix to Euler. Somebody has posted something similar on SO (http://stackoverflow.com/questions/1217775/rotating-a-spaceship-model-for-a-space-simulator-game) but he ended up just working with a matrix, which will not do for me. Also, I am using JavaScript, not C++. What I want, essentially, are the functions do_roll, do_pitch, and do_yaw which only take in and put out Euler angles. Many thanks.

    Read the article

  • using heightmap to simulate 3d in an isometric 2d game

    - by VaTTeRGeR
    I saw a video of a 2.5D engine that used heightmaps to do z-buffering. Is this hard to do? I have more or less no idea of OpenGL (LWJGL) and that stuff. I could imagine that you compare each pixel and its depth-map value to the depth map of the already-drawn background to determine whether it gets drawn or not. Are there any tutorials on how to do this? Is this a common problem? It would already be awesome if somebody knew the names of the OpenGL commands, so that I can go through some general tutorials on them. Greets! (Great 2.5D engine with the needed effect; please go to the last 30 seconds.) Edit: I just realised that my question wasn't expressed very clearly: how can I tell OpenGL to compare the existing depth buffer with a grayscale texture, to determine whether a pixel should get drawn or not?

    Read the article

  • GUI for DirectX

    - by DeadMG
    I'm looking for a GUI library built on top of DirectX, preferably 9, but I can also do 11. I've looked at stuff like DXUT, but it's way too much for me: I only need some UI controls which I would rather not write (and debug) myself, and their need to keep a C-compatible API is definitely a big downside. I'd rather look at UI libraries that are designed to be integrated into an existing DirectX-based system, rather than forming the basis of a system. Any recommendations?

    Read the article

  • Checking for alternate keys with XNA IsKeyDown

    - by jocull
    I'm working on picking up XNA, and this was a confusing point for me.

        KeyboardState keyState = Keyboard.GetState();
        if (keyState.IsKeyDown(Keys.Left) || keyState.IsKeyDown(Keys.A))
        {
            //Do stuff...
        }

    The book I'm using (Learning XNA 4.0, O'Reilly) says that this method accepts a bitwise OR series of keys, which I think should look like this:

        KeyboardState keyState = Keyboard.GetState();
        if (keyState.IsKeyDown(Keys.Left | Keys.A))
        {
            //Do stuff...
        }

    But I can't get it to work. I also tried using !IsKeyUp(... | ...), since it said that all keys had to be down for it to be true, but had no luck with that either. Ideas? Thanks.
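
    For context (not from the question): Keys is a plain enum rather than a [Flags] enum, and IsKeyDown takes a single Keys value, so OR-ing two keys just produces a different (usually meaningless) key code. If checking each key separately gets repetitive, a small helper is one way to go; a sketch (IsAnyKeyDown is a hypothetical helper, not part of XNA):

        // True if any of the given keys is currently pressed.
        static bool IsAnyKeyDown(KeyboardState state, params Keys[] keys)
        {
            foreach (Keys key in keys)
                if (state.IsKeyDown(key))
                    return true;
            return false;
        }

        // Usage:
        // if (IsAnyKeyDown(Keyboard.GetState(), Keys.Left, Keys.A)) { /* move left */ }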

    Read the article

  • How to deal with Character body parts from Design to Cocos2d

    - by Edwin Soho
    I'm trying to figure out the workflow game developers use together with game designers. See the picture below with the independent parts. Questions: 1) Should I create separate image parts for the different body parts, or keep frame-by-frame animation? (I know both can be used, but I'm trying to figure out what is commonly used in the industry.) 2) If I'm going to generate separate image parts for the different body parts (which I think is more logical), how would I export them to Cocos2d (vector or bitmap)?

    Read the article

  • How to scroll hex tiles?

    - by Chris Evans
    I don't seem to be able to find an answer to this one. I have a map of hex tiles and I wish to implement scrolling. Code at present:

        drawTilemap = function() {
            actualX = Math.floor(viewportX / hexWidth);
            actualY = Math.floor(viewportY / hexHeight);
            offsetX = -(viewportX - (actualX * hexWidth));
            offsetY = -(viewportY - (actualY * hexHeight));

            for(i = 0; i < 10; i++) {
                for(j = 0; j < 10; j++) {
                    if(i % 2 == 0) {
                        x = (hexOffsetX * i) + offsetX;
                        y = j * sourceHeight;
                    } else {
                        x = (hexOffsetX * i) + offsetX;
                        y = hexOffsetY + (j * sourceHeight);
                    }
                    var tileselected = mapone[actualX + i][j];
                    drawTile(x, y, tileselected);
                }
            }
        }

    The code I've written so far only handles X movement, and it doesn't yet work the way it should. If you look at my example on jsfiddle.net below, you will see that when moving to the right, once you get to the next hex tile along there is a problem with the X position and the calculations that have taken place. It seems a simple bit of maths is missing; unfortunately I've been unable to find an example that includes scrolling yet. http://jsfiddle.net/hd87E/1/ Make sure there is no horizontal scroll bar, then try moving right using the right arrow on the keyboard. You will see the problem as you reach the end of the first tile. Apologies for the horrid code, I'm learning! Cheers

    Read the article
