Search Results

Search found 44141 results on 1766 pages for 'unix development support'.

Page 557/1766 | < Previous Page | 553 554 555 556 557 558 559 560 561 562 563 564  | Next Page >

  • Game actions that take multiple frames to complete

    - by CantTetris
    I've never really done much game programming before, so this is a pretty straightforward question. Imagine I'm building a Tetris game, with the main loop looking something like this:

        for every frame
            handle input
            if it's time to make the current block move down a row
                if we can move the block
                    move the block
                else
                    remove all complete rows
                    move rows down so there are no gaps
                    if we can spawn a new block
                        spawn a new current block
                    else
                        game over

    Everything in the game so far happens instantly - things are spawned instantly, rows are removed instantly, etc. But what if I don't want things to happen instantly (i.e. animate things)?

        for every frame
            handle input
            if it's time to make the current block move down a row
                if we can move the block
                    move the block
                else
                    ?? animate complete rows disappearing (somehow, wait over multiple frames until the animation is done)
                    ?? animate rows moving downwards (and again, wait over multiple frames)
                    if we can spawn a new block
                        spawn a new current block
                    else
                        game over

    In my Pong clone this wasn't an issue, as every frame I was just moving the ball and checking for collisions. How can I wrap my head around this? Surely most games involve some action that takes more than a frame, while other things halt until the action is done.
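
    One common way to handle this, sketched below in Java (the names are illustrative, not from the question): give the board a small set of phases, and let a phase own a timer, so "clearing rows" simply stays active across several frames until its timer runs out while the normal falling logic is skipped.

        // A minimal sketch of letting an action span several frames: the board switches
        // into a phase that owns a timer, and normal gravity/input logic is skipped
        // until the timer finishes.
        enum Phase { FALLING, CLEARING_ROWS, COLLAPSING_ROWS, GAME_OVER }

        class Board {
            Phase phase = Phase.FALLING;
            float phaseTimer = 0f;                 // seconds left in the current animation
            static final float CLEAR_TIME = 0.4f;  // how long the "rows flash" animation runs

            void update(float dt) {
                switch (phase) {
                    case FALLING:
                        // normal per-frame logic: input, gravity, locking the block...
                        if (lockedBlockCompletedRows()) {
                            phase = Phase.CLEARING_ROWS;
                            phaseTimer = CLEAR_TIME;   // start an animation instead of clearing instantly
                        }
                        break;
                    case CLEARING_ROWS:
                        phaseTimer -= dt;              // the animation "waits" across frames here
                        if (phaseTimer <= 0f) {
                            removeCompleteRows();      // only now mutate the grid
                            phase = Phase.COLLAPSING_ROWS;
                            phaseTimer = CLEAR_TIME;
                        }
                        break;
                    case COLLAPSING_ROWS:
                        phaseTimer -= dt;
                        if (phaseTimer <= 0f) {
                            phase = spawnNewBlock() ? Phase.FALLING : Phase.GAME_OVER;
                        }
                        break;
                    case GAME_OVER:
                        break;
                }
            }

            // stubs standing in for the pseudocode in the question
            boolean lockedBlockCompletedRows() { return false; }
            void removeCompleteRows() {}
            boolean spawnNewBlock() { return true; }
        }

    The same idea scales up: anything that takes time becomes a phase (or a small "action" object with its own update method), and the main loop only ever advances whatever phase is currently active.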

    Read the article

  • Why does my terrain turn white when I get close to it?

    - by Starkers
    When I zoom in on my terrain it goes white: the further in I zoom, the greater the whiteness becomes. Is this normal? Is this to speed up rendering or something? Can I turn it off?

    I'm also getting these error messages in the console over and over again:

        rc.right != m_GfxWindow->GetWidth() || rc.bottom != m_GfxWindow->GetHeight()

    and

        GUI Window tries to begin rendering while something else has not finished rendering! Either you have a recursive OnGUI rendering, or previous OnGUI did not clean up properly.

    Does this bear any correlation to the issue?

    Update: I create virtual desktops to flit between using the program Deskpot. Turning this program off and restarting has stopped the above errors appearing in the console. However, I still get white terrain when I zoom in - not a single error message. I've restarted my computer to no avail. I have an Asus NVidia GeForce GTX 760 2GB DDR5 Direct CU II OC Edition graphics card. Any known issues?

    Update: I don't think it's fog...

    Read the article

  • Can't click on a button with startDrag() active on stage

    - by Pedro
    I need to know how I can enable mouse clicks on a button when I have a MouseEvent listener on the stage. I have a MovieClip associated with the mouse cursor:

        Mouse.hide();
        scope.startDrag(true);

    And a MouseEvent listener on the stage:

        stage.addEventListener(MouseEvent.CLICK, FunctionXYZ);

    When I try to click on any button, they don't trigger the functions I created for those buttons... for example, buttons for fullscreen, exit, help, etc.

    Thank you very much. BR, Pedro

    Read the article

  • Frustum culling with third person camera

    - by Christian Frantz
    I have a third person camera that contains two matrices, view and projection, and two Vector3s, camPosition and camTarget. I've read up on frustum culling and it seems easy enough for a first person camera, but how would I implement this for a third person camera? I need to take into account the objects I can see behind me too. How would I implement this in my camera class so it runs at the same time as my update method?

        public void CameraUpdate(Matrix objectToFollow)
        {
            camPosition = objectToFollow.Translation + (objectToFollow.Backward * backward) + (objectToFollow.Up * up);
            camTarget = objectToFollow.Translation;
            view = Matrix.CreateLookAt(camPosition, camTarget, Vector3.Up);
        }

    Can I just create another method within the class which creates a bounding sphere with a value from my camera and then does the culling based on that? And if so, which value am I using to create the bounding sphere from? After this is implemented, I'm planning on using occlusion culling for the faces of my objects adjacent to other objects. Will using just one or the other make a difference, or will both of them be better? I'm trying to keep my framerate as high as possible.
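
    For the culling itself, the camera being third person changes nothing: the frustum is defined entirely by view * projection, so objects behind the camera are (correctly) culled, and the bounding sphere should wrap each object rather than the camera. In XNA a BoundingFrustum can be built from view * projection each frame and tested against per-object BoundingSpheres; the engine-agnostic version of that test, as a sketch with illustrative names, looks like this:

        // Generic sphere-vs-frustum test (not XNA-specific; names are illustrative).
        final class Plane {
            // plane equation a*x + b*y + c*z + d >= 0 for points on the "inside";
            // (a, b, c) must be normalized so distance() is in world units
            float a, b, c, d;
            float distance(float x, float y, float z) { return a * x + b * y + c * z + d; }
        }

        final class FrustumCuller {
            // Six planes (near, far, left, right, top, bottom) extracted from the
            // combined view * projection matrix, e.g. with the Gribb/Hartmann method.
            private final Plane[] planes;

            FrustumCuller(Plane[] planes) { this.planes = planes; }

            boolean sphereVisible(float cx, float cy, float cz, float radius) {
                for (Plane p : planes) {
                    // completely outside any single plane -> cull the object
                    if (p.distance(cx, cy, cz) < -radius) return false;
                }
                return true; // inside or intersecting the frustum
            }
        }

    Frustum culling and occlusion culling answer different questions (outside the view vs. hidden behind something inside the view), so they complement each other; frustum culling is usually the cheaper win to add first.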

    Read the article

  • Packing jar files into library jar files

    - by Hillel
    Firstly, this question is not about packing a simple jar file (e.g. lwjgl) into a runnable jar file. I know how to do this using JarSplice. So if I have a game which uses JInput, I will pack my game jar and jinput.jar using JarSplice and enter the natives in the process.

    The problem arises when I want to create a custom library that uses JInput, and then pack that into my games. See, the whole idea of writing a game library is that I don't ever have to even copy code like the wrapper I wrote for the JInput Controller, and I always have a definitive version inside a library jar. Basically what I want to do is create a jar file of my library, pack jinput.jar into it using JarSplice, possibly with the natives as well, and then when I want to export a jar of my game, I either export it automatically through Eclipse with the library jar, or, if that doesn't work, use JarSplice.

    I've tried several solutions, and nothing works. When I try to pack the game jar and the library jar using JarSplice, I get an error saying that there's either a duplicate .project or .classpath. When I try to export my game through Eclipse with the library jar, it won't run (which is to be expected), but then, if I try to attach the natives with JarSplice, it doesn't give me any errors - the jar just doesn't run.

    I'm not expecting anyone to solve this, but if anyone has an idea, something that will allow me to never look at the Gamepad code ever again, that would be awesome. I don't care if I have to package my library jar using JarSplice 5 times, and then do the same with the game jar, as long as it works. Otherwise I'll just have to copy the Gamepad class into every project alongside the library jar. :(

    Read the article

  • AABB > AABB collision response?

    - by Levi
    I'm really confused about how to fix this in 3D. I want to be able to slide along cubes without getting caught when there are 2 adjacent cubes. I've gotten it so that I can do x collision with sliding, and y, and z, but I can't do them together, probably because I don't know how to resolve it correctly. E.g.:

        []
        []
        []^
        []O
        []

    O is the player, ^ is the direction the player is moving. With the methods I was trying, I would get stuck between the cubes because the z axis was responding and kicking me out :/. I don't know how to resolve this in all 3 directions - how would I go about telling which direction I have to resolve in? My previous method involved checking 4 points in an axis-aligned square around the player; I was checking if these points were inside the cubes and, if they were, fixing my position, but I couldn't get it working correctly. Help is appreciated.

    Edit: pretend all the blocks are touching.
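
    The usual resolution rule, sketched here with illustrative types: compute the overlap on every axis and push the player out along only the axis with the smallest penetration. Resolving the deepest-overlapping block first and re-testing the remaining blocks also stops the seams between touching cubes from "catching" the player.

        // Per-pair AABB response: push out along the single axis of least penetration.
        final class AABB { float minX, minY, minZ, maxX, maxY, maxZ; }

        final class CollisionResolver {
            static void resolve(AABB player, AABB block) {
                float overlapX = Math.min(player.maxX, block.maxX) - Math.max(player.minX, block.minX);
                float overlapY = Math.min(player.maxY, block.maxY) - Math.max(player.minY, block.minY);
                float overlapZ = Math.min(player.maxZ, block.maxZ) - Math.max(player.minZ, block.minZ);
                if (overlapX <= 0 || overlapY <= 0 || overlapZ <= 0) return; // no collision at all

                // push out along the axis with the smallest overlap; the tiny overlaps
                // on the other axes are ignored, which is what makes sliding work
                if (overlapX <= overlapY && overlapX <= overlapZ) {
                    shiftX(player, center(player.minX, player.maxX) < center(block.minX, block.maxX) ? -overlapX : overlapX);
                } else if (overlapY <= overlapX && overlapY <= overlapZ) {
                    shiftY(player, center(player.minY, player.maxY) < center(block.minY, block.maxY) ? -overlapY : overlapY);
                } else {
                    shiftZ(player, center(player.minZ, player.maxZ) < center(block.minZ, block.maxZ) ? -overlapZ : overlapZ);
                }
            }
            static float center(float min, float max) { return (min + max) * 0.5f; }
            static void shiftX(AABB b, float d) { b.minX += d; b.maxX += d; }
            static void shiftY(AABB b, float d) { b.minY += d; b.maxY += d; }
            static void shiftZ(AABB b, float d) { b.minZ += d; b.maxZ += d; }
        }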

    Read the article

  • Using polygons instead of quads on Cocos2d

    - by rraallvv
    I've been looking under the hood of Cocos2d, and I think (please correct me if I'm wrong) that although working with quads is a key feature of the engine, it shouldn't be difficult to make it work with arrays of vertices (aka polygons) instead of quads - a quad being just the special case of an array of four vertices.

    By the way, does anyone have any code that makes Cocos2d render a texture-filled polygon inside a batch node? The code posted here (http://www.cocos2d-iphone.org/forum/topic/8142/page/2#post-89393) does a nice job of rendering a texture-filled polygon, but the class doesn't work with batch nodes.

    Read the article

  • Using raw vertex information for sprites rather than SpriteBatch in XNA

    - by The Communist Duck
    I have been wondering whether using SpriteBatch is the best option. Obviously for prototyping or small games it works well. However, I've been wanting to apply techniques such as shaders and lighting to my game. I know you can use shaders to some extent with SpriteSortMode.Immediate, but I'm not sure if you lose power using that. The other major thing is that you cannot store your vertex data in the graphics memory with buffers. In summary, is there an advantage of using VertexTextureNormal (or whatever they're called) structs for vertex data for 2D sprites, or should I stick with SpriteBatch, provided I wish to use shaders?

    Read the article

  • Converting OpenGL code to DirectX

    - by Fredrik Boston Westman
    First of all, this is kind of a follow-up question to @byte56's excellent answer to this question concerning picking algorithms. I'm trying to convert one of his code examples to DirectX 11; however, I have run into some problems (I can pick, but the picking is way off), and I wanted to make sure I had done it right before moving on and checking the rest of my code. I am not that familiar with OpenGL, but I can imagine OpenGL has different coordinate systems and functions that alter how you must implement the code a bit.

    This is his code example:

        public Ray GetPickRay() {
            int mouseX = Mouse.getX();
            int mouseY = WORLD.Byte56Game.getHeight() - Mouse.getY();

            float windowWidth = WORLD.Byte56Game.getWidth();
            float windowHeight = WORLD.Byte56Game.getHeight();

            //get the mouse position in screenSpace coords
            double screenSpaceX = ((float) mouseX / (windowWidth / 2) - 1.0f) * aspectRatio;
            double screenSpaceY = (1.0f - (float) mouseY / (windowHeight / 2));
            double viewRatio = Math.tan(((float) Math.PI / (180.f/ViewAngle) / 2.00f)) * zoomFactor;

            screenSpaceX = screenSpaceX * viewRatio;
            screenSpaceY = screenSpaceY * viewRatio;

            //Find the far and near camera spaces
            Vector4f cameraSpaceNear = new Vector4f((float) (screenSpaceX * NearPlane), (float) (screenSpaceY * NearPlane), (float) (-NearPlane), 1);
            Vector4f cameraSpaceFar = new Vector4f((float) (screenSpaceX * FarPlane), (float) (screenSpaceY * FarPlane), (float) (-FarPlane), 1);

            //Unproject the 2D window into 3D to see where in 3D we're actually clicking
            Matrix4f tmpView = Matrix4f(view);
            Matrix4f invView = (Matrix4f) tmpView.invert();
            Vector4f worldSpaceNear = new Vector4f();
            Matrix4f.transform(invView, cameraSpaceNear, worldSpaceNear);
            Vector4f worldSpaceFar = new Vector4f();
            Matrix4f.transform(invView, cameraSpaceFar, worldSpaceFar);

            //calculate the ray position and direction
            Vector3f rayPosition = new Vector3f(worldSpaceNear.x, worldSpaceNear.y, worldSpaceNear.z);
            Vector3f rayDirection = new Vector3f(worldSpaceFar.x - worldSpaceNear.x, worldSpaceFar.y - worldSpaceNear.y, worldSpaceFar.z - worldSpaceNear.z);
            rayDirection.normalise();

            return new Ray(rayPosition, rayDirection);
        }

    All rights reserved to him, of course.

    This is my DirectX 11 code:

        void GraphicEngine::pickRayVector(float mouseX, float mouseY, XMVECTOR& pickRayInWorldSpacePos, XMVECTOR& pickRayInWorldSpaceDir)
        {
            float PRVecX, PRVecY;
            float nearPlane = 0.1f;
            float farPlane = 200.0f;
            float viewAngle = 0.4 * 3.14;

            PRVecX = (((2.0f * mouseX) / ClientWidth) - 1) * tan((viewAngle) / 2);
            PRVecY = (1 - ((2.0f * mouseY) / ClientHeight)) * tan((viewAngle) / 2);

            XMVECTOR cameraSpaceNear = XMVectorSet(PRVecX * nearPlane, PRVecY * nearPlane, -nearPlane, 1.0f);
            XMVECTOR cameraSpaceFar = XMVectorSet(PRVecX * farPlane, PRVecY * farPlane, -farPlane, 1.0f);

            // Transform 3D ray from view space to 3D ray in world space
            XMMATRIX invMat;
            XMVECTOR matInvDeter;
            invMat = XMMatrixInverse(&matInvDeter, cam->getCameraView()); // inverse of view matrix is world matrix

            XMVECTOR worldSpaceNear = XMVector3TransformCoord(cameraSpaceNear, invMat);
            XMVECTOR worldSpaceFar = XMVector3TransformCoord(cameraSpaceFar, invMat);

            pickRayInWorldSpacePos = worldSpaceNear;
            pickRayInWorldSpaceDir = worldSpaceFar - worldSpaceNear;
            pickRayInWorldSpaceDir = XMVector3Normalize(pickRayInWorldSpaceDir);
        }

    A couple of notes:

    - The mouse coordinates are already converted so that the top left corner of the client window is (0,0) and the bottom right is (800,600) (or whatever resolution you have).
    - I hadn't used any far or near planes before, so I just made up some arbitrary numbers for them. To my understanding it shouldn't matter, as long as the object you are trying to pick is within the range of those numbers.
    - The viewAngle is the same angle that I used when setting the camera view with XMMatrixPerspectiveFovLH; I just hadn't made it a member variable of my Camera class yet.
    - I removed the variables aspectRatio and zoomFactor because I assumed they were related to some specific function of his game.

    Now I'm not sure, but I think the problem lies either within the mouse-to-viewspace conversion - maybe because we use different coordinate systems - or in how I transform the matrices at the end, because I know order is important when it comes to matrices. Any help is appreciated! Thanks in advance.

    Edit: One more note - my code is in C++.
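
    A guess at the main discrepancy rather than a verified fix: the original Java version scales the x coordinate by the aspect ratio, and the port drops it (aspectRatio and zoomFactor were removed), so with an 800x600 window the ray is off by a factor of 4/3 horizontally. It is also worth checking the sign of z: the -nearPlane/-farPlane points assume a right-handed view matrix, while XMMatrixPerspectiveFovLH implies a left-handed view space where the camera looks down +z. A minimal sketch of the screen-space step with the aspect ratio kept (names are illustrative):

        final class PickMath {
            // returns {screenSpaceX, screenSpaceY} for a mouse position in pixels
            static double[] toScreenSpace(double mouseX, double mouseY,
                                          double windowWidth, double windowHeight,
                                          double fovY /* radians, same value given to the projection */) {
                double tanHalfFov  = Math.tan(fovY / 2.0);
                double aspectRatio = windowWidth / windowHeight;   // e.g. 800.0 / 600.0
                double x = ((2.0 * mouseX / windowWidth) - 1.0) * tanHalfFov * aspectRatio;
                double y = (1.0 - (2.0 * mouseY / windowHeight)) * tanHalfFov;
                return new double[] { x, y };
            }
        }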

    Read the article

  • Box2D Platform body not moving player body along with it

    - by onedayitwillmake
    I am creating a game using Box2D (the JavaScript implementation) and I added the ability to have a static platform that is moved along an axis as a function of a sine. My problem is that when the player lands on the platform, as the platform moves along the X axis, the player is not moved along with it, as you would visually expect. The player can land on the object, and if it hits the side of the object, it does collide with it and is pushed. This image might explain it better than I did: after jumping onto the red platform, the player character will fall off as the platform moves to the right.

    UPDATE: Here is a live demo showing the problem: http://onedayitwillmake.com/ChuClone/slideexample.php
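
    Two common approaches, sketched here without any Box2D API calls (the bindings differ): either make the platform a kinematic body and drive it with a linear velocity instead of repositioning it, so contact friction drags the player along, or remember how far the platform moved each step and apply the same offset to anything currently standing on it, as in this illustrative Java sketch:

        // Manual "carry the rider" workaround: apply the platform's per-step delta
        // to every body known (from contact callbacks) to be standing on it.
        final class MovingPlatform {
            float x, prevX;                       // platform position along its sine path
            final java.util.List<Player> riders = new java.util.ArrayList<>();

            void update(float time) {
                prevX = x;
                x = (float) Math.sin(time) * 5f;  // whatever path the platform follows
                float deltaX = x - prevX;
                for (Player p : riders) {
                    p.x += deltaX;                // carry everyone standing on the platform
                }
            }
        }

        final class Player { float x, y; }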

    Read the article

  • Doing an SNES Mode 7 (affine transform) effect in pygame

    - by 2D_Guy
    Is there such a thing as a short answer on how to do a Mode 7 / Mario Kart type effect in pygame? I have googled extensively; all the docs I can come up with are dozens of pages in other languages (asm, C) with lots of strange-looking equations and such. Ideally, I would like to find something explained more in English than in mathematical terms. I can use PIL or pygame to manipulate the image/texture, or whatever else is necessary. I would really like to achieve a Mode 7 effect in pygame, but I seem close to my wit's end. Help would be greatly appreciated. Any and all resources or explanations you can provide would be fantastic, even if they're not as simple as I'd like them to be. If I can figure it out, I'll write a definitive "how to do Mode 7 for newbies" page.

    Edit - Mode 7 doc: http://www.coranac.com/tonc/text/mode7.htm
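
    A plain-code version of the idea, as an editor's sketch in Java rather than pygame (the same per-pixel loop can be run over a pygame.PixelArray): treat the texture as a flat ground plane seen by a camera at some height. Every screen row below the horizon maps to one line on that plane, nearer rows mapping to nearer lines, and each pixel in the row is a rotated, translated sample of the texture.

        final class Mode7 {
            // screenW/screenH in pixels, horizon = screen row of the horizon line,
            // focal = "zoom" factor, (camX, camZ, angle) = camera position and yaw on the map.
            static int[] render(int[] texture, int texW, int texH,
                                int screenW, int screenH, int horizon,
                                double focal, double camHeight,
                                double camX, double camZ, double angle) {
                int[] out = new int[screenW * screenH];
                double cos = Math.cos(angle), sin = Math.sin(angle);
                for (int sy = horizon + 1; sy < screenH; sy++) {
                    // distance of this scanline on the ground plane (the perspective divide)
                    double dist = focal * camHeight / (sy - horizon);
                    for (int sx = 0; sx < screenW; sx++) {
                        double lateral = (sx - screenW / 2.0) * dist / focal;
                        // rotate by camera yaw and translate by camera position
                        double mapX = camX + lateral * cos - dist * sin;
                        double mapZ = camZ + lateral * sin + dist * cos;
                        int tx = ((int) mapX % texW + texW) % texW;   // wrap the texture
                        int tz = ((int) mapZ % texH + texH) % texH;
                        out[sy * screenW + sx] = texture[tz * texW + tx];
                    }
                }
                return out;
            }
        }

    Rows at or above the horizon are left for the sky/backdrop; in pure Python this per-pixel loop will be slow, so precomputing per-row values or using numpy-style array operations is usually necessary for real-time use.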

    Read the article

  • Best way to compute vertex normals from a list of Triangles

    - by nkint
    Hi, I'm a complete newbie in computer graphics, so sorry if it's a stupid question. I'm trying to make a simple 3D engine from scratch, more for educational purposes than for real use. I have a Surface object with a Triangle list inside. For now I compute normals inside the Triangle class, in this way:

        triangle.computeFaceNormals() {
            Vec3D u = v1.sub(v3)
            Vec3D v = v1.sub(v2)
            Vec3D normal = Vec3D.cross(u,v)
            normal.normalized()
            this.n1 = this.n2 = this.n3 = normal
        }

    and when building the surface:

        t = new Triangle(v1,v2,v3).computeFaceNormals()
        surface.addTriangle(t)

    I think this is the best way to do that... isn't it? Now, what about vertex normals? I've found this simple algorithm: flipcode vertex normal. But hey, this algorithm has... exponential complexity? (If my memory doesn't fail my computer science background...) By the way, it has 3 nested loops - I don't think it's the best way to do it. Any suggestions?
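
    The standard linear-time approach, sketched below (illustrative names, assuming vertices are shared between triangles - identical positions must be merged first for this to work): give each shared vertex a normal accumulator, add every triangle's face normal into its three vertices, then normalize each accumulator once. That is O(T + V), no nested loops over all triangles per vertex.

        import java.util.List;

        final class Vec3D {
            double x, y, z;
            Vec3D(double x, double y, double z) { this.x = x; this.y = y; this.z = z; }
            void add(Vec3D o) { x += o.x; y += o.y; z += o.z; }
            void normalize() {
                double len = Math.sqrt(x * x + y * y + z * z);
                if (len > 1e-12) { x /= len; y /= len; z /= len; }
            }
        }

        final class VertexNormals {
            // faceNormals.get(i) is the (already computed) normal of triangles.get(i);
            // vertexNormals accumulators must start at (0, 0, 0).
            static void accumulate(List<int[]> triangles,      // each int[3] = indices into vertexNormals
                                   List<Vec3D> faceNormals,
                                   List<Vec3D> vertexNormals) {
                for (int i = 0; i < triangles.size(); i++) {
                    Vec3D fn = faceNormals.get(i);
                    for (int idx : triangles.get(i)) {
                        vertexNormals.get(idx).add(fn);        // O(1) per triangle corner
                    }
                }
                for (Vec3D n : vertexNormals) n.normalize();   // total cost O(T + V)
            }
        }

    As a bonus, if the unnormalized cross products are accumulated instead of the normalized face normals, larger faces automatically contribute more, which is often the nicer-looking weighting.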

    Read the article

  • How do you structure a 2D level format with collisions etc. in Java (Slick 2D)?

    - by liamzebedee
    I am developing a game in Java - a 2D fighter, kind of like the 2D Flash game Raze (http://armorgames.com/play/5395/raze). I currently am using the Slick 2D game library and am researching how to structure my levels. I am currently stuck on the problem of the level format (e.g. the file format). How do you structure a 2D level with collisions etc.?

    Level notes: the player will go up, down, left and right.

    NOTE: New to gamedev.
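
    One simple, hand-editable option, sketched here as an editor's suggestion (plain Java, not a Slick 2D API): store each level as a text grid where every character is a tile id, keep a table saying which ids are solid, and test collisions against that grid. A dedicated editor and format such as Tiled's TMX (which, as far as I recall, Slick 2D can load directly) does the same thing with layers and tooling on top.

        import java.io.*;
        import java.util.*;

        final class Level {
            final char[][] tiles;          // tiles[row][col]
            static final Set<Character> SOLID = Set.of('#', '=');   // which characters block movement

            Level(char[][] tiles) { this.tiles = tiles; }

            static Level load(Reader source) throws IOException {
                List<char[]> rows = new ArrayList<>();
                BufferedReader in = new BufferedReader(source);
                for (String line; (line = in.readLine()) != null; ) {
                    rows.add(line.toCharArray());
                }
                return new Level(rows.toArray(new char[0][]));
            }

            // collision test in tile units; the game converts pixel coords by dividing by the tile size
            boolean solidAt(int col, int row) {
                if (row < 0 || row >= tiles.length || col < 0 || col >= tiles[row].length) return true;
                return SOLID.contains(tiles[row][col]);
            }
        }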

    Read the article

  • What is a simple deformer in which vertices deform linearly with control points?

    - by sebf
    In my project I want to deform a complex mesh using a simpler 'proxy' mesh. In effect, each vertex of the proxy/collision mesh will be a control point/bone which should deform the vertices of the main mesh attached to it depending on weight, but where the weight is not dependent on the absolute distance from the control point, rather on the distance relative to the other affecting control points. The point of this is to preserve complex three-dimensional features of the main mesh while using physics implementations which expect something far simpler: low resolution, single surface, etc. Therefore, the vertices must deform linearly with their respective weighted control points (i.e. no falloff fields, or all the mesh features will end up collapsed) - as if each vertex was linked to a point on the plane created by the attached control points and deformed with it.

    I have tried implementing the weight computation algorithm in this paper (page 4), but it is not working as expected, and I am wondering if it is really the best way to do what I want. What is the simplest way to 'skin'* an arbitrary mesh to another arbitrary mesh?

    *By skin I mean I need an algorithm to determine the best control points for a vertex, and their weights.
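
    A much simpler scheme than the paper, offered only as a reasonable default to compare against (an editor's sketch, illustrative names): weight each mesh vertex by inverse distance to its attached control points, normalize the weights so they sum to 1, and move the vertex by the weighted sum of the control points' displacements. Because the weights are relative to the other control points rather than absolute, the vertex follows them linearly with no global falloff. The normalized weights can be computed once at bind time and reused every frame.

        final class ProxySkin {
            // controlRest/controlNow = control point positions at bind time and now ({x, y, z} each);
            // attachedControls[v] = indices of the control points influencing vertex v.
            static void deform(float[] vertexRest, float[] vertexOut,
                               float[][] controlRest, float[][] controlNow,
                               int[][] attachedControls) {
                int vertexCount = vertexRest.length / 3;
                for (int v = 0; v < vertexCount; v++) {
                    int[] ctrls = attachedControls[v];
                    double[] w = new double[ctrls.length];
                    double sum = 0;
                    for (int k = 0; k < ctrls.length; k++) {
                        double dx = vertexRest[3 * v]     - controlRest[ctrls[k]][0];
                        double dy = vertexRest[3 * v + 1] - controlRest[ctrls[k]][1];
                        double dz = vertexRest[3 * v + 2] - controlRest[ctrls[k]][2];
                        w[k] = 1.0 / (Math.sqrt(dx * dx + dy * dy + dz * dz) + 1e-6);  // inverse distance
                        sum += w[k];
                    }
                    double x = vertexRest[3 * v], y = vertexRest[3 * v + 1], z = vertexRest[3 * v + 2];
                    for (int k = 0; k < ctrls.length; k++) {
                        double wk = w[k] / sum;                       // weight relative to the other controls
                        x += wk * (controlNow[ctrls[k]][0] - controlRest[ctrls[k]][0]);
                        y += wk * (controlNow[ctrls[k]][1] - controlRest[ctrls[k]][1]);
                        z += wk * (controlNow[ctrls[k]][2] - controlRest[ctrls[k]][2]);
                    }
                    vertexOut[3 * v] = (float) x; vertexOut[3 * v + 1] = (float) y; vertexOut[3 * v + 2] = (float) z;
                }
            }
        }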

    Read the article

  • Unity3D: How To Smoothly Switch From One Camera To Another

    - by www.Sillitoy.com
    The question is basically self-explanatory. I have a scene with many cameras and I'd like to smoothly switch from one to another. I am not looking for a cross-fade effect, but rather for the camera to move and rotate in order to reach the next camera's point of view, and so on. To this end I have tried the following code:

        firstCamera.transform.position.x = Mathf.Lerp(firstCamera.transform.position.x, nextCamera.transform.position.x, Time.deltaTime*smooth);
        firstCamera.transform.position.y = Mathf.Lerp(firstCamera.transform.position.y, nextCamera.transform.position.y, Time.deltaTime*smooth);
        firstCamera.transform.position.z = Mathf.Lerp(firstCamera.transform.position.z, nextCamera.transform.position.z, Time.deltaTime*smooth);
        firstCamera.transform.rotation.x = Mathf.Lerp(firstCamera.transform.rotation.x, nextCamera.transform.rotation.x, Time.deltaTime*smooth);
        firstCamera.transform.rotation.z = Mathf.Lerp(firstCamera.transform.rotation.z, nextCamera.transform.rotation.z, Time.deltaTime*smooth);
        firstCamera.transform.rotation.y = Mathf.Lerp(firstCamera.transform.rotation.y, nextCamera.transform.rotation.y, Time.deltaTime*smooth);

    But the result is actually not that good.

    Read the article

  • 2D Skeletal Animation Transformations

    - by Brad Zeis
    I have been trying to build a 2D skeletal animation system for a while, and I believe that I'm fairly close to finishing. Currently, I have the following data structures:

        struct Bone {
            Bone *parent;
            int child_count;
            Bone **children;
            double x, y;
        };

        struct Vertex {
            double x, y;
            int bone_count;
            Bone **bones;
            double *weights;
        };

        struct Mesh {
            int vertex_count;
            Vertex **vertices;
            Vertex **tex_coords;
        };

    Bone->x and Bone->y are the coordinates of the end point of the Bone. The starting point is given by (bone->parent->x, bone->parent->y) or (0, 0). Each entity in the game has a Mesh, and Mesh->vertices is used as the bounding area for the entity. Mesh->tex_coords are texture coordinates. In the entity's update function, the position of the Bone is used to change the coordinates of the Vertices that are bound to it. Currently what I have is:

        void Mesh_update(Mesh *mesh) {
            int i, j;
            double sx, sy;

            for (i = 0; i < mesh->vertex_count; i++) {
                if (mesh->vertices[i]->bone_count == 0) {
                    continue;
                }
                sx = sy = 0;
                for (j = 0; j < mesh->vertices[i]->bone_count; j++) {
                    sx += (/* ??? */) * mesh->vertices[i]->weights[j];
                    sy += (/* ??? */) * mesh->vertices[i]->weights[j];
                }
                mesh->vertices[i]->x = sx;
                mesh->vertices[i]->y = sy;
            }
        }

    I think I have everything I need, I just don't know how to apply the transformations to the final mesh coordinates. What transformations do I need here? Or is my approach just completely wrong?
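
    An editor's sketch of what the missing terms usually are (standard linear blend skinning adapted to bones stored as 2D points; illustrative names, written in Java rather than C): for each influencing bone, build the rigid transform that maps the bone's bind-pose segment onto its current segment, apply it to the vertex's bind-pose position, and blend the results with the weights. Note that the bind-pose copies of bones and vertices are extra data the current structs do not hold yet.

        final class Skin2D {
            // Each bone segment is {startX, startY, endX, endY}; "bind" = the pose the mesh was authored in.
            static double[] skinVertex(double bindVx, double bindVy,
                                       double[][] bindBones, double[][] currentBones,
                                       double[] weights) {
                double outX = 0, outY = 0;
                for (int i = 0; i < weights.length; i++) {
                    double bindAngle = Math.atan2(bindBones[i][3] - bindBones[i][1],
                                                  bindBones[i][2] - bindBones[i][0]);
                    double curAngle  = Math.atan2(currentBones[i][3] - currentBones[i][1],
                                                  currentBones[i][2] - currentBones[i][0]);
                    double d = curAngle - bindAngle;               // how much this bone has rotated
                    double cos = Math.cos(d), sin = Math.sin(d);
                    // vertex position relative to the bone's start point in the bind pose
                    double lx = bindVx - bindBones[i][0];
                    double ly = bindVy - bindBones[i][1];
                    // rotate that offset with the bone and re-attach it to the bone's new start point
                    double x = currentBones[i][0] + lx * cos - ly * sin;
                    double y = currentBones[i][1] + lx * sin + ly * cos;
                    outX += weights[i] * x;                        // these are the "???" terms:
                    outY += weights[i] * y;                        //   bone-transformed bind-pose positions
                }
                return new double[] { outX, outY };
            }
        }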

    Read the article

  • How do I get a rotated sprite to move left or right?

    - by rphello101
    Using Java/Slick 2D, I'm using the mouse to rotate a sprite on the screen and the directional keys (in this case, WASD) to move the sprite. Forwards and backwards is easy: just position += cos(ang)*speed or position -= cos(ang)*speed. But how do I get the sprite to move left or right? I'm thinking it has something to do with adding 90 degrees to the angle or something. Any ideas?

    Rotation code:

        int mX = Mouse.getX();
        int mY = HEIGHT - Mouse.getY();
        int pX = sprite.x + sprite.image.getWidth()/2;
        int pY = sprite.y + sprite.image.getHeight()/2;
        double mAng;

        if(mX != pX){
            mAng = Math.toDegrees(Math.atan2(mY - pY, mX - pX));
            if(mAng == 0 && mX <= pX)
                mAng = 180;
        }
        else{
            if(mY > pY) mAng = 90;
            else mAng = 270;
        }

        sprite.angle = mAng;
        sprite.image.setRotation((float) mAng);

    And the movement code (delta is the change in time):

        Input input = gc.getInput();
        Vector2f direction = new Vector2f();
        Vector2f velocity = new Vector2f();

        direction.x = (float) Math.cos(Math.toRadians(sprite.angle));
        direction.y = (float) Math.sin(Math.toRadians(sprite.angle));
        if(direction.length() > 0)
            direction = direction.normalise();
        //On a separate note, what does this line of code do?

        velocity.x = (float) (direction.x * sprite.moveSpeed);
        velocity.y = (float) (direction.y * sprite.moveSpeed);

        if(input.isKeyDown(sprite.up)){
            sprite.x += velocity.x*delta;
            sprite.y += velocity.y*delta;
        }if (input.isKeyDown(sprite.down)){
            sprite.x -= velocity.x*delta;
            sprite.y -= velocity.y*delta;
        }if (input.isKeyDown(sprite.left)){
            //???
        }if (input.isKeyDown(sprite.right)){
            //???
        }
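
    The hunch in the question is right: strafing is the facing angle plus or minus 90 degrees. A small sketch (plain Java, no Slick types; which sign ends up visually left or right depends on the y-down screen coordinates, so swap the signs if it feels mirrored):

        final class Strafe {
            // returns {dx, dy} to add to the sprite position this frame
            static float[] strafeOffset(double facingDegrees, double moveSpeed, double delta, boolean right) {
                double rad = Math.toRadians(facingDegrees) + (right ? Math.PI / 2 : -Math.PI / 2);
                return new float[] {
                    (float) (Math.cos(rad) * moveSpeed * delta),
                    (float) (Math.sin(rad) * moveSpeed * delta)
                };
            }
        }

    As for the aside in the code: normalise() rescales the direction vector to length 1, so that moveSpeed alone controls how fast the sprite moves regardless of the angle.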

    Read the article

  • Implementing game rules in a tactical battle board game

    - by Setzer22
    I'm trying to create a game similar to what one would find in typical D&D board game combat. For more examples you could think of games like Advance Wars, Fire Emblem or Disgaea. I should say that I'm using design by component so far, but I can't find a nice way to fit components into the part I want to ask about.

    I'm struggling right now with the "game rules" logic - that is, the code that displays the menu, allows the player to select units and command them, then tells the unit game objects what to do given the player input. The best way I could think of handling this was using a big state machine, so everything that can be done in a "turn" is handled by this state machine, and the update code of this state machine does different things depending on the state. This approach, though, leads to a large amount of code (anything not model-related) going into one big class. Of course I can subdivide this big class into more classes, but it doesn't feel modular and upgradable enough.

    I'd like to know of better systems to handle this in order to be able to upgrade the game with new rules without having a monstrous if/else chain (or switch/case, for that matter). So, any ideas? I'd also like to ask that if you recommend a specific design pattern, please also provide some kind of example or further explanation, and don't stick to "Yeah you should use MVC and it'll work".
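
    One common shape for this, sketched below (the pattern only, not a drop-in ruleset; names are illustrative): model the turn as a stack of small state objects, each owning only its own update and input handling. New rules then become new state classes rather than new branches in one big switch.

        import java.util.ArrayDeque;
        import java.util.Deque;

        interface TurnState {
            void handleInput(Object input);   // e.g. cursor moves, confirm, cancel
            void update(float dt);
        }

        final class TurnStateMachine {
            private final Deque<TurnState> stack = new ArrayDeque<>();

            void push(TurnState s) { stack.push(s); }   // e.g. SelectUnit -> ChooseAction -> PickTarget
            void pop()             { stack.pop(); }     // "cancel" backs out one menu level
            void update(float dt)  { if (!stack.isEmpty()) stack.peek().update(dt); }
            void handleInput(Object input) { if (!stack.isEmpty()) stack.peek().handleInput(input); }
        }

    The orders themselves can be command objects (MoveCommand, AttackCommand, ...) produced by those states and executed against the model, which keeps new unit actions from touching the state machine at all and also makes replays and AI reuse easier.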

    Read the article

  • How can I fade something to clear instead of white?

    - by Raven Dreamer
    I've got an XNA game which essentially has "floating combat text": short-lived messages that display for a fraction of a second and then disappear. I've recently added a gradual "fade-away" effect, like so:

        public void Update()
        {
            color.A -= 10;
            position.X += 3;
            if (color.A <= 10) isDead = true;
        }

    where color is the Color the message displays as. This works as expected; however, it fades the messages to white, which is very noticeable on my indigo background. Is there some way to fade it to transparent, rather than white? Lerp-ing towards the background color isn't an option, as there's a possibility there will be something between the text and the background, which would simply be the inverse of the current problem.
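
    What is almost certainly happening: with premultiplied-alpha blending (SpriteBatch's default in XNA 4), lowering only the alpha byte leaves the RGB values at full strength, so the text blends toward its own bright RGB and reads as "fading to white". The fix is to scale all four channels by the same fade factor - in XNA, color = baseColor * fade does exactly that. A generic sketch of the same idea (illustrative, not XNA code):

        final class Fade {
            // fade goes from 1.0 (fully visible) to 0.0 (gone); scale every channel by it
            static int[] faded(int r, int g, int b, int a, float fade) {
                return new int[] {
                    Math.round(r * fade),
                    Math.round(g * fade),
                    Math.round(b * fade),
                    Math.round(a * fade)
                };
            }
        }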

    Read the article

  • Algorithmically generating neon layers on pixel grid

    - by user190929
    In an attempt at a screensaver I am making, I am a fan of neon-like graphics, which, of course, look great against a black background. As I understand it, neon, graphically speaking, is essentially a gradient of a color, brightest in the center, getting darker proceeding outward. More accurate is something similar, but separated into tubes and glow. The tubes are mostly white, while the glow is where most of the color is seen. Well... the tubes could also be a light variant of the color, you could say. The glow is darker. Anyhow, my question is, how could you generate such things given an initial pattern of pixels that would be the tubes? For example, let's say I want to make a neon 'H'. I, via the libraries, can obtain the rectangles of pixels which represent it, but I want to make it look neonized. How could I algorithmically achieve such an effect given a base tube shape and base color?

    EDIT: OK, I misstated that. Got a bit distracted. My purpose for this was similar to a neon effect, but not quite. Sorry about that. What I am looking for is something like this. Start with a pattern of pixels:

        [!][!][!][!][!][!][!][!]
        [!][!][O][!][!][!][!][!]
        [!][!][O][O][!][!][!][!]
        [!][!][!][!][O][!][!][!]
        [!][!][!][!][!][!][!][!]

    How do I find the E pixels?

        [!][E][E][E][!][!][!][!]
        [!][E][O][E][E][!][!][!]
        [!][E][O][O][E][E][!][!]
        [!][E][E][E][O][E][!][!]
        [!][!][!][E][E][E][!][!]

    Sorry if that looks bad.
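
    What the edit describes is a morphological dilation: the E cells are the empty cells with at least one O neighbour (including diagonals). A sketch with illustrative names - running it again on the combined result grows the glow outward one ring at a time, and assigning darker colours to later rings gives the neon-style falloff:

        final class Glow {
            // grid[y][x] == true where the tube ('O') pixels are
            static boolean[][] edgeRing(boolean[][] grid) {
                int h = grid.length, w = grid[0].length;
                boolean[][] ring = new boolean[h][w];
                for (int y = 0; y < h; y++) {
                    for (int x = 0; x < w; x++) {
                        if (grid[y][x]) continue;                  // already a tube pixel
                        neighbours:
                        for (int dy = -1; dy <= 1; dy++) {
                            for (int dx = -1; dx <= 1; dx++) {
                                int ny = y + dy, nx = x + dx;
                                if (ny >= 0 && ny < h && nx >= 0 && nx < w && grid[ny][nx]) {
                                    ring[y][x] = true;             // empty cell touching a tube cell -> 'E'
                                    break neighbours;
                                }
                            }
                        }
                    }
                }
                return ring;
            }
        }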

    Read the article

  • Drawing multiple Textures as a tilemap

    - by DocJones
    I am trying to draw a 2D game map and the objects on the map in a single pass. Here is my OpenGL initialization code:

        // Turn off unnecessary operations
        glDisable(GL_DEPTH_TEST);
        glDisable(GL_LIGHTING);
        glDisable(GL_CULL_FACE);
        glDisable(GL_STENCIL_TEST);
        glDisable(GL_DITHER);

        glEnable(GL_BLEND);
        glEnable(GL_TEXTURE_2D);

        // activate pointer to vertex & texture array
        glEnableClientState(GL_VERTEX_ARRAY);
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);

    My drawing code is being called by an NSTimer every 1/60 s. Here is the drawing code of my world object:

        - (void) draw:(NSRect)rect withTimedDelta:(double)d {
            GLint *t;
            glActiveTexture(GL_TEXTURE0);
            glBindTexture(GL_TEXTURE_2D, [_textureManager textureByName:@"blocks"]);
            glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
            for (int x=0; x<[_map getWidth] ; x++) {
                for (int y=0; y<[_map getHeight] ; y++) {
                    GLint v[] = {
                        16*x    ,16*y,
                        16*x+16 ,16*y,
                        16*x+16 ,16*y+16,
                        16*x    ,16*y+16
                    };
                    t=[_textureManager getBlockWithNumber:[_map getBlockAtX:x andY:y]];
                    glVertexPointer(2, GL_INT, 0, v);
                    glTexCoordPointer(2, GL_INT, 0, t);
                    glDrawArrays(GL_QUADS, 0, 4);
                }
            }
        }

    (_textureManager is a singleton that only loads a texture once!) The object drawing code is identical (except for the nested loops) in terms of OpenGL calls:

        - (void) drawWithTimedDelta:(double)d {
            GLint *t;
            GLint v[] = {
                16*xpos    ,16*ypos,
                16*xpos+16 ,16*ypos,
                16*xpos+16 ,16*ypos+16,
                16*xpos    ,16*ypos+16
            };
            glBindTexture(GL_TEXTURE_2D, [_textureManager textureByName:_textureName]);
            t=[_textureManager getBlockWithNumber:12];
            glVertexPointer(2, GL_INT, 0, v);
            glTexCoordPointer(2, GL_INT, 0, t);
            glDrawArrays(GL_QUADS, 0, 4);
        }

    As soon as my central drawing routine calls the two drawing methods, the second call overlays the first one. I would expect the call to world.draw to draw the map and "stamp" the objects upon it. Debugging shows me that the first call is performed correctly (the world is being drawn), but the following call to all objects ONLY draws the objects; the rest of the scene is getting black. I think I need to blend the drawn textures, but I can't seem to figure out how. Any help is appreciated. Thanks.

    PS: Here is the github link to the project. It may not be in sync with my post here, but for some more in-depth analysis it may help.

    Read the article

  • Limiting game loop to exactly 60 ticks per second (Android / Java)

    - by user22241
    So I'm having terrible problems with stuttering sprites. My rendering and logic take less than a game tick (16.6667 ms). However, although my game loop runs most of the time at 60 ticks per second, it sometimes goes up to 61 - when this happens, the sprites stutter. Currently, the variables used are:

        //Game updates per second
        final int ticksPerSecond = 60;
        //Amount of time each update should take
        final int skipTicks = (1000 / ticksPerSecond);

    This is my current game loop:

        @Override
        public void onDrawFrame(GL10 gl) {
            // TODO Auto-generated method stub
            //This method will run continuously
            //You should call both 'render' and 'update' methods from here
            //Set curTime initial value if '0'
            //Set/Re-set loop back to 0 to start counting again
            loops=0;

            while(System.currentTimeMillis() > nextGameTick && loops < maxFrameskip){

                SceneManager.getInstance().getCurrentScene().updateLogic();

                //Time correction to compensate for the missing .6667ms when using int values
                nextGameTick+=skipTicks;
                timeCorrection += (1000d/ticksPerSecond) % 1;
                nextGameTick+=timeCorrection;
                timeCorrection %=1;

                //Increase loops
                loops++;
            }
            render();
        }

    I realise that my skipTicks is an int and therefore will come out as 16 rather than 16.6667. However, I tried changing it (and ticksPerSecond) to longs but got the same problem. I also tried changing the timer to System.nanoTime() and skipTicks to 1000000000/ticksPerSecond, but everything just ran at about 300 ticks per second.

    All I'm attempting to do is to limit my game loop to 60 - what is the best way to guarantee that my game updates never happen more than 60 times a second? Please note, I do realise that very, very old devices might not be able to handle 60, although I really don't expect this to happen - I've tested it on the lowest device I have and it easily achieves 60 ticks. So I'm not worried about a device not being able to handle 60 ticks per second, but rather need to limit it - any help would be appreciated.
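
    A sketch of the usual fix (the fixed-timestep-with-accumulator pattern described in the well-known "Fix Your Timestep" write-up; names are illustrative): accumulate real elapsed time, run 1/60 s logic steps while a full step fits, and pass the leftover fraction to the renderer so sprites can be drawn interpolated between the previous and current logic state. The interpolation is what removes the visible stutter when the logic rate and the display rate don't divide evenly.

        final class FixedStepLoop {
            static final double STEP = 1.0 / 60.0;        // seconds per logic tick
            static final int MAX_STEPS_PER_FRAME = 5;     // avoid a spiral of death after a long stall

            private double accumulator = 0;
            private long lastNanos = System.nanoTime();

            void onDrawFrame() {
                long now = System.nanoTime();
                accumulator += (now - lastNanos) / 1_000_000_000.0;
                lastNanos = now;

                int steps = 0;
                while (accumulator >= STEP && steps < MAX_STEPS_PER_FRAME) {
                    updateLogic();                        // always advances exactly 1/60 s
                    accumulator -= STEP;
                    steps++;
                }
                render((float) (accumulator / STEP));     // 0..1 blend between previous and current state
            }

            void updateLogic() { /* game logic here */ }
            void render(float alpha) { /* draw previous state + alpha * (current - previous) */ }
        }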

    Read the article

  • Camera not staying behind model while moving in circle

    - by ChocoMan
    I have a camera behind a model (3rd person) and I'm having problems KEEPING it behind the model. When I first start my game, you see the back of the model. If the model moves forward, backward, or strafes left or right, the camera moves along accordingly. When the model rotates (stationary), the camera rotates accordingly, still pointing at the model's back. So far, so good.

    The problem comes when the player is BOTH moving and rotating at the same time. Take for example a model moving in a circular pattern, like running around a track. As the model moves in this motion, the model rotates slightly more with each complete rotation. Eventually, instead of looking at the model's back, you will see the model in a profile view, and before you know it, the model's front is facing the camera. And when you stop moving the model, the model stays in that position. So, as long as my model is stationary and rotating in one place, the camera rotates correctly. But as soon as there is any sort of movement while rotating, the model is offset by a mysterious, increasing amount. How can I keep the camera maintaining the same view no matter how I move AND rotate at the same time?

        // Rotates model and pitches camera on its own axis
        public void modelRotMovement(GamePadState pController)
        {
            /* For rotating the model left or right.
             * Camera maintains distance from model
             * throughout rotation and if model moves
             * to a new position.
             */
            Yaw = pController.ThumbSticks.Right.X * MathHelper.ToRadians(speedAngleMAX);
            AddRotation = Quaternion.CreateFromAxisAngle(Vector3.Up, yaw);
            //AddRotation = Quaternion.CreateFromYawPitchRoll(Yaw, 0, 0);
            ModelLoad.MRotation *= AddRotation;
            MOrientation = Matrix.CreateFromQuaternion(ModelLoad.MRotation);

            Pitch = pController.ThumbSticks.Right.Y * MathHelper.ToRadians(speedAngleMAX);
            AddPitch = Quaternion.CreateFromAxisAngle(Vector3.Up, pitch);
            ModelLoad.CRotation *= AddPitch;
            COrientation = Matrix.CreateFromQuaternion(ModelLoad.CRotation);
        }

        // Orbit (yaw) Camera around model
        public void cameraYaw(float yaw)
        {
            Vector3 yawAngle = ModelLoad.CameraPos - ModelLoad.camTarget;
            Vector3 axisYaw = Vector3.Up;
            ModelLoad.CameraPos = Vector3.Transform(yawAngle, Matrix.CreateFromAxisAngle(axisYaw, yaw)) + ModelLoad.camTarget;
        }
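
    A likely cause of this kind of drift is integrating the camera's rotation separately from the model's: two incremental rotations applied every frame slowly fall out of sync once the model both moves and turns. The robust fix is to not accumulate a camera rotation at all and instead recompute the camera from the model's current orientation every frame - essentially letting the existing CameraUpdate (objectToFollow.Backward/Up offsets) be the only thing that positions the camera, with no extra orbit applied on top. Conceptual sketch, vector math only (illustrative names):

        final class ChaseCamera {
            // model state, assumed to be unit vectors taken from the model's world matrix
            double[] modelPos      = {0, 0, 0};
            double[] modelBackward = {0, 0, 1};   // points out of the model's back
            double[] modelUp       = {0, 1, 0};
            double back = 10, up = 4;             // fixed offsets behind and above the model

            double[] cameraPos = new double[3];
            double[] cameraTarget = new double[3];

            void update() {
                for (int i = 0; i < 3; i++) {
                    // derived purely from the model's CURRENT orientation, so it can never drift
                    cameraPos[i] = modelPos[i] + modelBackward[i] * back + modelUp[i] * up;
                    cameraTarget[i] = modelPos[i];
                }
                // view = lookAt(cameraPos, cameraTarget, worldUp)
            }
        }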

    Read the article

  • What are the benefits of designing a KeyBinding relay?

    - by Adam Naylor
    The input system of Quake 3 is handled using a key-binding relay, whereby each keypress is matched against a 'binding' which is then passed to the CLI along with a timestamp of when the keypress (or release) occurred. I just wanted to get an idea from developers of what they consider to be the key benefits of designing an input system around this approach.

    One thing I don't particularly like is the appending of the timestamp to the bound command. This seems like a bit of a hack to bend the CLI into handling the game's input. Also, I feel that detecting the keypress only to add the command to a stream of text that gets parsed at a later date is a slightly latent way of responding to input (or is this unfounded?).

    The only real benefit I can see is that it allows you to bind 'complex' commands to keypresses, like 'switch weapon;+fire;' for example. Or maybe for journaling purposes? Thanks for any insights!

    Read the article

  • Good practices while working with multiple game engines, porting a game to a new engine

    - by Mahbubur R Aaman
    I have to work with multiple game engines, like:

    - Cocos2d
    - Unity3d
    - Galaxy

    While working with multiple game engines, what practices should I follow?

    EDIT: Is there any guideline to follow that would help anyone working with multiple game engines?

    EDIT: Say a game made with Cocos2d has done well on the App Store, and our target is then to port it to other platforms, so we turn to Unity3D. What should we do in that case?

    Read the article
