Search Results

Search found 25550 results on 1022 pages for 'mere development'.

Page 460/1022

  • Architecture of an action multiplayer game from scratch

    - by lcf
    Not sure whether this is the right place to ask (point me to a better one if it's not), but since what we're developing is a game, here it goes. This is a "real-time" action multiplayer game. I have familiarized myself with concepts like lag compensation, view interpolation and input prediction, and I have prepared a set of prototypes to confirm that I understood everything correctly. My question is about the situation where the game engine must be rewound to the past to find out whether there was a "hit" (sometimes this involves recomputing the whole world from that moment in the past up to the present). I already have a piece of code that does this, but it's not as neat as I need it to be. The domain logic of the app (the physics of the game) must be separated from the presentation (rendering) and the infrastructure tools (e.g. the specifics of talking to the remote server). How do I organize all this? Is there any solid open-source implementation I can take a look at? What I'm thinking is something like this:
        Render / User Input -> Game Engine (the so-called service layer) -> Processing User Commands & Remote Server -> Domain (Physics)
    How would you add to this scheme the concept of "ticks" or "interactions", with the possibility to rewind and recalculate "the game"? Remember, I cannot change the Domain/Physics, only the Game Engine. Should I store an array of "world states"? Should they be some representation of the world, optimized for this purpose somehow (how?), or should they be actual instances of the world (i.e. including behavior and all that)? Has anybody had similar experience? (I have never worked on a game before, if that matters.)
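
    One way to structure the rewind inside the game-engine layer, keeping the domain untouched, is to record a snapshot of the world plus the inputs applied on every tick, and re-run the domain from an old snapshot when a late event (such as a hit report) arrives. A minimal C# sketch, assuming the domain can capture and restore its own state; the IDomain/IWorldState/PlayerInput types below are hypothetical stand-ins, not part of the question:

        using System.Collections.Generic;

        public interface IWorldState { }                       // opaque snapshot of the domain (hypothetical)
        public interface IDomain                               // hypothetical wrapper around the physics
        {
            IWorldState Capture();                             // copy the current world state
            void Restore(IWorldState state);                   // overwrite the world with an old state
            void Step(IReadOnlyList<PlayerInput> inputs);      // advance one fixed tick
        }
        public sealed class PlayerInput { public int Tick; /* buttons, aim, ... */ }

        public sealed class RewindBuffer
        {
            private readonly IDomain domain;
            private readonly SortedDictionary<int, IWorldState> history = new SortedDictionary<int, IWorldState>();
            private readonly SortedDictionary<int, List<PlayerInput>> inputs = new SortedDictionary<int, List<PlayerInput>>();
            public int CurrentTick { get; private set; }

            public RewindBuffer(IDomain domain) { this.domain = domain; }

            public void Advance(List<PlayerInput> tickInputs)
            {
                history[CurrentTick] = domain.Capture();       // remember the state *before* this tick
                inputs[CurrentTick] = tickInputs;              // and the inputs applied on it
                domain.Step(tickInputs);
                CurrentTick++;
                // a real implementation would also trim entries older than the lag window
            }

            // Re-run the world from an authoritative past tick, e.g. after a late hit report.
            public void ResimulateFrom(int tick)
            {
                domain.Restore(history[tick]);
                for (int t = tick; t < CurrentTick; t++)
                    domain.Step(inputs[t]);
            }
        }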


  • How do I keep a 3D model on the screen in OpenGL?

    - by NoobScratcher
    I'm trying to keep a 3D model on the screen by placing my glDrawElements calls inside the draw function, with the declarations at the top of the .cpp file. When I render the model, it attaches itself to the current vertex buffer object. I suspect this is because my whole graphical user interface is drawn as 2D quads, except for the window frame. Is there a way to avoid this, or are there any common causes of it?
    Creating the file object:
        int index = IndexAssigner(1, 1);
        // make a file object and store the list and the index of that list in a C string
        ifstream file (list[index].c_str());
        // Make another string
        // string line;
        points.push_back(Point());
        Point p;
        int face[4];
    Model rendering code:
        int numfloats = 4;
        float* point = reinterpret_cast<float*>(&points[0]);
        int num_bytes = numfloats * sizeof(float);
        cout << "Size Of Point" << sizeof(Point) << endl;
        GLuint vertexbuffer;
        glGenVertexArrays(1, &vao[3]);
        glGenBuffers(1, &vertexbuffer);
        glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
        glBufferData(GL_ARRAY_BUFFER, points.size()*sizeof(points), points.data(), GL_STATIC_DRAW);
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, num_bytes, &points[0]);
        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(3, GL_FLOAT, points.size(), &points[0]);
        glEnableClientState(GL_INDEX_ARRAY);
        glIndexPointer(GL_FLOAT, faces.size(), faces.data());
        glEnableVertexAttribArray(0);
        glDrawElements(GL_QUADS, points.size(), GL_UNSIGNED_INT, points.data());
        glDrawElements(GL_QUADS, faces.size(), GL_UNSIGNED_INT, faces.data());


  • SRV from UAV on the same texture in DirectX

    - by notabene
    I'm programming GPGPU raymarching (volumetric raytracing) in DirectX 11. I successfully run a compute shader and save the raymarched volume data to a texture. Then I want to use the same texture as an SRV in the normal graphics pipeline, but it doesn't work: the texture is not visible. The texture itself is fine; when I save it to a file it is what I expect, and texture rendering is fine too, since rendering another SRV works. So the problem is only in the UAV-to-SRV step. I also triple-checked that the pointers are correct. Please help, I'm going mad over this. Here is some code.
    Before the dispatch:
        D3D11_TEXTURE2D_DESC textureDesc;
        ZeroMemory( &textureDesc, sizeof( textureDesc ) );
        textureDesc.Width = xr;
        textureDesc.Height = yr;
        textureDesc.MipLevels = 1;
        textureDesc.ArraySize = 1;
        textureDesc.SampleDesc.Count = 1;
        textureDesc.SampleDesc.Quality = 0;
        textureDesc.Usage = D3D11_USAGE_DEFAULT;
        textureDesc.BindFlags = D3D11_BIND_UNORDERED_ACCESS | D3D11_BIND_SHADER_RESOURCE;
        textureDesc.Format = DXGI_FORMAT_R32G32B32A32_FLOAT;
        D3D->CreateTexture2D( &textureDesc, NULL, &pTexture );

        D3D11_UNORDERED_ACCESS_VIEW_DESC viewDescUAV;
        ZeroMemory( &viewDescUAV, sizeof( viewDescUAV ) );
        viewDescUAV.Format = DXGI_FORMAT_R32G32B32A32_FLOAT;
        viewDescUAV.ViewDimension = D3D11_UAV_DIMENSION_TEXTURE2D;
        viewDescUAV.Texture2D.MipSlice = 0;
        D3DD->CreateUnorderedAccessView( pTexture, &viewDescUAV, &pTextureUAV );
    The getSRV function, called after the dispatch:
        D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc;
        ZeroMemory( &srvDesc, sizeof( srvDesc ) );
        srvDesc.Format = DXGI_FORMAT_R32G32B32A32_FLOAT;
        srvDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
        srvDesc.Texture2D.MipLevels = 1;
        D3DD->CreateShaderResourceView( pTexture, &srvDesc, &pTextureSRV );


  • C# Cursor stuck on busy state

    - by Ben
    I implemented a fixed time step loop for my C# game. All it does at the moment is make a square bounce around the screen. The problem is that when I run the program, the window won't let me close it and the cursor is stuck on the busy icon, so I have to go into Visual Studio and stop the program manually. Here's the loop at the moment:
        public void run() {
            int updates = 0;
            int frames = 0;
            double msPerTick = 1000.0 / 60.0;
            double threshhold = 0;
            long lastTime = getCurrentTime();
            long lastTimer = getCurrentTime();
            while (true) {
                long currTime = getCurrentTime();
                threshhold += (currTime - lastTime) / msPerTick;
                lastTime = currTime;
                while (threshhold >= 1) {
                    update();
                    updates++;
                    threshhold -= 1;
                }
                this.Refresh();
                frames++;
                if ((getCurrentTime() - lastTimer) >= 1000) {
                    this.Text = updates + " updates and " + frames + " frames per second";
                    updates = 0;
                    frames = 0;
                    lastTimer += 1000;
                }
            }
        }
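
    For reference, the busy cursor and the unclosable window are what Windows shows when the UI thread never returns to the message pump, which is exactly what a while (true) loop inside run() does. One common WinForms workaround is to tick the game whenever the message queue is empty, via Application.Idle and a PeekMessage check. A hedged sketch of that pattern; the form and update() names are stand-ins for the question's code:

        using System;
        using System.Runtime.InteropServices;
        using System.Windows.Forms;

        public class GameForm : Form
        {
            public GameForm()
            {
                // Tick the game whenever the message queue is empty, instead of
                // blocking the UI thread in a while (true) loop.
                Application.Idle += (sender, e) =>
                {
                    while (IsMessageQueueEmpty())
                    {
                        update();      // game logic, as in the question
                        Refresh();     // repaint the form
                    }
                };
            }

            private void update() { /* move the square, etc. */ }

            [StructLayout(LayoutKind.Sequential)]
            private struct NativeMessage
            {
                public IntPtr Handle;
                public uint Message;
                public IntPtr WParam;
                public IntPtr LParam;
                public uint Time;
                public System.Drawing.Point Location;
            }

            [DllImport("user32.dll")]
            private static extern bool PeekMessage(out NativeMessage message, IntPtr handle,
                                                   uint filterMin, uint filterMax, uint flags);

            private static bool IsMessageQueueEmpty()
            {
                NativeMessage message;
                return !PeekMessage(out message, IntPtr.Zero, 0, 0, 0);
            }
        }

    A cruder fallback is to keep the existing loop and call Application.DoEvents() once per outer iteration so the queue still gets pumped, or to move the loop onto a background thread and marshal any control access back to the UI thread.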


  • Permanently Sync a wiimote with a computer

    - by Adam Geisweit
    I have tried many ways to sync my Wiimotes with my computer so that I can program games with them, but every time it only syncs them temporarily, or if a tool claims it can pair permanently, it doesn't actually do it. It gets tiresome having to reconnect every time I turn the Wiimote off to save battery. How can I pair my Wiimote with my computer so that after I turn it off, I can just hit any button and it will reconnect automatically?


  • Sorting objects before rendering

    - by dreta
    I'm trying to implement a scene graph, and all the articles I've come across talk about object sorting: you sort your objects by "material", for example. Until I sat down and started implementing it, I took this for granted because it made sense, but now I'm wondering what sorting actually changes. In my engine I have a manager for UBOs, which I use to store data shared between programs; at the moment that only involves time, the camera and projection matrices, and lights (I'm not worrying about managing which lights affect which objects at the moment). For each model I have to change the model-to-world matrix uniform anyway, and no sorting is going to change that. So is the jump from changing this matrix to also setting a material for each object really that bad? I vaguely remember reading somewhere that each time you change something in the pipeline it has to be flushed, which can cause performance issues. But since I'm setting a model-to-world matrix for every draw call anyway, why should I ever be concerned about this? Also, is there any information on whether changing a uniform or calling glBufferSubData is more (or less) expensive?
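
    For context on what sorting is meant to buy: per-object uniforms such as the model-to-world matrix are set for every draw regardless, so sorting does not help there; it helps by making draws that share a program, material or texture set adjacent, so the expensive binds happen once per group instead of once per object. A small, engine-agnostic C# sketch of that idea (the DrawCall fields are hypothetical):

        using System;
        using System.Collections.Generic;

        public struct DrawCall
        {
            public int ShaderId;      // usually the most expensive state to switch
            public int MaterialId;    // textures, uniform blocks
            public int MeshId;
            // the per-object model-to-world matrix is still uploaded per draw

            // Pack the expensive state into one sortable key: shader first, then material.
            public long SortKey => ((long)ShaderId << 32) | (uint)MaterialId;
        }

        public static class RenderQueue
        {
            public static void Submit(List<DrawCall> calls, Action<DrawCall, bool, bool> draw)
            {
                calls.Sort((a, b) => a.SortKey.CompareTo(b.SortKey));

                int lastShader = -1, lastMaterial = -1;
                foreach (var c in calls)
                {
                    bool bindShader = c.ShaderId != lastShader;
                    bool bindMaterial = bindShader || c.MaterialId != lastMaterial;
                    draw(c, bindShader, bindMaterial);   // caller binds only what actually changed
                    lastShader = c.ShaderId;
                    lastMaterial = c.MaterialId;
                }
            }
        }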


  • Best gui toolkit to use for creating 3D board game

    - by UserInteractive
    I have created a board game using Java and Swing, with GridLayout and various other APIs. It works properly, but the UI looks very plain. I would like a couple of animations, such as tilting the GridLayout at an angle, and there are pawns on the squares of the GridLayout that I want to animate when somebody clicks on them. I'm not sure of the right GUI toolkit for this. Swing repainting only goes so far and can't be used for much animation and graphics, and I realized after writing the game that Swing is probably not a good tool for games. Could anybody suggest a better framework that I can use from Eclipse with Java? I was thinking of JavaFX, or tools like Adobe Flash or Adobe AIR. Any suggestions?


  • Developing games using virtualization on macOS (or Linux) [on hold]

    - by zpinner
    From what I've seen, most of the gamedev tools and engines that can produce cross-platform games are not supported on the Mac: Havok/Project Anarchy, UDK and GameMaker, for example. Basically, the only options I found are Unity3D and MonoGame + Xamarin. Unity is nice and I've been playing with it for some time, but the free version is quite limited when it comes to shaders, which made me think that, as an indie developer, I might want more freedom to experiment with new things without paying for the expensive Unity license. I haven't tried MonoGame + Xamarin yet, and although XNA is a very nice game framework, I'd like the freedom to experiment and finish a game before paying for the IDE, which isn't possible with the current Xamarin business model. That leaves me thinking I have to go back to Windows, which I'd prefer to do only partially, if possible. I'd like to avoid Boot Camp, since it's a pain to reboot when switching OS and that would probably push me into becoming a 100% Windows user. Is anyone actually developing a game using virtualization solutions like Parallels or VMware Fusion? How was your experience?


  • How can I select an audio output device in DirectShow?

    - by Vibhore Tanwer
    I was wondering how I can select the output device for audio in DirectShow. I am able to enumerate the available audio output devices, but how can I make one of them the audio output device? It always goes to the default audio device, and I want to output audio to a device of my choice. I have been searching Google but couldn't find anything useful; all I could find was this link, and it doesn't really solve my problem. Any help would be appreciated.


  • How important is a single-player mode in a 2-player game?

    - by Davy8
    Say you have a two-player game, taking chess as an example (except it's an original game with no ready-to-go AI available). Let's say there's also a social aspect to the meta-game, so it's a chess-like game on Facebook where you can challenge your friends. How important is it to have a single-player mode, knowing that an AI will need to be created (I've written a minimax AI for tic-tac-toe, but nothing too sophisticated)? Is it important enough that it should be in the initial launch of the game, or can it wait for a future iteration (given that being hosted on the web means the game can be updated at any time)?


  • Parenting OpenGL with Groups in LibGDX

    - by Rudy_TM
    I am trying to make an object a child of a Group, but this object has a draw method that calls OpenGL to draw on the screen. Its class is this:
        public class OpenGLSquare extends Actor {
            private static final ImmediateModeRenderer renderer = new ImmediateModeRenderer10();
            private static Matrix4 matrix = null;
            private static Vector2 temp = new Vector2();

            public static void setMatrix4(Matrix4 mat) {
                matrix = mat;
            }

            @Override
            public void draw(SpriteBatch batch, float arg1) {
                renderer.begin(matrix, GL10.GL_TRIANGLES);
                renderer.color(color.r, color.g, color.b, color.a);
                renderer.vertex(x0, y0, 0f);
                renderer.color(color.r, color.g, color.b, color.a);
                renderer.vertex(x0, y1, 0f);
                renderer.color(color.r, color.g, color.b, color.a);
                renderer.vertex(x1, y1, 0f);
                renderer.color(color.r, color.g, color.b, color.a);
                renderer.vertex(x1, y1, 0f);
                renderer.color(color.r, color.g, color.b, color.a);
                renderer.vertex(x1, y0, 0f);
                renderer.color(color.r, color.g, color.b, color.a);
                renderer.vertex(x0, y0, 0f);
                renderer.end();
            }
        }
    In my screen class I have this (called from the constructor):
        MyGroupClass spriteLab = new MyGroupClass(spriteSheetLab);
        OpenGLSquare square = new OpenGLSquare();
        square.setX0(100);
        square.setY0(200);
        square.setX1(400);
        square.setY1(280);
        square.color.set(Color.BLUE);
        square.setSize();
        //spriteLab.addActorAt(0, clock);
        spriteLab.addActor(square);
        stage.addActor(spriteLab);
    And in the screen's render method I have:
        @Override
        public void render(float arg0) {
            this.gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
            stage.draw();
            stage.act(Gdx.graphics.getDeltaTime());
        }
    The problem is that when I use OpenGL with a parent, it resets all the other children to position 0,0, and the OpenGL renderer paints the square at an absolute screen position instead of relative to the parent. I tried using batch.enableBlending() and batch.disableBlending(); that fixes the position problem of the other children, but not the relative position of the OpenGL drawing, and it also applies alpha to the GL drawing. What am I doing wrong?


  • Check for bodies within a specific circle in Box2D

    - by ltjax
    I'm trying to find positions at which to insert new bodies into my world. For that, I'd like a "free" spot where the new body wouldn't overlap with anything else, so my plan is to sample "random" positions and check whether they overlap with the "potential" new body. Since my bodies are always circular, I need to test within a given circle. So far, the only way to do this with Box2D seems to be to call b2World::QueryAABB around my circle and manually run an overlap test against all the fixtures it gives me (Box2D doesn't even seem to let me tap into its own overlap tests?!). It seems like Box2D should already provide this functionality; is there a way to do it without reinventing most of the wheel?
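
    If it helps, the per-fixture test after the b2World::QueryAABB broad phase reduces to a plain distance check when everything involved is a circle; a tiny helper along those lines (plain C#, independent of any particular Box2D binding):

        // Minimal circle-vs-circle overlap test used to filter the fixtures returned
        // by a broad-phase AABB query. Pure math, no physics-library calls.
        public static class CircleOverlap
        {
            public static bool Overlaps(float ax, float ay, float ar,
                                        float bx, float by, float br)
            {
                float dx = bx - ax, dy = by - ay;
                float r = ar + br;
                return dx * dx + dy * dy <= r * r;   // compare squared distances, no sqrt needed
            }
        }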


  • 2D Animation Smoothness - Delta time vs. Kinematics

    - by viperld002
    I'm animating a sprite in 2D with keyframes for rotation and x/y position. I recently had a discussion with someone who said that when the device (an iPad, using cocos2d) hits a performance bump because of whatever else the user may be doing, lag will appear, and that the best way to fight it is to not store actual positions but velocities, accelerations and torques, and use kinematics: evaluate the positions and rotations from those rates at the current point in time. I've never heard of using kinematics to hide lag in 2D animation and am not sure how effective it could be; it also seems like overkill. The application is not networked, so everything runs on the local device. The desired effect is that the animation always plays as closely as possible to the target frame rate. Wouldn't this technique suffer from the same problems as just using the time since the last frame or a fixed time step, since the kinematics would also need some time value to perform the calculation? What techniques would you suggest to best achieve the desired effect?
    EDIT 1: Thank you for your responses, they are very illuminating. I want to clarify my question before choosing an answer, to make sure this post really serves its purpose. I have a sprite of a ball and a text file with three arrays of information (rotation, x translations, y translations), where each unit of information is a keyframe to be stepped through (0 to 49 and back to 0 to replay). I play this back by interpolating from the current keyframe to the next every n units of time. The animation is visibly correct when compared to a video I was given of it, and it is smooth because of the interpolation between keyframes. That is the existing state of the project: there is no physics simulation, only a static animation of a ball moving the way an artist designed it. Should I, instead of rotations in degrees and translations as positions in space, derive velocities, accelerations and torques and express this static animation as a function of time, i.e. position now = foo(time now), where foo uses kinematics?
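
    For comparison, the usual way to keep a keyframed animation smooth under frame-rate hiccups, without switching to velocities and torques, is a fixed update step with an accumulator plus interpolation of the rendered pose between the last two steps. A minimal C# sketch; Pose, SampleKeyframes and Render are hypothetical stand-ins for the ball animation described above:

        // Fixed-timestep loop with render interpolation: the keyframe track is
        // advanced in constant steps, and rendering blends between the last two
        // sampled poses so a slow frame never distorts the animation timing.
        public sealed class FixedStepAnimator
        {
            const float Step = 1f / 60f;      // fixed simulation step, in seconds
            float accumulator;
            float animTime;                    // time along the keyframe track
            Pose previous, current;

            public void Frame(float realDeltaSeconds)
            {
                accumulator += realDeltaSeconds;
                while (accumulator >= Step)
                {
                    previous = current;
                    animTime += Step;
                    current = SampleKeyframes(animTime);   // rotation + x/y from the key frames
                    accumulator -= Step;
                }
                float alpha = accumulator / Step;          // how far we are into the next step
                Render(Pose.Lerp(previous, current, alpha));
            }

            // --- hypothetical stand-ins ---
            public struct Pose
            {
                public float X, Y, Rotation;
                public static Pose Lerp(Pose a, Pose b, float t) => new Pose
                {
                    X = a.X + (b.X - a.X) * t,
                    Y = a.Y + (b.Y - a.Y) * t,
                    Rotation = a.Rotation + (b.Rotation - a.Rotation) * t,
                };
            }
            Pose SampleKeyframes(float t) { /* interpolate the 50 key frames here */ return current; }
            void Render(Pose p) { /* draw the sprite */ }
        }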


  • How can I get a rotation vector from a Matrix4x4 in XNA?

    - by mr.Smyle
    I want to get a rotation vector from a matrix to implement a parent-child system for models:
        Matrix bonePos = link.Bone.Transform * World;
        Matrix m = Matrix.CreateTranslation(link.Offset)
                 * Matrix.CreateScale(link.gameObj.Scale.X, link.gameObj.Scale.Y, link.gameObj.Scale.Z)
                 * Matrix.CreateFromYawPitchRoll(MathHelper.ToRadians(link.gameObj.Rotation.Y),
                                                 MathHelper.ToRadians(link.gameObj.Rotation.X),
                                                 MathHelper.ToRadians(link.gameObj.Rotation.Z))
                 // need the rotation vector from the bone matrix here (for now it's the global model rotation vector)
                 * Matrix.CreateFromYawPitchRoll(MathHelper.ToRadians(Rotation.Y),
                                                 MathHelper.ToRadians(Rotation.X),
                                                 MathHelper.ToRadians(Rotation.Z))
                 * Matrix.CreateTranslation(bonePos.Translation);
        link.gameObj.World = m;
    where link is a struct with the child model's settings (position, rotation, etc.) and link.Bone is the parent bone.
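
    One way to sidestep extracting Euler angles entirely is to take the rotation part of the bone matrix as a quaternion and compose with it directly, using XNA's Quaternion.CreateFromRotationMatrix and Matrix.CreateFromQuaternion. A hedged sketch of that substitution, mirroring the snippet above and assuming the bone transform carries no scale (use Matrix.Decompose first if it does):

        Matrix bonePos = link.Bone.Transform * World;

        // Strip the translation so only the rotational part remains, then grab it as a quaternion.
        Matrix boneRotationOnly = bonePos;
        boneRotationOnly.Translation = Vector3.Zero;
        Quaternion boneRotation = Quaternion.CreateFromRotationMatrix(boneRotationOnly);

        Matrix m = Matrix.CreateTranslation(link.Offset)
                 * Matrix.CreateScale(link.gameObj.Scale.X, link.gameObj.Scale.Y, link.gameObj.Scale.Z)
                 * Matrix.CreateFromYawPitchRoll(MathHelper.ToRadians(link.gameObj.Rotation.Y),
                                                 MathHelper.ToRadians(link.gameObj.Rotation.X),
                                                 MathHelper.ToRadians(link.gameObj.Rotation.Z))
                 * Matrix.CreateFromQuaternion(boneRotation)      // parent rotation, no Euler extraction
                 * Matrix.CreateTranslation(bonePos.Translation);

        link.gameObj.World = m;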


  • Marmalade SDK views navigation concepts

    - by Mina Samy
    I want to develop a simple multi-activity Android game using the Marmalade SDK. I looked at the SDK samples and found that most of them are single-view (single Android Activity) apps. The navigation model in Android is stack-like: views are pushed and popped. I can't find out whether Marmalade has a similar model or uses a different approach. Could you please explain this, and point me to tutorials and samples for navigation in Marmalade if possible? Thanks.


  • Slerping rotation mirrors

    - by Esa
    I rotate my game character to look at the target using the following code:
        transform.rotation = Quaternion.Slerp(startQuaternion, lookQuaternion, turningNormalizer * turningSpeed / 10f);
    startQuaternion is the character's current rotation when a new target is given. lookQuaternion is the direction the character should look at, and it's set like this:
        destinationVector = currentWaypoint.transform.position - transform.position;
        lookQuaternion = Quaternion.LookRotation(destinationVector, Vector3.up);
    turningNormalizer is just an incremented Time.deltaTime, and turningSpeed is a static value set in the editor. The problem is that while the character usually turns as it should, it has problems when it has to turn close to 180 degrees: it sometimes jitters and mirrors the rotation. In the (poorly drawn) image, the character on the right starts to turn towards the circle on the left. Instead of simply turning through the left or the right, it starts a "mirror dance": it begins rotating towards the new facing, then suddenly snaps to the same angle on the other side and keeps rotating. It keeps "mirroring" like this until it is looking at the target. Is this an issue with quaternions, with slerping/lerping, or something else?
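
    One common way to avoid the flip is to interpolate from the character's current rotation every frame, rather than from a stored startQuaternion with a growing t, for example with Quaternion.RotateTowards for a constant angular speed. A hedged Unity-style sketch:

        using UnityEngine;

        public class FaceTarget : MonoBehaviour
        {
            public Transform target;
            public float turningSpeed = 180f;   // degrees per second

            void Update()
            {
                Vector3 toTarget = target.position - transform.position;
                if (toTarget.sqrMagnitude < 0.0001f) return;

                Quaternion look = Quaternion.LookRotation(toTarget, Vector3.up);
                // Rotate from the *current* orientation by a bounded amount each frame,
                // instead of re-slerping from a stored start rotation with a growing t.
                transform.rotation = Quaternion.RotateTowards(transform.rotation, look,
                                                              turningSpeed * Time.deltaTime);
            }
        }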


  • Adaptive Characters: AI Solution Needs a Problem

    - by Roger F. Gay
    Have sophisticated adaptive programming, will travel, so to speak. I'm part of a group that developed sophisticated learning/adaptive software for robotics. The system "thinks" via its simulator, building and adapting code on its own, and then carries out the best solution; it can also adapt to new situations. http://mensnewsdaily.com/2007/05/16/robobusiness-robots-with-imagination/ It's easy to imagine using it for automated game characters that adapt to the player's moves and style; the easiest example would be fighting. The more the simulated fighter fights the human player, the more it learns to counter that player's fighting skills. But there should be more. Does anyone have ideas for how adaptive characters might be interesting in games?


  • How do I simulate the mouse and keyboard using C# or C++?

    - by Art
    I want to start developing for Kinect, but the hardest part is how to send keyboard and mouse input to any application. In a previous question I was advised to develop my own driver for these devices, but that will take a while. I imagine an application acting like a gateway that can translate SendMessage calls into system-wide input, or a driver application with an API for sending such input. So I wonder: are there drivers or simulators that can be used from C# or C++? Small edit: SendMessage, PostMessage and keybd_event only work on Windows applications with an ordinary message loop, so I need a driver application that works at a low (kernel) level.
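
    For completeness, one user-mode option not listed above is SendInput, which injects events into the system-wide input stream (they go to whatever window currently has the focus); it is not the kernel-driver route the question asks about, but it is the usual C# approach. A hedged P/Invoke sketch for keyboard input; VK codes and error handling are left out:

        using System;
        using System.Runtime.InteropServices;

        static class KeyboardInjector
        {
            [StructLayout(LayoutKind.Sequential)]
            struct MOUSEINPUT { public int dx, dy; public uint mouseData, dwFlags, time; public IntPtr dwExtraInfo; }

            [StructLayout(LayoutKind.Sequential)]
            struct KEYBDINPUT { public ushort wVk, wScan; public uint dwFlags, time; public IntPtr dwExtraInfo; }

            [StructLayout(LayoutKind.Explicit)]
            struct InputUnion
            {
                [FieldOffset(0)] public MOUSEINPUT mi;   // included so the union has the native size
                [FieldOffset(0)] public KEYBDINPUT ki;
            }

            [StructLayout(LayoutKind.Sequential)]
            struct INPUT { public uint type; public InputUnion u; }

            const uint INPUT_KEYBOARD = 1;
            const uint KEYEVENTF_KEYUP = 0x0002;

            [DllImport("user32.dll", SetLastError = true)]
            static extern uint SendInput(uint nInputs, INPUT[] pInputs, int cbSize);

            // Press and release a virtual-key code (e.g. 0x41 for 'A').
            public static void Tap(ushort virtualKey)
            {
                var down = new INPUT { type = INPUT_KEYBOARD };
                down.u.ki.wVk = virtualKey;

                var up = down;
                up.u.ki.dwFlags = KEYEVENTF_KEYUP;

                SendInput(2, new[] { down, up }, Marshal.SizeOf(typeof(INPUT)));
            }
        }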


  • How to remove a box2d body when collision happens?

    - by Ayham
    I'm still new to Java and Android programming, and I am having a lot of trouble removing an object when a collision happens. I looked around the web and found that I should never remove Box2D bodies during collision detection (in a contact listener); instead I should add my objects to an ArrayList, set a flag in the body's user data marking it for deletion, and do the actual removal in an update handler. So I did this. First I define two ArrayLists, one for the faces and one for the bodies:
        ArrayList<Sprite> myFaces = new ArrayList<Sprite>();
        ArrayList<Body> myBodies = new ArrayList<Body>();
    Then, when I create a face and connect it to its body, I add them to their ArrayLists like this:
        face = new AnimatedSprite(pX, pY, pWidth, pHeight, this.mBoxFaceTextureRegion);
        Body BoxBody = PhysicsFactory.createBoxBody(mPhysicsWorld, face, BodyType.DynamicBody, objectFixtureDef);
        mPhysicsWorld.registerPhysicsConnector(new PhysicsConnector(face, BoxBody, true, true));
        myFaces.add(face);
        myBodies.add(BoxBody);
    Now I add a contact listener and an update handler in onLoadScene like this:
        this.mPhysicsWorld.setContactListener(new ContactListener() {
            private AnimatedSprite face2;
            @Override
            public void beginContact(final Contact pContact) { }
            @Override
            public void endContact(final Contact pContact) { }
            @Override
            public void preSolve(Contact contact, Manifold oldManifold) { }
            @Override
            public void postSolve(Contact contact, ContactImpulse impulse) { }
        });
        scene.registerUpdateHandler(new IUpdateHandler() {
            @Override
            public void reset() { }
            @Override
            public void onUpdate(final float pSecondsElapsed) { }
        });
    My plan is to detect which two bodies collided in the contact listener by checking a flag in each body's user data, look up their indices in the ArrayLists, and finally remove the bodies in the update handler. My questions are: Am I using the ArrayLists correctly? How do I attach a flag to the user data (code, please)? I tried removing a body in the update handler but it still throws a NullPointerException, so what is the right way to add an update handler, and where should I add it? Any other advice would be great. Thanks in advance.


  • Making a game with responsive resolution

    - by alexandervrs
    I am making a game, and I want it to be resolution agnostic. My target resolution, i.e. where things look as intended, is 1600 x 900. My ideas are:
      - Keep the HUD fixed to the sides no matter what the resolution is; use one size of HUD graphics below a certain resolution and another above a certain large one.
      - Use large HD sprites/backgrounds whose dimensions are powers of 2, so they scale nicely.
      - Use the player's native resolution.
      - Scale the game area (not the HUD) to fit, zooming in somewhat and cropping the sides of the game area if necessary for widescreen (no stretching), but always filling the screen.
      - Have minimum and maximum resolution limits for very small and very large displays, where I would just change the resolution(?) or scale up/down to fit.
    What I am a bit confused about is which math formula to use to scale the game area correctly regardless of the aspect ratio, so it fully fits a squarer screen and clips a little at the sides on widescreen. Pseudocode would help as well. :)
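
    Since the question asks for pseudocode: the usual "fill and crop" math scales the 1600 x 900 design area uniformly by the larger of the two width/height ratios, then centers it, so the screen is always covered and the overflow on one axis is cropped. A small C# sketch (names are illustrative):

        public struct Viewport { public float Scale, OffsetX, OffsetY; }

        public static class ResolutionHelper
        {
            public const float DesignWidth = 1600f, DesignHeight = 900f;

            public static Viewport CoverScreen(float screenWidth, float screenHeight)
            {
                float scaleX = screenWidth / DesignWidth;
                float scaleY = screenHeight / DesignHeight;
                float scale = System.Math.Max(scaleX, scaleY);   // fill and crop; Min would letterbox instead

                // Center the scaled game area; a negative offset is the cropped overflow on that axis.
                float offsetX = (screenWidth  - DesignWidth  * scale) / 2f;
                float offsetY = (screenHeight - DesignHeight * scale) / 2f;

                return new Viewport { Scale = scale, OffsetX = offsetX, OffsetY = offsetY };
            }
        }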


  • HLSL problem with divide by homogeneous component

    - by Berend
    When I try to divide my position.z by my position.w in HLSL, the result is always 1.0f or higher. Is this a common problem, and if so why? When I divide position.x or position.y by w it works fine, but the divide for z gives a wrong result. I use my camera's view matrix and the same projection matrix as in the game, because I want to create a depth map from the camera's position. Can anybody explain what I'm doing wrong? Do I need a different view matrix?


  • Change alpha to a Frame in libgdx

    - by Rudy_TM
    I have this:
        batch.draw(currentFrame, x, y, this.parent.originX, this.parent.originY, this.parent.width, this.parent.height, this.scaleX, this.scaleY, this.rotation);
    I want to apply the alpha value the method receives, but there is no overload of SpriteBatch.draw that takes an alpha value. Is there some way to apply it? (I did it this way because these are animations and I wanted to control them; for my static sprites I use sprite.draw(SpriteBatch, alpha).) Thanks.


  • C# Collision test of a ship and asteroid, angle confusion

    - by Cherry
    We are trying to do collision detection between the ship and an asteroid; if it works, it should detect the collision N turns in advance. However, it gets confused between angles like 350 and 15 degrees, and it is not really working: sometimes the ship moves, sometimes it does not move at all, and it also does not shoot at the right time. How can we make the collision detection work, and how do we solve the angle confusion problem?
        // Get velocities of asteroid
        Console.WriteLine("lol");
        // If equation is between -2 and -3
        if (equation1a <= -2) {
            // Calculate no. of turns until the asteroid hits
            float turns_till_hit = dx / vx;

            // Calculate angle of asteroid
            float asteroid_angle_rad = (float)Math.Atan(Math.Abs(dy / dx));
            float asteroid_angle_deg = (float)(asteroid_angle_rad * 180 / Math.PI);
            float asteroid_angle = 0;

            // Calculate the angle depending on which quadrant the asteroid is in
            if (asteroid.Y > ship.Y && asteroid.X > ship.X) {
                asteroid_angle = asteroid_angle_deg;
            } else if (asteroid.Y < ship.Y && asteroid.X > ship.X) {
                asteroid_angle = (360 - asteroid_angle_deg);
            } else if (asteroid.Y < ship.Y && asteroid.X < ship.X) {
                asteroid_angle = (180 + asteroid_angle_deg);
            } else if (asteroid.Y > ship.Y && asteroid.X < ship.X) {
                asteroid_angle = (180 - asteroid_angle_deg);
            }

            // If the asteroid will hit within 50 turns
            if (turns_till_hit < 50) {
                float angle_between = 0;

                // Calculate the angle between ship and asteroid depending on quadrant
                if (asteroid.Y > ship.Y && asteroid.X > ship.X) {
                    angle_between = ship_angle - asteroid_angle;
                } else if (asteroid.Y < ship.Y && asteroid.X > ship.X) {
                    angle_between = (360 - Math.Abs(ship_angle - asteroid_angle));
                } else if (asteroid.Y < ship.Y && asteroid.X < ship.X) {
                    angle_between = ship_angle - asteroid_angle;
                } else if (asteroid.Y > ship.Y && asteroid.X < ship.X) {
                    angle_between = ship_angle - asteroid_angle;
                }

                // If the angle is negative, take its absolute value
                if (angle_between < 0) {
                    //angle_between %= 360;
                    angle_between = Math.Abs(angle_between);
                }

                // Calculate no. of turns needed to face the asteroid
                float turns_to_face = angle_between / 25;
                if (turns_to_face < turns_till_hit) {
                    float ship_angle_left = ShipAngle(ship_angle, "leftKey", 1);
                    float ship_angle_right = ShipAngle(ship_angle, "rightKey", 1);
                    float angle_between_left = Math.Abs(ship_angle_left - asteroid_angle);
                    float angle_between_right = Math.Abs(ship_angle_right - asteroid_angle);
                    if (angle_between_left < angle_between_right) {
                        leftKey = true;
                    } else if (angle_between_right < angle_between_left) {
                        rightKey = true;
                    }
                }

                if (angle_between > 0 && angle_between < 25) {
                    spaceKey = true;
                }
            }
        }


  • 2D SAT Collision Detection not working when using certain polygons (With example)

    - by sFuller
    My SAT algorithm falsely reports a collision with certain polygons. I believe this happens when a polygon does not contain a right angle. Here is a simple diagram of what is going wrong, and here is the problematic code:
        std::vector<vec2> axesB = polygonB->GetAxes();
        // loop over the axes of B
        for(int i = 0; i < axesB.size(); i++)
        {
            float minA, minB, maxA, maxB;
            polygonA->Project(axesB[i], &minA, &maxA);
            polygonB->Project(axesB[i], &minB, &maxB);
            float intervalDistance = polygonA->GetIntervalDistance(minA, maxA, minB, maxB);
            if(intervalDistance >= 0) return false; // collision not occurring
        }
    This function retrieves the axes from a polygon:
        std::vector<vec2> Polygon::GetAxes()
        {
            std::vector<vec2> axes;
            for(int i = 0; i < verts.size(); i++)
            {
                vec2 a = verts[i];
                vec2 b = verts[(i+1) % verts.size()];
                vec2 edge = b - a;
                axes.push_back(vec2(-edge.y, edge.x).GetNormailzed());
            }
            return axes;
        }
    This function returns the normalized vector:
        vec2 vec2::GetNormailzed()
        {
            float mag = sqrt( x*x + y*y );
            return *this/mag;
        }
    This function projects a polygon onto an axis:
        void Polygon::Project(vec2* axis, float* min, float* max)
        {
            float d = axis->DotProduct(&verts[0]);
            float _min = d;
            float _max = d;
            for(int i = 1; i < verts.size(); i++)
            {
                d = axis->DotProduct(&verts[i]);
                _min = std::min(_min, d);
                _max = std::max(_max, d);
            }
            *min = _min;
            *max = _max;
        }
    This function returns the dot product of the vector with another vector:
        float vec2::DotProduct(vec2* other)
        {
            return (x*other->x + y*other->y);
        }
    Could anyone point me in the right direction as to what could be causing this bug?
    Edit: I forgot this function, which gives me the interval distance:
        float Polygon::GetIntervalDistance(float minA, float maxA, float minB, float maxB)
        {
            float intervalDistance;
            if (minA < minB) {
                intervalDistance = minB - maxA;
            } else {
                intervalDistance = minA - maxB;
            }
            return intervalDistance; // A positive value indicates this axis can be separated.
        }
    Edit 2: I have recreated the problem in HTML5/JavaScript: Demo


  • How do I use OpenGL functions from multiple threads?

    - by Robert
    I'm writing a small game using OpenGL. I'm implementing basic networking and I'm facing a problem. I have a thread in my client socket class that checks for available data; when there is data, I raise an event like this:
        immutable int len = this.m_socket.receive(data);
        if(len > 0)
        {
            this.m_onDataEvent(data);
        }
    Then in my game class I have a function that handles and parses the data, like this:
        switch(msgId)
        {
            case ProtocolID.CharacterData:
                // Load terrain with OpenGL, character model...
    I'm not able to call OpenGL functions there, because my OpenGL context was created on a different thread. I really don't know how to solve this problem; I tried Google, but it's hard to find a solution. I'm using the D programming language, if that helps.
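
    The usual pattern is to never touch OpenGL from the socket thread at all: the data event only enqueues parsed messages, and the thread that owns the GL context drains the queue at the start of each frame. A small sketch of that hand-off, shown in C# for brevity (a D version would use a similar synchronized queue):

        using System.Collections.Concurrent;

        public sealed class NetworkMessage { public int MsgId; public byte[] Payload; }

        public sealed class MessagePump
        {
            private readonly ConcurrentQueue<NetworkMessage> pending = new ConcurrentQueue<NetworkMessage>();

            // Called from the socket thread (the onData event in the question).
            public void Post(NetworkMessage msg)
            {
                pending.Enqueue(msg);
            }

            // Called once per frame from the thread that owns the GL context.
            public void Drain(System.Action<NetworkMessage> handle)
            {
                NetworkMessage msg;
                while (pending.TryDequeue(out msg))
                    handle(msg);   // safe to load terrain/models here: we're on the GL thread
            }
        }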

