Search Results

Search found 19338 results on 774 pages for 'game loop'.


  • C++ and SDL: How would I add tile layers with my Area class as a singleton?

    - by Tony
    I'm trying to wrap my head around how to get this done, if at all possible. Basically I have an Area class, a Map class and a Tile class. My Area class is a singleton, and this is causing some confusion. I'm trying to draw in this order: Background / Tiles / Entities / Overlay Tiles / UI.

        void C_Application::OnRender() {
            // Fill the screen black
            SDL_FillRect(Surf_Screen, &Surf_Screen->clip_rect, SDL_MapRGB(Surf_Screen->format, 0x00, 0x00, 0x00));

            // Draw background

            // Draw tiles
            C_Area::AreaControl.OnRender(Surf_Screen, -C_Camera::CameraControl.GetX(), -C_Camera::CameraControl.GetY());

            // Draw entities
            for (unsigned int i = 0; i < C_Entity::EntityList.size(); i++) {
                if (!C_Entity::EntityList[i]) continue;
                C_Entity::EntityList[i]->OnRender(Surf_Screen);
            }

            // Draw overlay tiles
            // Draw UI

            // Update the Surf_Screen surface
            SDL_Flip(Surf_Screen);
        }

    It would be nice if someone could give a little input. Thanks.
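
    A minimal sketch (in C++) of one way to keep that order in a single place, assuming the singleton or a thin wrapper can render one named layer at a time; the Layer enum and renderLayer() are illustrative and not part of the original classes:

        #include <SDL/SDL.h>

        // Hypothetical fixed draw order: background -> tiles -> entities -> overlay -> UI.
        enum Layer { LAYER_BACKGROUND, LAYER_TILES, LAYER_ENTITIES, LAYER_OVERLAY, LAYER_UI, LAYER_COUNT };

        // Assumed hook, defined elsewhere: draws everything belonging to one layer.
        void renderLayer(Layer layer, SDL_Surface* screen, int camX, int camY);

        void renderFrame(SDL_Surface* screen, int camX, int camY) {
            SDL_FillRect(screen, &screen->clip_rect, SDL_MapRGB(screen->format, 0x00, 0x00, 0x00));
            for (int layer = 0; layer < LAYER_COUNT; ++layer)
                renderLayer(static_cast<Layer>(layer), screen, camX, camY);  // order is enforced by the enum
            SDL_Flip(screen);
        }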

    Read the article

  • Player position triggering teleports

    - by jSherz
    I'm developing a Minecraft plugin (bukkit) in which a server admin can create 'portals' - a small region that will teleport any players who enter it. I have the teleportation sorted and I know how I could define areas that the player's position could be tested against. This would involve an ArrayList containing the zones and then hooking the PlayerMoveEvent so that the ArrayList is searched each time for a matching portal region. Although this method would work, I doubt that it would be very efficient when 100+ players are all moving around at the same time. Is there a better way of checking a player position against a set of 'zones' / regions?
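
    A rough sketch of the usual fix, written as plain C++ rather than Bukkit code: bucket the portal regions by chunk so a PlayerMoveEvent only has to check the handful of portals near the player instead of scanning the whole ArrayList. All names here are illustrative.

        #include <cstdint>
        #include <unordered_map>
        #include <vector>

        struct Portal { int minX, minY, minZ, maxX, maxY, maxZ; };

        // Pack the 16x16 chunk coordinates of a block position into one key.
        static std::uint64_t chunkKey(int x, int z) {
            return (static_cast<std::uint64_t>(static_cast<std::uint32_t>(x >> 4)) << 32)
                 |  static_cast<std::uint32_t>(z >> 4);
        }

        struct PortalIndex {
            std::unordered_map<std::uint64_t, std::vector<Portal>> buckets;

            void add(const Portal& p) {
                // A portal spanning several chunks would need inserting into each bucket;
                // small portals normally live in just one.
                buckets[chunkKey(p.minX, p.minZ)].push_back(p);
            }

            const Portal* find(int x, int y, int z) const {
                auto it = buckets.find(chunkKey(x, z));
                if (it == buckets.end()) return nullptr;   // no portals anywhere near here
                for (const Portal& p : it->second)
                    if (x >= p.minX && x <= p.maxX && y >= p.minY && y <= p.maxY &&
                        z >= p.minZ && z <= p.maxZ)
                        return &p;
                return nullptr;
            }
        };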

    Read the article

  • How to render a texture partly transparent?

    - by megamoustache
    Good morning StackOverflow, I'm having a bit of a problem right now, as I can't seem to find a way to render part of a texture transparently with OpenGL. Here is my setup: I have a quad, representing a wall, covered with this texture (converted to PNG for uploading purposes). Obviously, I want the wall to be opaque, except for the panes of glass. There is another plane behind the wall which is supposed to show a landscape. I want to see the landscape from behind the window. Each texture is a TGA with an alpha channel. The "landscape" is rendered first, then the wall. I thought that would be sufficient to achieve this effect, but apparently it's not the case. The part of the window that is supposed to be transparent is black, and the landscape only appears when I move past the wall. I tried to fiddle with glBlendFunc() after having enabled blending, but it doesn't seem to do the trick. Am I forgetting an important step? Thank you :)
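
    For reference, a minimal fixed-function sketch of the blending setup this kind of scene usually needs, assuming both textures carry an alpha channel; drawLandscape() and drawWall() are placeholders for the existing draw calls, not library functions:

        #include <GL/gl.h>

        void drawLandscape();   // opaque geometry, defined elsewhere
        void drawWall();        // the textured wall quad with the glass, defined elsewhere

        void drawScene() {
            glEnable(GL_DEPTH_TEST);
            drawLandscape();                                    // opaque pass first

            glEnable(GL_BLEND);
            glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);  // use the texture's alpha channel
            glDepthMask(GL_FALSE);                              // glass must not occlude what is behind it
            drawWall();                                         // blended pass, drawn after everything behind it
            glDepthMask(GL_TRUE);
            glDisable(GL_BLEND);
        }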

    Read the article

  • Help me choose an engine

    - by Gjorgji
    So far I've been trying to make an RTS in pygame, but I feel like 2D is not enough, and pygame has me doing a lot of things I would rather not. What I would like to work on is the AI, gameplay and such, without worrying too much about how to display things, physics, and the like. Unity has Boo, which is supposed to be similar to Python; I wonder if that could work. How similar is it to Python, and should I use it? Other options, as far as I can see, are the Ogre3D Python bindings and UDK. Which would best suit my needs?

    Read the article

  • Knock back an enemy based on the direction the sprite is facing

    - by pengume
    Hey everyone, today I am trying to make it so that if I hit the enemy, the enemy will be knocked backwards in the direction the sprite is facing. I am rotating the sprite through 360 degrees using a joystick on the screen, and I wanted to know the best practice or ways to accomplish this. I have come up with a few ideas, but none of them make use of the angle the sprite is facing; they just check whether I hit the bottom and then move him upward, and so forth. I am stumped on how to apply the sprite's angle to the enemy's x and y coordinates and move him accordingly. Has anyone tried this and have suggestions or things to look for? Thanks in advance.
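
    A small sketch of the angle-to-impulse step, assuming the facing angle is stored in degrees with 0 pointing along +X; flip the sin term if the screen's Y axis points down. The function name and parameters are illustrative.

        #include <cmath>

        // Push the enemy away along the direction the sprite is facing.
        void knockBack(float facingDegrees, float strength, float& enemyX, float& enemyY) {
            const float radians = facingDegrees * 3.14159265f / 180.0f;
            enemyX += std::cos(radians) * strength;
            enemyY += std::sin(radians) * strength;
        }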

    Read the article

  • Scaling sprite velocity / co-ordinates in Android

    - by user22241
    I'm trying to find the answer to a question that I've had for a long time, but am having trouble finding it! I hope someone can help :-) I'm trying to find information on how to scale sprite velocity / movement / co-ordinates. What I mean by this is: how do I get a sprite to move at the same speed relative to the screen size / DPI, so that it takes the same amount of real time to get from one side of the screen to the other? All of the posts about sprite scaling that I can find on the various forums relate to the size of the sprite. That part I'm OK with so far; it's just that when I move a sprite, it gets there at a different speed depending on the DPI / resolution of the device. I hope I'm making sense. This is the code I have so far. Instead of using explicit amounts, like 1, I'm using something like the following:

        platSpeedFloat = (1 * (dpi / 160)); // Use '1' so that on an MDPI screen the sprite will move by 1 physical pixel

    Then basically what I'm doing is something like this (all variables previously declared):

        platSpeedSave += platSpeedFloat;   // Add the platSpeedFloat value to the current platSpeedSave value
        platSpeed = (int) platSpeedSave;   // Cast to int so it can be checked in the following statement
        if (platSpeed == platSpeedSave) {  // Check the cast int value against the float value stored previously
            floorY = floorY - platSpeed;   // If they match then change the Y value
            platSpeedSave = 0;             // Reset
        }

    I would be grateful if someone could assist - I hope I'm making sense. The above doesn't seem to work; the sprite moves 'faster' on lower-DPI screens. Thanks
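
    One way to make this resolution-independent, sketched in C++ under the assumption that a per-frame delta time is available: keep the position as a float and express the speed as a fraction of the screen per second, so the sprite crosses every display in the same real time. The numbers are illustrative.

        struct Platform {
            float y = 0.0f;                    // position kept as a float, not an int
            float speedScreensPerSec = 0.25f;  // assumed value: crosses the screen in 4 seconds

            void update(float deltaSeconds, float screenHeightPixels) {
                // Pixels per second scale automatically with the actual screen height.
                y -= speedScreensPerSec * screenHeightPixels * deltaSeconds;
            }
        };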

    Read the article

  • How can I track a falling ball with a camera?

    - by Jason
    I have been trying to get my camera to follow a falling ball, but with no success. Here is the code:

        float cameraY = (FrustumHeight / 2) + ((ball.getPosition().y) / 2) - (FrustumHeight / 2);
        if (cameraY < FrustumHeight / 2)
            cameraY = FrustumHeight / 2;

        camera.position.set(0f, cameraY, 0f);
        Gdx.app.log("test", camera.position.toString());
        camera.update();
        camera.apply(Gdx.gl10);

        batch.setProjectionMatrix(camera.combined);
        batch.begin();
        batch.draw(backgroundRegion, camera.position.x - FrustumWidth / 2, -cameraY - (FrustumHeight / 2), 320, 480);
        batch.draw(ballTexture, (camera.position.x - FrustumWidth / 2) + ball.getPosition().x, -cameraY + ball.getPosition().y - (FrustumHeight / 2), 32, 32);

    I'm sure I am doing this completely wrong - what is the correct way to do this?
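
    Not libgdx code, but a sketch of the usual follow-and-clamp step: centre the camera on the ball and never let it drop below half the frustum height, then draw everything at plain world coordinates and let the camera's combined matrix do the shifting instead of subtracting cameraY in each draw call.

        #include <algorithm>

        // Returns the camera's Y for this frame given the ball's world Y.
        float followCameraY(float ballY, float frustumHeight) {
            return std::max(ballY, frustumHeight * 0.5f);
        }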

    Read the article

  • Determining whether two fast moving objects should be submitted for a collision check

    - by dreta
    I have a basic 2D physics engine running. It's pretty much a particle engine; it just uses basic shapes like AABBs and circles, so no rotation is possible. I have CCD implemented that can give an accurate TOI for two fast-moving objects, and everything is working smoothly. My issue now is that I can't figure out how to determine whether two fast-moving objects should even be checked against each other in the first place. I'm using a quad tree for spatial partitioning, and for each fast-moving object I check it against objects in each cell that it passes through. This works fine for determining collision with static geometry, but it means that any other fast-moving object that could collide with it, but isn't in any of the cells that are checked, is never considered. The only solutions I can think of are to either make the cells large enough and cross my fingers that this is enough, or to implement some sort of brute-force algorithm. Is there a proper way of dealing with this? Maybe somebody has solved this issue in an efficient manner, or maybe there's a better way of partitioning space that accounts for this?
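
    A sketch of the common broad-phase trick for this: insert each fast mover into the quad tree using its swept AABB (the box covering the whole frame's motion) instead of its current AABB, so two fast movers whose swept boxes share a cell show up as a candidate pair for the existing CCD test. Types and names are illustrative.

        #include <algorithm>

        struct AABB { float minX, minY, maxX, maxY; };

        // Expand the box to cover everywhere the object could be during this step.
        AABB sweptBounds(const AABB& box, float velX, float velY, float dt) {
            AABB swept = box;
            swept.minX = std::min(box.minX, box.minX + velX * dt);
            swept.minY = std::min(box.minY, box.minY + velY * dt);
            swept.maxX = std::max(box.maxX, box.maxX + velX * dt);
            swept.maxY = std::max(box.maxY, box.maxY + velY * dt);
            return swept;
        }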

    Read the article

  • CreateDXGIFactory Doesn't Let Program Exit

    - by smoth190
    I'm using CreateDXGIFactory to get the graphics adapters and display modes. When I call it, it works fine and I get all the data. However, when I exit my program, the main Win32 thread exits, but something stays open, because the debugger keeps running. Does CreateDXGIFactory create an extra thread that I'm not closing? I don't understand. The only thing I would suspect is that the documentation says it doesn't work if it's called from DllMain. It is in a DLL, but it's not called from DllMain. And it doesn't fail, either. I'm using DirectX 11.
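
    For comparison, a sketch of the enumeration with explicit COM cleanup: everything DXGI hands out is reference-counted, so each adapter and the factory itself want a Release() before shutdown, otherwise dxgi.dll keeps internal state alive after the main thread returns.

        #include <dxgi.h>
        #pragma comment(lib, "dxgi.lib")

        void listAdapters() {
            IDXGIFactory* factory = nullptr;
            if (FAILED(CreateDXGIFactory(__uuidof(IDXGIFactory), (void**)&factory)))
                return;

            IDXGIAdapter* adapter = nullptr;
            for (UINT i = 0; factory->EnumAdapters(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
                DXGI_ADAPTER_DESC desc;
                adapter->GetDesc(&desc);   // ...read outputs / display modes here...
                adapter->Release();        // release each adapter when done with it
            }
            factory->Release();            // and finally the factory itself
        }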

    Read the article

  • What is the best way to manage large 3D worlds (i.e. Minecraft style)?

    - by SomeXnaChump
    After playing Minecraft I was marvelling a bit at their large worlds, but at the same time finding it extremely slow to navigate, even with a quad core and a meaty graphics card. Now I assume it's fairly slow because:

    A) It's written in Java, and as most of the actual spatial partitioning and other memory management activities happen in there, it would be slower than a native C++ version.

    B) They are not partitioning their world very well.

    I could be wrong on both assumptions; however, it got me thinking about the best way to manage large worlds. As it is more of a true 3D world, where a block can exist in any part of the world, it is basically a big 3D array [x][y][z], where each block in the world has a type (i.e. BlockType.Empty = 0, BlockType.Dirt = 1, etc). Now I am assuming that to make this sort of world performant you would need to:

    a) Use a tree of some variety (oct/kd/bsp) to split all the cubes out; it seems like an oct/kd tree would be the better option, as you can partition on a per-cube level rather than a per-triangle level.

    b) Use some algorithm to work out whether the blocks within the scene can currently be seen, as blocks closer to the user could obscure the blocks behind them, making it pointless to render them.

    c) Keep the block objects themselves lightweight, so it is quick to add and remove them from the trees.

    I guess there is no right answer to this, but I would be interested to see people's opinions on the subject.
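
    As a point of reference, a sketch of the flat chunk layout most voxel engines settle on: fixed-size chunks of one byte per block, with meshes rebuilt lazily (exposed faces only) when a block in the chunk changes. The sizes and names are illustrative.

        #include <cstdint>

        constexpr int CHUNK = 16;

        struct Chunk {
            std::uint8_t blocks[CHUNK][CHUNK][CHUNK] = {};  // BlockType.Empty == 0
            bool meshDirty = true;

            void set(int x, int y, int z, std::uint8_t type) {
                blocks[x][y][z] = type;
                meshDirty = true;   // remesh this chunk later, not every frame
            }
        };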

    Read the article

  • IndexOutOfRangeException on World.Step after enabling/disabling a Farseer physics body?

    - by WilHall
    Earlier, I posted a question asking how to swap fixtures on the fly in a 2D side-scroller using the Farseer Physics Engine. The ultimate goal is that the player's physical body changes when the player is in different states (i.e. standing, walking, jumping, etc). After reading this answer, I changed my approach to the following:

    - Create a physical body for each state when the player is loaded.
    - Save those bodies and their corresponding states in parallel lists.
    - Swap those physical bodies out when the player state changes (which causes an exception, see below).

    The following is my function to change states and swap physical bodies:

        new protected void SetState(object nState)
        {
            // If mBody == null, the player is being loaded for the first time
            if (mBody == null)
            {
                mBody = mBodies[mStates.IndexOf(nState)];
                mBody.Enabled = true;
            }
            else
            {
                // Get the body for the given state
                Body nBody = mBodies[mStates.IndexOf(nState)];
                // Enable the new body
                nBody.Enabled = true;
                // Disable the current body
                mBody.Enabled = false;
                // Copy the current body's attributes to the new one
                nBody.SetTransform(mBody.Position, mBody.Rotation);
                nBody.LinearVelocity = mBody.LinearVelocity;
                nBody.AngularVelocity = mBody.AngularVelocity;
                mBody = nBody;
            }
            base.SetState(nState);
        }

    Using the above method causes an IndexOutOfRangeException when calling World.Step:

        mWorld.Step(Math.Min((float)nGameTime.ElapsedGameTime.TotalSeconds, (1f / 30f)));

    I found that the problem is related to changing the .Enabled setting on a body. I tried the above function without setting .Enabled, and no error was thrown. Turning on the debug views, I saw that the bodies were updating positions/rotations/etc properly when the state was changed, but since they were all enabled, they were just colliding wildly with each other. Does enabling/disabling a body remove it from the world's body list, which then causes the error because the list is shorter than expected?

    Update: For such a straightforward issue, I feel this question has not received enough attention. Has anyone else experienced this? Would anyone try a quick test case? I know this issue can be sidestepped - i.e. by not disabling a body during the simulation - but it seems strange that this issue would exist in the first place, especially when I see no mention of it in the documentation for Farseer or Box2D. I can't find any cases of the issue online where things are more or less kosher, like in my case. Any leads on this would be helpful.
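
    Not Farseer (or Box2D) API, but a general sketch of the workaround usually applied here: queue any enable/disable or body swap requested during the frame and apply the queue only after the physics step has returned, so the world's internal lists are never resized while Step() is iterating over them.

        #include <functional>
        #include <vector>

        std::vector<std::function<void()>> pendingBodyChanges;

        void requestBodyChange(std::function<void()> change) {
            pendingBodyChanges.push_back(std::move(change));   // e.g. a state-swap lambda
        }

        void stepWorld(float dt) {
            // world.Step(dt);  // the engine's actual step call goes here
            for (auto& change : pendingBodyChanges)
                change();       // now it is safe to enable/disable bodies
            pendingBodyChanges.clear();
        }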

    Read the article

  • Looking for feedback on design pattern for simple 2D environment

    - by Le Mot Juiced
    I'm working in iOS. I am trying to make a very simple 2D environment where there are some basic shapes you can drag around with your finger. These shapes should interact in various ways when dropped on each other, or when single-tapped versus double-tapped, etc. I don't know the name for the design pattern I'm thinking of. Basically, you have a bunch of arrays named after attributes, such as "double-tappable" or "draggable" or "stackable". You assign these attributes to the shapes by putting the shapes in the arrays. So, if there's a double-tap event, the code gets the location of it, then iterates through the "double-tappable" array to see if any of its members are in that location. And so on: every interactive event causes a scan through the appropriate array or arrays. It seems like that should work, but I'm wondering if there's a better pattern for the purpose.
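
    A sketch of that lookup with sets instead of arrays, so membership tests don't scan every shape; the names are illustrative and Shape stands in for whatever object type the app already uses.

        #include <string>
        #include <unordered_map>
        #include <unordered_set>

        struct Shape;   // the app's existing shape type

        struct AttributeTable {
            std::unordered_map<std::string, std::unordered_set<Shape*>> groups;

            void tag(Shape* s, const std::string& attribute) { groups[attribute].insert(s); }

            bool has(Shape* s, const std::string& attribute) const {
                auto it = groups.find(attribute);
                return it != groups.end() && it->second.count(s) != 0;
            }
        };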

    Read the article

  • How to build a "traffic AI"?

    - by Lunikon
    A project I am working on right now features a lot of "traffic" in the sense of cars moving along roads, aircraft moving around an apron, etc. As of now the available paths are precalculated, so nodes are generated automatically for crossings, which themselves are interconnected by edges. When a character/agent spawns into the world, it starts at some node and finds a path to a target node by means of a simple A* algorithm. The agent follows the path and ultimately reaches its destination. No problem so far. Now I need to enable the agents to avoid collisions and to handle complex traffic situations. Since I'm new to the field of AI, I looked up several papers/articles on steering behavior but found them to be too low-level. My problem consists less of the actual collision avoidance (which is rather simple in this case because the agents follow strictly defined paths) and more of situations like one agent leaving a dead end while another one wants to enter exactly the same one, or two agents meeting at a bottleneck which only allows one agent to pass at a time, while both need to pass it (according to the optimal route found before) and they need to find a way to let the other one pass first. So basically the main aspect of the problem would be predicting traffic movement to avoid deadlocks. Difficult to describe, but I guess you get what I mean. Do you have any recommendations on where to start looking? Any papers, sample projects or similar things that could get me started? I appreciate your help!
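
    One starting point, sketched generically in C++: treat narrow passages (dead-end mouths, bottlenecks) as resources that at most one agent can reserve at a time; anyone who fails to reserve either waits or re-plans, which prevents the head-on deadlock before it happens. The types are illustrative.

        #include <unordered_map>

        using EdgeId = int;
        using AgentId = int;

        struct EdgeReservations {
            std::unordered_map<EdgeId, AgentId> owner;

            bool tryReserve(EdgeId e, AgentId a) {
                auto it = owner.find(e);
                if (it != owner.end() && it->second != a) return false;  // someone else holds it
                owner[e] = a;
                return true;
            }

            void release(EdgeId e, AgentId a) {
                auto it = owner.find(e);
                if (it != owner.end() && it->second == a) owner.erase(it);
            }
        };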

    Read the article

  • STL for games, yea or nay?

    - by munificent
    Every programming language has its standard library of containers, algorithms, and other helpful stuff. With languages like C#, Java, and Python, it's practically inconceivable to use the language without its standard lib. Yet, on many C++ games I've worked on, we either didn't use the STL at all, used a tiny fraction of it, or used our own implementation. It's hard to tell if that was a sound decision for our games, or one simply made out of ignorance of the STL. So... is the STL a good fit or not?

    Read the article

  • OpenGL: Drawing to a texture

    - by Danran
    Well, I'm just a bit stuck wondering how to draw an item to a texture. Specifically, I'm using:

        glDrawArrays(GL_LINE_STRIP, indices[0], indices.size());

    Because what I'm drawing via the above call updates every frame, I'm just not sure how to go about drawing what I have into a texture. Any help is greatly appreciated! Edit: Unfortunately my graphics card doesn't support framebuffer objects :/ so I've been trying to get the copy-contents-from-backbuffer method working. Here's what I currently have: http://pastebin.com/dJpPt6Pd And sadly all I get is a white square. It's probably something stupid that I'm doing wrong. Just unsure what it could be?
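
    A sketch of the copy-from-backbuffer fallback when FBOs are unavailable: draw the line strip as usual, then copy that region of the framebuffer into an already-allocated texture before anything overwrites it. The texture is assumed to have been created beforehand with glTexImage2D at the same size.

        #include <GL/gl.h>

        void copyBackbufferToTexture(GLuint texture, int x, int y, int width, int height) {
            glBindTexture(GL_TEXTURE_2D, texture);
            // Replaces the texture's contents with the given framebuffer region.
            glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, x, y, width, height);
        }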

    Read the article

  • How do I make a dialog box? [on hold]

    - by bill
    By dialog box I mean that when the player talks to someone, a box shows up with text in it. I haven't found much about this topic online, so I created a basic dialog box:

        // In the dialog box I have only two methods
        public void createBox(int x, int y, int width, int height, String txt) {
            this.x = x;
            this.y = y;
            this.width = width;
            this.height = height;
            this.txt = txt;
        }

        // Draw the dialog box
        public void draw(Graphics2D g) {
            if (txt != null) {
                g.setColor(Color.red);
                g.drawRect(x, y, width, height);
                g.setColor(Color.black);
                g.fillRect(x, y, width, height);
                g.setColor(Color.white);
                g.drawString(txt, x + 10, y + 10);
            }
        }

    I wanted to know: how can I make this better?
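
    Not specific to the Java code above, but a small sketch (written in C++ here) of the usual next step for a dialog box: wrapping the text into lines that fit the box width so long strings don't spill out. charWidth is an assumed fixed glyph width; a real font metric would replace it.

        #include <cstddef>
        #include <string>
        #include <vector>

        std::vector<std::string> wrapText(const std::string& text, int boxWidth, int charWidth) {
            std::vector<std::string> lines;
            std::string current, word;
            const std::size_t maxChars = static_cast<std::size_t>(boxWidth / charWidth);
            for (std::size_t i = 0; i <= text.size(); ++i) {
                if (i == text.size() || text[i] == ' ') {
                    if (!current.empty() && current.size() + 1 + word.size() > maxChars) {
                        lines.push_back(current);   // the next word would overflow: start a new line
                        current.clear();
                    }
                    if (!current.empty()) current += ' ';
                    current += word;
                    word.clear();
                } else {
                    word += text[i];
                }
            }
            if (!current.empty()) lines.push_back(current);
            return lines;
        }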

    Read the article

  • Is this the correct way to use glTexCoordPointer?

    - by RubyKing
    Hey all, just trying to work out how to use the glTexCoordPointer function. Here is the man page: http://www.opengl.org/sdk/docs/man/xhtml/glTexCoordPointer.xml which states that I must set a pointer to the first element of the array that holds the texture coordinate. Here is my array:

        static const GLfloat GUIVertices[] = {
            // FIRST QUAD
            // x     y      z     w     X     Y
             1.0f,  1.0f,  0.0f, 1.0f, 1.0f, 0.0f,
            -1.0f,  1.0f,  0.0f, 1.0f, 0.0f, 0.0f,
            -1.0f,  0.94f, 0.0f, 1.0f, 0.0f, 1.0f,
             1.0f,  0.94f, 0.0f, 1.0f, 1.0f, 1.0f,

            // 2ND QUAD
            // x     y      z     w     X     Y
             1.0f, -1.0f,  0.0f, 1.0f, 1.0f, 0.0f,
            -1.0f, -1.0f,  0.0f, 1.0f, 0.0f, 0.0f,
            -1.0f, -0.94f, 0.0f, 1.0f, 0.0f, 1.0f,
             1.0f, -0.94f, 0.0f, 1.0f, 1.0f, 1.0f,
        };

    But how do I set the pointer correctly? Like this, for the fifth element of the second quad's first row?

        glTexCoordPointer(1, GL_FLOAT, 6, reinterpret_cast<const GLvoid*>(29 * sizeof(float)));

    Any help is appreciated.
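
    For the interleaved layout above, a sketch of how the two pointers are usually set, assuming the array is passed directly as a client-side pointer (no VBO bound): each vertex is 6 floats (x, y, z, w, s, t), so the stride is 6 * sizeof(float) in bytes and the texture coordinates start 4 floats in. With a bound VBO the last argument becomes a byte offset instead of a pointer.

        #include <GL/gl.h>

        void setupGuiPointers(const GLfloat* GUIVertices) {
            const GLsizei stride = 6 * sizeof(GLfloat);   // bytes between consecutive vertices

            glEnableClientState(GL_VERTEX_ARRAY);
            glVertexPointer(4, GL_FLOAT, stride, GUIVertices);        // x y z w

            glEnableClientState(GL_TEXTURE_COORD_ARRAY);
            glTexCoordPointer(2, GL_FLOAT, stride, GUIVertices + 4);  // s t
        }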

    Read the article

  • How to properly code in Unity? [on hold]

    - by Vincent B.
    I'm fairly new to Unity (though I've touched it and made a few prototypes with it) and I'd like to know how I'm supposed to work with it. I'm a programming student, so I'm used to C/C++ with SDL/SFML, writing code and only using input/graphics/network libraries. I followed a few Unity guides, and they were much more about drag & drop on scenes and a bit of scripting to activate it all, which disturbed me. So I found a way to use only one GameObject and a singleton to launch code and display things (for 2D games at least). At the end of the day I make games without using "Instantiate" or the like at all. Is that the right way? Am I supposed to do this? How much are your scenes populated (in a professional environment)? When should I stop coding and start using the editor?

    Read the article

  • Fog with Blend in OpenGL

    - by MhdAljobory
    I want to add fog to my scene, which contains transparent textures drawn with blending. When I enable the fog, the transparent textures appear white from a distance, but when I disable it, the textures look fine. What is the solution to the problem of whiteness? Fog code:

        GLfloat fogColor[4] = {0.5f, 0.5f, 0.5f, 1.0f};
        glClearColor(0.5f, 0.5f, 0.5f, 1.0f);
        glFogi(GL_FOG_MODE, GL_LINEAR);
        glFogfv(GL_FOG_COLOR, fogColor);
        glFogf(GL_FOG_DENSITY, 0.35f);
        glHint(GL_FOG_HINT, GL_DONT_CARE);
        glFogf(GL_FOG_START, 1.0f);
        glFogf(GL_FOG_END, 1000.0f);
        glEnable(GL_FOG);

    Screenshot
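
    One common cause, offered as an assumption: fully transparent texels are still being fogged, so they show up as the grey fog colour at a distance. A sketch of the usual fix is to discard those texels with the alpha test so only visible fragments receive fog, and to draw the blended geometry after the opaque scene:

        #include <GL/gl.h>

        void drawTransparentWithFog() {
            glEnable(GL_ALPHA_TEST);
            glAlphaFunc(GL_GREATER, 0.1f);   // drop near-transparent fragments entirely
            glEnable(GL_BLEND);
            glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

            // ...draw the transparent textured geometry here...

            glDisable(GL_BLEND);
            glDisable(GL_ALPHA_TEST);
        }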

    Read the article

  • Do I need a Point and a Vector object? Or is using a Vector object to represent a Point OK?

    - by JCM
    While structuring the components of an engine that I am developing along with a friend (for learning purposes), I came across this question. Initially we had a Point constructor, like the following:

        var Point = function( x, y ) {
            this.x = x;
            this.y = y;
        };

    But then we started to add some vector math to it, and then decided to rename it to Vector2d. Now some methods are a bit confusing (at least in my opinion), such as the following, which is used to make a line:

        // Before the renaming of Point to Vector2, the parameters were startingPoint and endingPoint
        Geometry.Line = function( startingVector, endingVector ) {
            //...
        };

    Should I make a specific constructor for the Point object, or are there no problems in defining a point as a vector? I know a vector has magnitude and direction, but I see so many people using a vector just to represent the position of an object.
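
    For what it's worth, a tiny sketch of the common compromise (written in C++ here, though the idea carries straight over to JavaScript): keep a single 2-component type and make the intent explicit with an alias, so line-construction code still reads as taking points while the maths lives in one place.

        struct Vector2 { float x, y; };
        using Point2 = Vector2;          // same data, clearer intent at call sites

        struct Line { Point2 start, end; };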

    Read the article

  • Why do I have to divide the origin of a quad by 4 instead of 2?

    - by vinzBad
    I'm currently transitioning from C#/XNA to C#/OpenTK, but I'm getting stuck on the basics. So I have this Sprite class:

        public static bool EnableDebugDraw = true;

        public float X;
        public float Y;
        public float OriginX = 0;
        public float OriginY = 0;
        public float Width = 0.1f;
        public float Height = 0.1f;
        public Color TintColor = Color.Red;
        float _layerDepth = 0f;

        public void Render()
        {
            Vector2[] corners = {
                new Vector2(X - OriginX, Y - OriginY),                  // top left
                new Vector2(X + Width - OriginX, Y - OriginY),          // top right
                new Vector2(X + Width - OriginX, Y + Height - OriginY), // bottom right
                new Vector2(X - OriginX, Y + Height - OriginY)          // bottom left
            };

            GL.Color3(TintColor);
            GL.Begin(BeginMode.Quads);
            {
                for (int i = 0; i < 4; i++)
                    GL.Vertex3(corners[i].X, corners[i].Y, _layerDepth);
            }
            GL.End();

            if (EnableDebugDraw)
            {
                GL.Color3(Color.Violet);
                GL.PointSize(3);
                GL.Begin(BeginMode.Points);
                {
                    for (int i = 0; i < 4; i++)
                        GL.Vertex2(corners[i]);
                }
                GL.End();

                GL.Color3(Color.Green);
                GL.Begin(BeginMode.Points);
                GL.Vertex2(X + OriginX, Y + OriginY);
                GL.End();
            }
        }

    With the following setup I try to set the origin of the quad to the middle of the quad:

        _sprite.OriginX = _sprite.Width / 2;
        _sprite.OriginY = _sprite.Height / 2;

    But this sets the origin to the upper-right corner of the quad, so I have to use:

        _sprite.OriginX = _sprite.Width / 4;
        _sprite.OriginY = _sprite.Height / 4;

    However, this is not the intended behaviour; could you advise me how to fix this?
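
    A sketch of the likely fix, offered as an interpretation: the corner maths already subtracts OriginX/OriginY, so with OriginX = Width / 2 the pivot sits exactly at (X, Y); drawing the debug marker at (X + OriginX, Y + OriginY) pushes it out to a corner, which is why dividing by 4 only appeared to work. In plain OpenGL terms (the snippet above is OpenTK, but the calls map one-to-one):

        #include <GL/gl.h>

        void drawPivotMarker(float x, float y) {
            glColor3f(0.0f, 1.0f, 0.0f);
            glPointSize(3.0f);
            glBegin(GL_POINTS);
            glVertex2f(x, y);   // the pivot, not (x + originX, y + originY)
            glEnd();
        }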

    Read the article

  • How to update a mesh position based on a pressed key?

    - by steven166
    I have a mesh loaded from a file, like a tiger mesh. At first it is located at position A; then if I press the left key it moves to position B, but the problem is that if I press the left key one more time, it has to move from position B to position C. That means the amount I want to move the mesh has to be based on its current position rather than the position where it was first rendered. I could do this if I had an array of vertices, since then I would just update the vertex buffer, but a mesh loaded from a file does not expose an array of vertices, so how do I do it? Can anybody help me, please?
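
    A sketch of the usual approach, independent of any particular mesh format: keep the mesh's position in a variable, change that variable on key presses, and build the world/model matrix from it every frame, so moves accumulate from wherever the mesh currently is and the vertex buffer is never touched.

        struct MeshInstance {
            float x = 0.0f, y = 0.0f, z = 0.0f;   // current position in world space

            void onLeftKey(float step)  { x -= step; }
            void onRightKey(float step) { x += step; }
        };

        // At render time (pseudocode for whichever graphics/math API is in use):
        //   worldMatrix = translation(instance.x, instance.y, instance.z);
        //   setWorldTransform(worldMatrix);   // then draw the unmodified loaded mesh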

    Read the article

  • Is it safe to run multiple XNA ContentManager instances on multiple threads?

    - by Boinst
    My XNA project currently uses one ContentManager instance and one dedicated background thread for loading all content. I wonder, would it be safe to have multiple ContentManager instances, each in its own dedicated thread, loading different content at the same time? I'm prompted to ask this question because this article makes the following statement: "If there are two textures created at the same time on different threads, they will clobber the other and you will end up with some garbage in the textures." I think what the author is saying here is that if I access one ContentManager simultaneously on two threads, I'll get garbage. But what if I have separate ContentManager instances for each thread? If no-one knows the answer already from experience, I'll go ahead and try it and see what happens.

    Read the article

  • Triangles in a C++ STL vector as an Objective-C member sometimes draw incorrectly in OpenGL ES

    - by Rahil627
    The polygons draw correctly 80% of the time. When it fails, a vertex is dislocated. The polygon is consistently drawn with the wrong vertex. I checked that the vector is correct during initialization, even when it's wrongly drawn. I'm using Cocos2d. The class member:

        @interface Polygon : CCSprite {
            std::vector<float> triangleVertices;
        }

    The draw function called in [Polygon draw]:

        + (void)drawTrianglesWithVertices:(const std::vector<float> &)v
        {
            //glEnableClientState(GL_VERTEX_ARRAY);
            glDisable(GL_TEXTURE_2D);
            glDisableClientState(GL_TEXTURE_COORD_ARRAY);
            glDisableClientState(GL_COLOR_ARRAY);

            glVertexPointer(2, GL_FLOAT, 0, &v[0]);
            glDrawArrays(GL_TRIANGLES, 0, v.size());

            //glDisableClientState(GL_VERTEX_ARRAY);
            glEnableClientState(GL_COLOR_ARRAY);
            glEnableClientState(GL_TEXTURE_COORD_ARRAY);
            glEnable(GL_TEXTURE_2D);
        }

    Any ideas?
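
    One thing worth checking, offered as an assumption based on the glVertexPointer(2, ...) call above: glDrawArrays takes a vertex count, not a float count, so passing v.size() reads past the end of the vector and can pick up a garbage vertex. A sketch of the draw with the count expressed in vertices:

        #include <vector>
        #include <OpenGLES/ES1/gl.h>

        static void drawTriangles(const std::vector<float>& v) {
            if (v.empty()) return;

            glDisable(GL_TEXTURE_2D);
            glDisableClientState(GL_TEXTURE_COORD_ARRAY);
            glDisableClientState(GL_COLOR_ARRAY);

            glVertexPointer(2, GL_FLOAT, 0, &v[0]);
            glDrawArrays(GL_TRIANGLES, 0, (GLsizei)(v.size() / 2));   // two floats per vertex

            glEnableClientState(GL_COLOR_ARRAY);
            glEnableClientState(GL_TEXTURE_COORD_ARRAY);
            glEnable(GL_TEXTURE_2D);
        }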

    Read the article
