Search Results

Search found 11995 results on 480 pages for 'clement game'.

Page 337/480 | < Previous Page | 333 334 335 336 337 338 339 340 341 342 343 344  | Next Page >

  • Best way to get elapsed time in milliseconds in Windows

    - by XaitormanX
    I'm trying to do it using two FILETIMEs, casting them to ULONGLONGs, subtracting them, and dividing the result by 10000. But it's pretty slow, and I want to know if there is a better way to do it. I use C++ with Visual Studio 2008 Express Edition. This is what I'm using:

        FILETIME filetime, filetime2;
        GetSystemTimeAsFileTime(&filetime);
        Sleep(100);
        GetSystemTimeAsFileTime(&filetime2);

        ULONGLONG time1, time2;
        time1 = (((ULONGLONG) filetime.dwHighDateTime) << 32) + filetime.dwLowDateTime;
        time2 = (((ULONGLONG) filetime2.dwHighDateTime) << 32) + filetime2.dwLowDateTime;

        printf("ELAPSED TIME IN MS:%d", (int)((time2 - time1) / 10000));
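
    One commonly suggested alternative (not part of the original post, and a sketch rather than a drop-in fix) is QueryPerformanceCounter, which is cheap to call and higher-resolution than FILETIME arithmetic:

        #include <windows.h>
        #include <cstdio>

        int main() {
            LARGE_INTEGER freq, start, end;
            QueryPerformanceFrequency(&freq);   // ticks per second
            QueryPerformanceCounter(&start);
            Sleep(100);                         // stands in for the work being timed
            QueryPerformanceCounter(&end);
            double ms = (end.QuadPart - start.QuadPart) * 1000.0 / freq.QuadPart;
            printf("ELAPSED TIME IN MS: %.3f\n", ms);
            return 0;
        }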

    Read the article

  • Which jar has JBox2D's p5 package?

    - by Brantley Blanchard
    Using Eclipse, I'm trying to write a simple hello-world program in Processing that draws a rectangle on the screen and then has gravity drop it, as seen in this tutorial. The problem is that when I try to import the p5 package, it doesn't resolve, so I can't declare my Physics object. I tried two things:

    1. Download the zip, unzip it, then import the 3 jars (library, serialization, & testbed)
       a. import org.jbox2d.p5.*; doesn't resolve, but the others do
       b. Physics physics; doesn't resolve
    2. Download the older standalone testbed jar, then import it
       a. Physics physics; doesn't resolve

    Here is basically where I'm starting:

        import org.jbox2d.util.nonconvex.*;
        import org.jbox2d.dynamics.contacts.*;
        import org.jbox2d.testbed.*;
        import org.jbox2d.collision.*;
        import org.jbox2d.common.*;
        import org.jbox2d.dynamics.joints.*;
        import org.jbox2d.p5.*;
        import org.jbox2d.dynamics.*;
        import processing.core.PApplet;

        public class MyFirstJBox2d extends PApplet {

            Physics physics;

            public void setup() {
                size(640, 480);
                frameRate(60);
                initScene();
            }

            public void draw() {
                background(0);
                if (keyPressed) {
                    // Reset everything
                    physics.destroy();
                    initScene();
                }
            }

            public void initScene() {
                physics = new Physics(this, width, height);
                physics.setDensity(1.0f);
                physics.createRect(300, 200, 340, 300);
            }
        }

    Read the article

  • How to render a texture partly transparent?

    - by megamoustache
    Good morning StackOverflow, I'm having a bit of a problem right now, as I can't seem to find a way to render part of a texture transparently with OpenGL. Here is my setup: I have a quad, representing a wall, covered with a texture (converted to PNG for uploading purposes). Obviously, I want the wall to be opaque, except for the panes of glass. There is another plane behind the wall which is supposed to show a landscape, and I want to see the landscape from behind the window. Each texture is a TGA with an alpha channel. The landscape is rendered first, then the wall. I thought that would be sufficient to achieve this effect, but apparently it's not the case: the part of the window that is supposed to be transparent is black, and the landscape only appears when I move past the wall. I tried to fiddle with glBlendFunc() after enabling blending, but it doesn't seem to do the trick. Am I forgetting an important step? Thank you :)
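
    A minimal sketch of the blending state this usually requires (legacy fixed-function OpenGL, matching the post's era; the two draw helpers are hypothetical stand-ins for the poster's own code):

        #include <GL/gl.h>

        void drawLandscape();   // hypothetical: draws the plane behind the wall
        void drawWall();        // hypothetical: draws the textured wall quad

        void renderScene() {
            glEnable(GL_BLEND);
            glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

            // Optional: discard fully transparent texels so they don't write
            // to the depth buffer and hide what's behind them.
            glEnable(GL_ALPHA_TEST);
            glAlphaFunc(GL_GREATER, 0.1f);

            drawLandscape();    // farthest geometry first
            drawWall();         // glass texels blend over the landscape behind them
        }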

    Read the article

  • Depth Map resolution shifting

    - by user3669538
    The problem is with shadow mapping. As you can see, it works fine, but only under the condition that the depth map size equals the size of the rendering buffer. I use an infinite directional light, so if the window is 800x600 the depth map must be 800x600; when I change the size of the shadow map to 900x600 the result starts to shift, and when it is 1024x1024 it shifts until it disappears. The GLSL shadow function:

        float calcShadow(sampler2D Dmap, vec4 coor) {
            vec4 sh = vec4((coor.xyz / coor.w), 1);
            sh.z *= 0.9;
            return step(sh.z, texture2D(Dmap, sh.xy).r);
        }

    Here's the result when the depth map is the same size as the window: Colored result & Depth Map. And here's the shifted result; as you can notice, the depth map is exactly like the previous one except for white space added to the right. Colored result http://goo.gl/5lYIFV Depth Map http://goo.gl/7320Dd
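
    A hedged guess at the usual culprit (not confirmed by the post): the viewport has to match the shadow-map size while the depth pass renders, and be restored to the window size afterwards; a mismatch produces exactly the kind of unused strip described above. The handle names and helpers below are illustrative:

        // shadowFbo, shadowMapWidth/Height, windowWidth/Height: assumed names.
        glBindFramebuffer(GL_FRAMEBUFFER, shadowFbo);
        glViewport(0, 0, shadowMapWidth, shadowMapHeight);  // match the depth texture
        renderDepthPass();                                  // hypothetical helper

        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        glViewport(0, 0, windowWidth, windowHeight);        // back to the window
        renderColorPass();                                  // hypothetical helper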

    Read the article

  • Having a list of rooms with their connections to each other, how do I find isolated room groups?

    - by petervaz
    I'm trying to create a small roguelike and have gotten as far as randomly generating rooms and corridors. Each room is an instanced object and contains an ArrayList of the other rooms it is connected to by a corridor. I can single out unconnected rooms, but how can I find the rooms that are connected only to each other and not to most of the others, forming an island? To illustrate the problem better, here is an image from the console of a bugged level: rooms 5 and 6 are connected only to each other. What algorithm can I use to detect that?
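
    A minimal sketch of the standard answer (assuming rooms are graph nodes and corridors are edges): label connected components with a flood fill; every component other than the largest is an island. Union-find works equally well.

        #include <vector>
        #include <stack>

        // Rooms are indexed 0..n-1; adj[i] lists the rooms connected to room i.
        std::vector<int> labelComponents(const std::vector<std::vector<int>>& adj) {
            std::vector<int> comp(adj.size(), -1);
            int label = 0;
            for (int start = 0; start < (int)adj.size(); ++start) {
                if (comp[start] != -1) continue;   // already labeled
                std::stack<int> st;
                st.push(start);
                comp[start] = label;
                while (!st.empty()) {
                    int room = st.top(); st.pop();
                    for (int next : adj[room]) {
                        if (comp[next] == -1) {
                            comp[next] = label;
                            st.push(next);
                        }
                    }
                }
                ++label;                           // next component
            }
            return comp;  // rooms sharing a label are mutually reachable
        }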

    Read the article

  • 3D Box Collision Data Import

    - by cboe
    I'm trying to implement a collision system using oriented bounding boxes, with each box stored as a center, its extents as a 3D vector, and a rotation matrix; this is all material I picked up online, and it seems to be fairly standard. Computing the center is no problem, so I'll leave that out here. My problem, however, is importing the data from a 3D file. Say I've placed a box with 2 units length on each side, aligned to the world axes. The logical result here is extents of (1, 1, 1), and I use an identity matrix for rotation - easy. However, I'm stuck when I rotate the box in the 3D program, say 30 degrees around each axis. How would I parse the box? I only have the 8 vertices as information, and I guess what I would need to do is find the rotation of the box, apply its inverse to the vertices so they are aligned to the world axes, and then calculate the extents from that. How do I get the rotation of the box when I only have its vertex information?
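
    A sketch under an assumption the post doesn't state: if the 8 vertices are ordered so that v[1], v[2], and v[4] are the neighbors of v[0] along the three box edges, then the normalized edge directions are the columns of the rotation matrix and half the edge lengths are the extents:

        #include <cmath>

        struct Vec3 { float x, y, z; };

        Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
        float len(Vec3 v) { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }
        Vec3 norm(Vec3 v) { float l = len(v); return {v.x / l, v.y / l, v.z / l}; }

        void obbFromVertices(const Vec3 v[8], Vec3& extents, Vec3 axes[3]) {
            Vec3 ex = sub(v[1], v[0]);   // edge along the box's local X
            Vec3 ey = sub(v[2], v[0]);   // edge along the box's local Y
            Vec3 ez = sub(v[4], v[0]);   // edge along the box's local Z
            extents = {len(ex) * 0.5f, len(ey) * 0.5f, len(ez) * 0.5f};
            axes[0] = norm(ex);          // columns of the rotation matrix
            axes[1] = norm(ey);
            axes[2] = norm(ez);
        }

    If the vertex ordering from the 3D file is unknown, pick any vertex and take its three edge-adjacent neighbors; for a genuine box the three edge directions will be mutually orthogonal, which doubles as a sanity check.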

    Read the article

  • How to update a mesh position based on a pressed key?

    - by steven166
    I have a mesh loaded from a file, like a tiger mesh. At first it is located at position A; then, if I press the left key, it moves to position B. The problem is that if I press the left key one more time, it should move from position B to position C: the amount I want to move the mesh must be based on the current position, not on the position at the first render. I could do this if I had an array of vertices, since then I would just update the vertex buffer, but a mesh loaded from a file does not give me an array of vertices, so how do I do it? Can anybody help me, please?
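
    A framework-agnostic sketch (all names are illustrative): keep the mesh's position in a variable, update it on each key press, and rebuild a world (translation) matrix from it every frame. The loaded vertex data never changes; only the transform does.

        struct Vec3 { float x = 0, y = 0, z = 0; };

        Vec3 meshPos;              // persists across frames

        void onLeftKey(float step) {
            meshPos.x -= step;     // each press moves relative to the current position
        }

        // Column-major 4x4 translation matrix built from the accumulated position;
        // set this as the world transform before drawing the loaded mesh.
        void buildWorldMatrix(float m[16]) {
            for (int i = 0; i < 16; ++i) m[i] = (i % 5 == 0) ? 1.0f : 0.0f;  // identity
            m[12] = meshPos.x;
            m[13] = meshPos.y;
            m[14] = meshPos.z;
        }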

    Read the article

  • How to set up my texture coordinates correctly in GLSL 150 and OpenGL 3.3?

    - by RubyKing
    I'm trying to do texture mapping in GLSL 150 and OpenGL 3.3. Here are my shaders; I've tried my best to get this as correct as possible, hopefully it is :) I'm guessing you want to know what the problem is: my texture shows, but not in its fullest form - just one section of it, not the full texture on the quad. All I can think of is that it's the texture coordinates in the main.cpp, which is linked at the bottom of this post.

    Fragment shader:

        #version 150

        in vec2 Texcoord_VSPS;
        out vec4 color;

        // Values that stay constant for the whole mesh.
        uniform sampler2D myTextureSampler;

        // Main entry point
        void main() {
            // Output color = color of the texture at the specified UV
            color = texture2D(myTextureSampler, Texcoord_VSPS);
        }

    Vertex shader:

        #version 150

        // Position container
        in vec3 position;

        // Container for texcoords
        attribute vec2 Texcoord0;
        out vec2 Texcoord_VSPS;
        //out vec2 ex_texcoord;

        // TO USE A DIFFERENT COORDINATE SYSTEM JUST MULTIPLY THE MATRIX YOU WANT

        // Main entry point
        void main() {
            // Translations and w coordinate stuff
            gl_Position = vec4(position.xyz, 1.0);
            Texcoord_VSPS = Texcoord0;
        }

    Link to main.cpp: http://pastebin.com/t7Vg9L0k
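
    Two hedged observations, not a confirmed diagnosis. First, GLSL 150 has no "attribute" qualifier: the vertex input above would normally be declared "in vec2 Texcoord0;", and "texture()" supersedes "texture2D()" in the core profile. Second, since these shaders mostly pass data through, the symptom often points at the client-side UV setup. A sketch of what the main.cpp typically needs (the pastebin isn't reproduced here, so the handle names are assumptions):

        // Assumes an OpenGL 3.3 context and loader headers are already set up.
        // program: assumed shader program handle; uvBuffer: assumed VBO already
        // filled with quadUVs via glBufferData.
        GLfloat quadUVs[] = {
            0.0f, 0.0f,   // bottom-left
            1.0f, 0.0f,   // bottom-right
            1.0f, 1.0f,   // top-right
            0.0f, 1.0f,   // top-left
        };
        GLint loc = glGetAttribLocation(program, "Texcoord0");
        glBindBuffer(GL_ARRAY_BUFFER, uvBuffer);
        glVertexAttribPointer(loc, 2, GL_FLOAT, GL_FALSE, 0, (void*)0);
        glEnableVertexAttribArray(loc);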

    Read the article

  • Android Java: Way to effectively pause system time while debugging?

    - by TheMaster42
    In my project, I call nanoTime and use it to compute a deltaTime, which I pass to my entities and animations. However, while debugging (for example, stepping through my code), the system time on my phone is happily chugging along, so it's impossible to look at, say, two sequential frames of data in the debugger (by the time I'm done looking at the first frame, the system time has moved ahead by seconds or even minutes). Is there a programming practice or method to pause the system clock (or a way for my code to intercept and fake my deltaTime) whenever I pause execution in the debugger? Additional information: I'm using Eclipse Classic with the ADT plugin and a Samsung SII, coding in Java. My code invoking nanoTime: http://pastebin.com/0ZciyBtN I do all display via a Canvas object (2D sprites and animations).
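
    A common pattern, sketched here in C++ for consistency with the rest of this page (the post itself is Java, and the idea ports directly): route all time queries through a clock object you control, so a long debugger pause is clamped to a single frame instead of producing a giant deltaTime.

        #include <algorithm>
        #include <cstdint>

        class GameClock {
        public:
            // now: current time in nanoseconds (e.g. from nanoTime's equivalent).
            double deltaSeconds(int64_t now) {
                if (last_ == 0) last_ = now;
                double dt = (now - last_) * 1e-9;
                last_ = now;
                // Clamp gaps caused by sitting in the debugger to one 30 fps frame.
                return std::min(dt, 1.0 / 30.0);
            }
        private:
            int64_t last_ = 0;
        };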

    Read the article

  • How to build a "traffic AI"?

    - by Lunikon
    A project I am working on right now features a lot of "traffic", in the sense of cars moving along roads, aircraft moving around an apron, etc. As of now the available paths are precalculated: nodes are generated automatically at crossings, which themselves are interconnected by edges. When a character/agent spawns into the world, it starts at some node and finds a path to a target node by means of a simple A* algorithm. The agent follows the path and ultimately reaches its destination. No problem so far. Now I need to enable the agents to avoid collisions and to handle complex traffic situations. Since I'm new to the field of AI, I looked up several papers/articles on steering behavior, but found them to be too low-level. My problem consists less of the actual collision avoidance (which is rather simple in this case, because the agents follow strictly defined paths) and more of situations like one agent leaving a dead end while another one wants to enter exactly the same one, or two agents meeting at a bottleneck which only allows one agent to pass at a time while both need to pass it (according to the optimal routes found before), so they need a way to let one go first. So basically the main aspect of the problem is predicting traffic movement to avoid deadlocks. Difficult to describe, but I guess you get what I mean. Do you have any recommendations on where to start looking? Any papers, sample projects, or similar things that could get me started? I appreciate your help!
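
    One standard starting point, sketched under assumptions the post leaves open: treat each dead end or bottleneck as a resource an agent must reserve before entering, so a conflicting agent waits at the entrance or replans instead of deadlocking head-on.

        #include <unordered_map>

        using SegmentId = int;   // a dead end or single-lane corridor
        using AgentId   = int;

        class SegmentReservations {
        public:
            // Returns true if the agent may enter (and records the claim).
            bool tryEnter(SegmentId seg, AgentId agent) {
                auto it = owner_.find(seg);
                if (it != owner_.end() && it->second != agent)
                    return false;        // occupied: wait or ask A* for a detour
                owner_[seg] = agent;
                return true;
            }
            void leave(SegmentId seg, AgentId agent) {
                auto it = owner_.find(seg);
                if (it != owner_.end() && it->second == agent)
                    owner_.erase(it);
            }
        private:
            std::unordered_map<SegmentId, AgentId> owner_;
        };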

    Read the article

  • Assigning an item to an existing array in a list within a dictionary [on hold]

    - by Rouke
    I have a Dictionary declared like:

        public var PoolDict : Dictionary.<String, List.<GameObject[]> >;

    I made a function to add items to the list and array:

        function Add(key : String, obj : GameObject)
        {
            if (!PoolDict.ContainsKey(key))
            {
                PoolDict[key] = new List.<GameObject[]>();
            }

            // Placeholder - not what will be in the final version
            PoolDict[key].Add(null);

            // Attempts - errors - how to add to the existing array?
            PoolDict[key].Add(obj);
            PoolDict[key][0].Add(obj);
        }

    I'd like to replace the line after the // Placeholder comment with code that will assign a GameObject to an existing array in a list that's associated with a key. How could this be done?

    Read the article

  • Are there any resources for motion-planning puzzle design?

    - by Salano Software
    Some background: I'm poking at a set of puzzles along the lines of Rush Hour/Sokoban/etc; for want of a better description, call them 'motion planning' puzzles - the player has to figure out the correct sequence of moves to achieve a particular configuration. (It's the sort of puzzle that's generically PSPACE-complete if that actually helps anyone's mental image). While I have a few straightforward 'building blocks' that I can use for puzzle crafting and I have a few basic examples put together, I'm trying to figure out how to avoid too much sameness over a large swath of these kinds of puzzles, and I'm also trying to figure out how to make puzzles that have more of a feel of logical solution than trial-and-error. Does anyone know of good resources out there for designing instances of this sort of puzzle once the core puzzle rules are in place? Most of what I've found on puzzle design only covers creating the puzzle rules, not building interesting puzzles out of a set of rules.

    Read the article

  • C++ and SDL: How would I add tile layers with my area class as a singleton?

    - by Tony
    I'm trying to wrap my head around how to get this done, if at all possible. Basically I have an Area class, a Map class, and a Tile class. My Area class is a singleton, and this is causing some confusion. I'm trying to draw in this order: Background / Tiles / Entities / Overlay Tiles / UI.

        void C_Application::OnRender() {
            // Fill the screen black
            SDL_FillRect(Surf_Screen, &Surf_Screen->clip_rect,
                         SDL_MapRGB(Surf_Screen->format, 0x00, 0x00, 0x00));

            // Draw background

            // Draw tiles
            C_Area::AreaControl.OnRender(Surf_Screen,
                                         -C_Camera::CameraControl.GetX(),
                                         -C_Camera::CameraControl.GetY());

            // Draw entities
            for (unsigned int i = 0; i < C_Entity::EntityList.size(); i++) {
                if (!C_Entity::EntityList[i]) {
                    continue;
                }
                C_Entity::EntityList[i]->OnRender(Surf_Screen);
            }

            // Draw overlay tiles

            // Draw UI

            // Update the Surf_Screen surface
            SDL_Flip(Surf_Screen);
        }

    Would be nice if someone could give a little input. Thanks.
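
    One hedged sketch of how the singleton can serve both passes (the TileLayer enum and OnRenderLayer method are inventions for illustration, not part of the poster's code): store tiles per layer in the map and give the area a per-layer render entry point, called once per layer from OnRender() with the entities drawn in between.

        enum class TileLayer { Ground, Overlay };

        class C_Area {
        public:
            static C_Area AreaControl;
            // Render only the tiles belonging to the given layer.
            void OnRenderLayer(SDL_Surface* dest, int camX, int camY, TileLayer layer);
        };

        // Inside C_Application::OnRender():
        //   C_Area::AreaControl.OnRenderLayer(Surf_Screen, camX, camY, TileLayer::Ground);
        //   ... draw entities ...
        //   C_Area::AreaControl.OnRenderLayer(Surf_Screen, camX, camY, TileLayer::Overlay);
        //   ... draw UI ...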

    Read the article

  • Working Qt controls in a 3d environment

    - by Jay
    I need some advice from a Qt expert. The background: I have a 3D engine (Ogre3D) working in concert with Qt. The 3D content is displayed in a widget (using a custom OS window in the client area). I'm able to overlay arbitrary Qt widgets onto the 3D world using the widget render() method and a shared bitmap. This makes a great heads-up display, and I can use standard Qt style sheets and animation with this technique. My goal: I'd like to go a step further and allow the user to move these rendered widgets using the mouse, and I'd like some advice on the best way to implement this. Possible solutions (the HUD widgets are currently not part of the inheritance chain; I render them manually, so they don't get events):

    1. Add them to the inheritance chain so they get events in the usual way. I would then need to change them to render to my shared bitmap instead of to the operating system. I looked at this once but couldn't find enough information to implement it.
    2. Capture mouse events in the 3D display widget and emit them to the child controls - basically creating my own event-handling chain.

    Any suggestions on how to implement this? I'm also considering switching to Qt 5; I'm not sure how that might affect this decision.
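
    For option 2, a hedged sketch using ordinary Qt event delivery (QApplication::sendEvent and the Qt 4-era QMouseEvent constructor are real API; the surrounding names are illustrative): translate mouse input received by the 3D view into events for the offscreen widget, then re-render it into the shared bitmap.

        #include <QApplication>
        #include <QMouseEvent>
        #include <QWidget>

        void forwardMousePress(QWidget* hudWidget, const QPoint& posInHud,
                               Qt::MouseButton button) {
            QMouseEvent ev(QEvent::MouseButtonPress, posInHud, button, button,
                           Qt::NoModifier);
            QApplication::sendEvent(hudWidget, &ev);  // widget reacts as if clicked
            // Afterwards, re-run hudWidget->render(...) into the shared bitmap so
            // the HUD shows the widget's new state.
        }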

    Read the article

  • rigidbody2D.AddForce() behaves weirdly in Unity 4.3 [on hold]

    - by Lilz Votca Love
    So guys, I've edited the question and here is my problem. I have a player with a Rigidbody2D attached to it. The player is able to double-jump in the air nicely, stick to walls when colliding with them, and slowly slide to the ground. All movement is handled through physics, with no transform manipulations. I did something similar to this in the player's FixedUpdate:

        void FixedUpdate()
        {
            if (wall && Input.GetButtonDown("Jump"))
            {
                if (facingright) // player is facing the left side of the wall
                {
                    rigidbody2D.AddForce(new Vector2(-1f, 2f) * jumpforce);
                    // The player should jump backwards along this direction vector,
                    // following a smooth curve - this part works well.
                }
                else
                {
                    rigidbody2D.AddForce(new Vector2(1f, 2f) * jumpforce);
                    // This is where everything gets complicated: it is the same
                    // directional vector with only the x axis flipped, and the same
                    // amount of force, but it behaves like the red curve in the
                    // picture below.
                }
            }
        }

    Bad behaviour and vector in red. I tested the same thing (both AddForce calls) for a simple jump, and they behave exactly as described in the picture: jumping diagonally forward with rigidbody2D.AddForce() does not have the same impact, and does not follow the same curve, as jumping in the opposite direction with the same exact amount of force. If I could fix this or get past it, I could implement a wall-jump system like a ninja jumping in zigzag between two opposite walls to climb them. Any ideas or alternatives?

    Read the article

  • Rotating object around moving object/player in 2D

    - by Boston
    I am trying to implement a camera which rotates the world around the player. I have found many solutions online for rotating an object about the origin, or about an arbitrary point: translate so the rotation point is at the origin, perform the rotation, translate back, then draw. I have gotten this working for rotation around the origin as well as around a fixed point, and rotating objects around the player works too, provided the player does not move. However, if the objects are rotated around the player by some non-zero angle and the player then moves, the rotated objects move as well. I have probably done a poor job explaining this, so here's an image: http://i.imgur.com/1n63iWR.gif And here's the code for the behavior:

        renderx = (Ox - Px) * cos(camAngle) - (Oy - Py) * sin(camAngle) + Px;
        rendery = (Ox - Px) * sin(camAngle) + (Oy - Py) * cos(camAngle) + Py;

    where (Ox, Oy) is the actual position of the object to be rotated and (Px, Py) is the actual position of the player. Any ideas? I am using C++ with SDL 2.0.
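
    The usual fix, sketched with names mirroring the post: apply the rotation as a per-frame camera transform at draw time, always measured from the player's current position, and never write the rotated values back into the objects' stored world positions.

        #include <cmath>

        struct Vec2 { float x, y; };

        // World position -> render position, rotated about the player's
        // *current* position. Call for every object, every frame, at draw time.
        Vec2 toCameraSpace(Vec2 obj, Vec2 player, float camAngle) {
            float dx = obj.x - player.x;
            float dy = obj.y - player.y;
            return {
                dx * std::cos(camAngle) - dy * std::sin(camAngle) + player.x,
                dx * std::sin(camAngle) + dy * std::cos(camAngle) + player.y,
            };
        }
        // The stored (Ox, Oy) never changes here; only the values handed to the
        // renderer do, so moving the player afterwards cannot drag objects along.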

    Read the article

  • GPU optimization question: pre-computed or procedural?

    - by Jay
    Good morning, I'm learning shader programming and need some general direction. I want to add noise to my laser beam (like this). Which is the best way to handle it? I could pre-compute an image and pass it to the shader; I could then use the image to change the opacity, and easily animate the smoke by changing the offset of the texture lookup. I could also generate noise in the shader and use it the same way the texture would be used. Is it generally better to avoid I/O to the graphics card, or the opposite? Thanks!

    Read the article

  • How to properly code in Unity? [on hold]

    - by Vincent B.
    I'm fairly new to Unity (though I have touched it and made a few prototypes with it) and I'd like to know how I'm supposed to work with it. I'm a programming student, so I'm used to C/C++ with SDL/SFML - writing code and only using input/graphics/network libraries. I followed a few Unity guides, and they revolved much more around drag & drop on scenes plus a bit of scripting to activate it all, which disturbed me. So I found a way to use only one GameObject and a singleton to launch code and display stuff (for 2D games at least). At the end of the day I make games without using Instantiate or such at all. Is that the right way? Am I supposed to do this? How heavily populated are your scenes (in a professional environment)? When should I stop coding and start using the editor?

    Read the article

  • Is there a prohibition against scaling collision shapes at runtime?

    - by Almo
    So, I have a StaticMeshComponent attached to an Actor:

        Begin Object Class=StaticMeshComponent Name=StaticMeshComponentObject
            StaticMesh=StaticMesh'QF_Art_Powers.Mesh.GP_ForcePush'
            CollideActors=true
            BlockActors=false
            //Scale3D=(X=5, Y=1.5, Z=3) // ALMODEBUG
        End Object
        CollisionComponent=StaticMeshComponentObject
        Components.Add(StaticMeshComponentObject)

    Ordinarily, the actor gets spawned, anything touching it gets bumped, and the actor despawns itself. If I set the Scale3D as a default property, everything works as I expect. But I want to scale it at runtime, like this:

        function SetImpulseComponentTemplate(QuadForceBoxImpulseComponent Value)
        {
            local Vector ScaleVec;
            ScaleVec.X = Value.Length;
            ScaleVec.Y = Value.Width;
            ScaleVec.Z = Value.Height;
            CollisionComponent.SetScale3D(ScaleVec);
        }

    When I do this, the thing only collides as if it were not scaled. If I leave the actor spawned so I can see it, it is scaled; if I also "show collision", the collision displays correctly as well. Is there a prohibition against scaling collision shapes at runtime?

    Read the article

  • Good resources for learning modern OpenGL (3.0 or later)?

    - by MatterGoal
    I'm stuck in my search for a good resource to start with OpenGL (3.0 or later). Well, I found a lot of books, but none of them can be considered a good resource! Here are two examples:

    1. OpenGL Programming Guide (7th edition) http://www.amazon.com/exec/obidos/ASIN/0321552628/khongrou-20 - This is FULL of deprecated material! Almost every chapter begins with a note about that.
    2. OpenGL SuperBible (5th edition) http://www.amazon.com/exec/obidos/ASIN/0321712617/khongrou-20 - This book uses a library created by the author to explain the main topics, hiding what you want to learn. I don't want to learn how to use your library - I want to learn OpenGL!

    I hope you understand this is not the same question as "hey, I'm not able to use Google... tell me how to learn OpenGL". I've just finished a full and deep search, but I can't find a good and complete resource for learning the "new" OpenGL that avoids deprecated topics. Can someone point me in the right direction? I know C++ and I have 10 years of experience in development. Where can I find a good resource? I want to spend time on it and learn it deeply. (Please feel free to edit my question; my English is terrible!)

    Read the article

  • Where in a typical rendering pipeline do visibility and shading occur?

    - by user29163
    I am taking a computer graphics course. The book and the lecture notes are vague on the order of flow between the different steps of the rendering process. For example, if we have specified a view of a scene and then want to perform a projection transformation for that view, we have to go through a sequence of transformations. In the end we end up with a normalized "view cube", ready to be mapped to 2D after clipping. But why do we end up with a cube (i.e., a 3D thing), when a projection means projecting the 3D objects to 2D? Is the depth information lost? The other line of reasoning is that all the information needed later is stored within the cube, that visibility detection and shading are performed with respect to this cube, and that rasterization is performed after that.
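
    A short sketch of the step in question, using OpenGL-style conventions: the perspective divide maps clip space into the normalized view cube, and depth is deliberately kept (remapped, not discarded) so that per-fragment visibility testing can happen after rasterization.

        struct Vec4 { float x, y, z, w; };

        // Clip space -> normalized device coordinates (the "view cube").
        Vec4 toNDC(Vec4 clip) {
            return { clip.x / clip.w, clip.y / clip.w, clip.z / clip.w, 1.0f };
            // z survives the divide: it becomes the value the rasterizer tests
            // against the depth buffer for hidden-surface removal.
        }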

    Read the article

  • OpenGL: Keeping alpha in a render buffer

    - by Cyan
    For my current task, I need to render a texture into a render buffer in order to work on it there (applying special filters). The result is then treated as a new texture, which is displayed later. This works fine, except when the texture contains transparent or semi-transparent parts. My current guess is that, within the render buffer, the texture is merged with a kind of grey background, which obviously affects the R, G, B components of transparent pixels. I've yet to find a way around this. Even manually assigning alpha after the rendering process doesn't save the day for semi-transparent pixels, whose RGB values are tainted by the grey background.
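
    A hedged guess at a fix (the post doesn't show its blend or clear state): clear the offscreen buffer to transparent black rather than grey, and blend the alpha channel with separate factors so destination alpha stays meaningful. The FBO handle name is illustrative:

        glBindFramebuffer(GL_FRAMEBUFFER, filterFbo);  // filterFbo: assumed handle
        glClearColor(0.0f, 0.0f, 0.0f, 0.0f);          // fully transparent, not grey
        glClear(GL_COLOR_BUFFER_BIT);
        glEnable(GL_BLEND);
        glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA,  // color channels
                            GL_ONE, GL_ONE_MINUS_SRC_ALPHA);       // alpha channel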

    Read the article

  • Error when updating enumerated value?

    - by igrad
    Once upon a time, there was a Player class (simplified version):

        enum animState { RUNNING, JUMPING, FALLING, IDLING };

        class Player
        {
        public:
            Player(int x, int y);
            void handle();
            void show();
            ~Player();
        private:
            int m_x;
            int m_y;
            animState playerAnimState;
        }

    There was also a "handle" member function, which took care of all movement and collisions for the player:

        #include "player.h"

        void Player::handle()
        {
            if (/* player presses 'D' key */)
            {
                m_x++;
                playerAnimState = RUNNING;
            }
            // Other stuff that is just there to look nice
        }

    Through lots of experimentation with "//" and "/**/", I've found that I consistently get an error at "playerAnimState = RUNNING". Have I broken some enumeration rule? Does my laptop really suck that badly? I hate to post a "fix my code for me" question, but I'm not very seasoned with enums.
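
    A hedged observation rather than a confirmed diagnosis: as posted, the class definition is missing its closing semicolon, and C++ compilers often report that as a confusing error on nearby lines. The enum assignment itself is legal, as this corrected sketch shows:

        enum animState { RUNNING, JUMPING, FALLING, IDLING };

        class Player {
        public:
            void handle();
        private:
            animState playerAnimState;
        };  // <- this semicolon is required after a class definition

        void Player::handle() {
            playerAnimState = RUNNING;  // valid: assigns an enumerator to the member
        }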

    Read the article

  • Toggle Fullscreen at Runtime

    - by sharethis
    Using the GLFW library, I can create a fullscreen window with this line of code:

        glfwOpenWindow(Width, Height, 8, 8, 8, 8, 24, 0, GLFW_FULLSCREEN);

    The line for creating a standard window looks like this:

        glfwOpenWindow(Width, Height, 8, 8, 8, 8, 24, 0, GLFW_WINDOW);

    What I want to do is let the user switch between a standard window and fullscreen with a key press, say F11. Is there a common practice for toggling fullscreen mode? What do I have to consider?
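
    GLFW 2.x (which the nine-argument glfwOpenWindow call implies) has no in-place toggle, so the common approach, sketched here as an assumption rather than the library's official recipe, is to close the window and reopen it in the other mode. The GL context dies with the window, so textures and buffers must be recreated afterwards.

        #include <GL/glfw.h>

        bool fullscreen = false;

        void toggleFullscreen(int width, int height) {
            fullscreen = !fullscreen;
            glfwCloseWindow();
            glfwOpenWindow(width, height, 8, 8, 8, 8, 24, 0,
                           fullscreen ? GLFW_FULLSCREEN : GLFW_WINDOW);
            // Recreate textures, VBOs, etc. here: the old context was destroyed.
        }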

    Read the article
