Search Results

Search found 2513 results on 101 pages for 'opengl 3'.


  • OpenGL Calls Lock/Freeze

    - by Necrolis
    I am using some Dell workstations (running WinXP Pro SP2 and DeepFreeze) for development, but something was recently loaded onto these machines that prevents any OpenGL call from completing (the call locks). I know the code works, as I have tested it on 'clean' machines; I also tested with simple OpenGL apps generated by Dev-C++, which also lock on the Dell machines. I have tried to debug my own apps to see where exactly the GL calls freeze, but there is a global system hook on ZwQueryInformationProcess that messes up calls to ZwQueryInformationThread (used by ExitThread), preventing me from debugging at all (it causes the debugger, OllyDbg, to go into an access-violation reporting loop, or the program to crash if the exception is passed along).

    The hook:

        ntdll.ZwQueryInformationProcess
        7C90D7E0    B8 9A000000     MOV EAX,9A
        7C90D7E5    BA 0003FE7F     MOV EDX,7FFE0300
        7C90D7EA    FF12            CALL DWORD PTR DS:[EDX]
        7C90D7EC  - E9 0F28448D     JMP 09D50000
        7C90D7F1    9B              WAIT
        7C90D7F2    0000            ADD BYTE PTR DS:[EAX],AL
        7C90D7F4    00BA 0003FE7F   ADD BYTE PTR DS:[EDX+7FFE0300],BH
        7C90D7FA    FF12            CALL DWORD PTR DS:[EDX]
        7C90D7FC    C2 1400         RETN 14
        7C90D7FF    90              NOP
        ntdll.ZwQueryInformationToken
        7C90D800    B8 9C000000     MOV EAX,9C

    The messed-up function + call:

        ntdll.ZwQueryInformationThread
        7C90D7F0    8D9B 000000BA   LEA EBX,DWORD PTR DS:[EBX+BA000000]
        7C90D7F6    0003            ADD BYTE PTR DS:[EBX],AL
        7C90D7F8    FE              ???             ; Unknown command
        7C90D7F9    7F FF           JG SHORT ntdll.7C90D7FA
        7C90D7FB    12C2            ADC AL,DL
        7C90D7FD    14 00           ADC AL,0
        7C90D7FF    90              NOP
        ntdll.ZwQueryInformationToken
        7C90D800    B8 9C000000     MOV EAX,9C

    So firstly, does anyone know what, if anything, would lead to OpenGL calls causing an infinite lock, and whether there are any ways around it? And what would be creating such a hook in kernel memory?

    Update: After some more fiddling, I have discovered a few more kernel hooks; a lot of them are used to nullify data returned by system-information calls (such as the remote debugging port). I also managed to find out that whatever is doing this uses madCHook.dll (by madshi) and injects this DLL into every running process (it seems to be some anti-debugging code). Also, on the OpenGL side, DirectX seems fine/unaffected (I ran one of the DX 9 demos without problems), so could one of these kernel hooks somehow affect OpenGL?

    Read the article

  • OpenGL: Textured Primitives + High Framerate

    - by James D
    Short version: What's the best practice going forward for efficiently rendering large numbers of independent texture-mapped, lighted 2D/3D primitives (circles, rects, etc.) in OpenGL? For example: a typical particle system using billboarded quads/triangles, point sprites, or whatever other technique, with blending. Because after reading this thread on the messiness of OpenGL versioning/deprecation I'm starting to have my doubts. My specific question is not the ABCs of displaying primitives in OpenGL, but rather how to do so efficiently in post-deprecation (or pre-deprecation) OpenGL, in a way that's going to be compatible with a wide range of commodity hardware and in a way that's not going to break or itself get deprecated, five years down the line. Thanks!
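    For a baseline that avoids deprecated functionality, one widely compatible pattern is to expand each particle into a quad on the CPU and batch everything into a single streamed VBO drawn with one call. A minimal sketch follows, assuming a GL 3 core context with a VAO bound, a shader whose attribute locations 0/1 are position/texcoord, and a bound texture (those details are assumptions, not from the question):

        // Sketch: batch all particles into one streamed VBO and issue a single
        // draw call. Assumes a GL 3 core context, a bound VAO, a bound texture,
        // and a shader with position at attribute 0 and texcoord at attribute 1.
        #include <GL/glew.h>
        #include <vector>

        struct ParticleVertex { float x, y, z, u, v; };

        void drawParticles(GLuint vbo, const std::vector<ParticleVertex>& verts)
        {
            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            // Re-specify the buffer each frame; GL_STREAM_DRAW hints that the
            // data is rewritten every frame (this also orphans the old storage).
            glBufferData(GL_ARRAY_BUFFER,
                         verts.size() * sizeof(ParticleVertex),
                         verts.data(), GL_STREAM_DRAW);

            glEnableVertexAttribArray(0);   // position (x, y, z)
            glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE,
                                  sizeof(ParticleVertex), (void*)0);
            glEnableVertexAttribArray(1);   // texcoord (u, v)
            glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE,
                                  sizeof(ParticleVertex),
                                  (void*)(3 * sizeof(float)));

            // Six vertices per particle (two triangles), expanded on the CPU.
            glDrawArrays(GL_TRIANGLES, 0, (GLsizei)verts.size());
        }

    The same buffer-plus-one-draw-call shape also maps cleanly onto older fixed-function GL (client arrays instead of generic attributes), which helps with the compatibility concern.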

    Read the article

  • Where are the OpenGL commands run?

    - by Lucas
    Hi, I'm programming a simple OpenGL program on a multi-core computer that has a GPU. The GPU is a simple GeForce with PhysX, CUDA and OpenGL 2.1 support. When I run this program, is it the host CPU that executes the OpenGL-specific commands, or are they transferred directly to the GPU?
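    For context, a GL call normally returns once the driver has validated it and queued it into a command buffer on the CPU; the GPU then consumes that buffer asynchronously. A rough way to observe this split, sketched below under the assumption of a current context and a bound pipeline with something to draw, is to time submission separately from a glFinish(), which blocks until the GPU has drained the queue:

        // Sketch: make the CPU/GPU split visible. Draw calls are queued by the
        // driver and return quickly; glFinish() blocks until the GPU has
        // actually executed everything queued so far.
        #include <GL/gl.h>
        #include <chrono>
        #include <cstdio>

        void timeSubmission()
        {
            using clock = std::chrono::steady_clock;

            auto t0 = clock::now();
            for (int i = 0; i < 1000; ++i)
                glDrawArrays(GL_TRIANGLES, 0, 3000);  // queued, returns quickly
            auto t1 = clock::now();

            glFinish();                               // wait for the GPU to finish
            auto t2 = clock::now();

            auto us = [](clock::time_point a, clock::time_point b) {
                return std::chrono::duration_cast<std::chrono::microseconds>(b - a).count();
            };
            std::printf("CPU submission: %lld us, GPU completion: %lld us\n",
                        (long long)us(t0, t1), (long long)us(t1, t2));
        }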

    Read the article

  • Checking OpenGL resource leaks

    - by kamziro
    So I have a rather large OpenGL program going, and checking for normal memory leaks (those from new and delete) is rather trivial -- just run it under Valgrind. But what is the best way to check for potential OpenGL leaks? Is there an OpenGL utility that will tell you how many resources (e.g. framebuffers) are in use at a given time, or something similar? Or is the only way to attach a counter to every glGenBlah and glDeleteBlah pair?
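    For what it's worth, core OpenGL has no query for "number of live framebuffers", so the counter idea from the question is the usual fallback. A minimal sketch of such a wrapper pair (the *Tracked names are invented for illustration):

        // Sketch of the "counter per glGen/glDelete pair" idea: route all
        // framebuffer creation/destruction through wrappers keeping a live count.
        // genFramebuffersTracked/deleteFramebuffersTracked are hypothetical names.
        #include <GL/glew.h>
        #include <atomic>
        #include <cstdio>

        static std::atomic<long> g_liveFramebuffers{0};

        void genFramebuffersTracked(GLsizei n, GLuint* ids)
        {
            glGenFramebuffers(n, ids);
            g_liveFramebuffers += n;
        }

        void deleteFramebuffersTracked(GLsizei n, const GLuint* ids)
        {
            glDeleteFramebuffers(n, ids);
            g_liveFramebuffers -= n;
        }

        void reportGLLeaks()   // call at shutdown
        {
            if (g_liveFramebuffers != 0)
                std::fprintf(stderr, "Leaked %ld framebuffer(s)\n",
                             g_liveFramebuffers.load());
        }

    The same pattern extends to textures, buffers, renderbuffers, and so on; a macro or a small table keyed by object type keeps it from getting repetitive.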

    Read the article

  • Fullscreen texture iPhone OpenGL ES

    - by Ben Reeves
    I'm aware that OpenGL textures on the iPhone are required to be a power of two; is this true of OpenGL ES 2.0 as well? If I have an image that is 320 x 480 in size and want to draw it full screen, is there any possible way to do this with OpenGL? Thanks
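    For reference, OpenGL ES 2.0 does relax the restriction: non-power-of-two textures are allowed as long as they use GL_CLAMP_TO_EDGE wrapping and no mipmaps. On ES 1.x, a common workaround is to pad the 320 x 480 image into a 512 x 512 texture and scale the texture coordinates so only the used sub-rectangle is sampled; a minimal sketch (ES 1.x client arrays, texture assumed already bound, identity matrices assumed):

        // Sketch: a 320x480 image stored in the corner of a 512x512
        // power-of-two texture; draw a fullscreen quad sampling only the
        // used 320/512 x 480/512 sub-rectangle (OpenGL ES 1.x client arrays).
        #include <GLES/gl.h>

        void drawFullscreenImage()
        {
            const float u = 320.0f / 512.0f;  // used fraction of texture width
            const float v = 480.0f / 512.0f;  // used fraction of texture height

            // Fullscreen quad in normalized device coordinates, triangle strip.
            const GLfloat pos[] = { -1,-1,   1,-1,  -1, 1,   1, 1 };
            const GLfloat tex[] = {  0, v,   u, v,   0, 0,   u, 0 };

            glEnableClientState(GL_VERTEX_ARRAY);
            glEnableClientState(GL_TEXTURE_COORD_ARRAY);
            glVertexPointer(2, GL_FLOAT, 0, pos);
            glTexCoordPointer(2, GL_FLOAT, 0, tex);
            glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
        }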

    Read the article

  • OpenGL and layouts

    - by Hnefi
    I'm using OpenGL to render the game view in my Android application. The game is turn-based and I wish to add some buttons to the interface. I'd prefer to use standard Android widgets, structured in an XML-generated layout (or, if I have to, a hardcoded layout), and put the OpenGL view in its own window as part of that layout. With regard to this, I have three questions:

    1. Is such a thing possible? I've made a few half-hearted tries, but have had no luck so far.
    2. Is such a thing advisable? Does it carry a significant performance penalty, for example, over OpenGL-based homebrew widgetry?
    3. Is it possible to pass particular arguments to instances created from XML layouts? For example, my current OpenGL view takes three arguments in its constructor; can I somehow invoke that particular constructor with particular parameters when it's part of a layout?

    Read the article

  • Does Android run OpenGL ES 1.1 or 1.0?

    - by cjserio
    I'm developing a native app for Android, and I'm trying to use functions such as glIsEnabled, which appear to be available only in OpenGL ES 1.1. Google's docs claim that NDK 1.6R1 supports OpenGL ES v1.1, but the function call fails with "unimplemented Open GL ES API", and if I do a glGetString(GL_VERSION) it returns "OpenGL ES 1.0 CM" as the version. So if 1.1 is available, what do I have to link against to get it, or what else do I need to change?

    Read the article

  • What is the best way to debug OpenGL?

    - by dreamlax
    I find that, a lot of the time, OpenGL shows you it failed by simply not drawing anything. I'm trying to find ways to debug OpenGL programs, by inspecting the transformation matrix stack and so on. What is the best way to debug OpenGL? If the code looks and feels like the vertices are in the right place, how can you be sure they are?
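    One low-tech habit that catches a surprising number of silent failures is checking glGetError() around suspect calls; a minimal sketch of a checking macro (GL_CHECK is an invented name, not a GL API):

        // Sketch: wrap suspect GL calls so any pending error is reported with
        // file/line context. GL_CHECK is a hypothetical macro name.
        #include <GL/gl.h>
        #include <cstdio>

        #define GL_CHECK(call)                                              \
            do {                                                            \
                call;                                                       \
                for (GLenum e; (e = glGetError()) != GL_NO_ERROR;)          \
                    std::fprintf(stderr, "GL error 0x%04X at %s:%d (%s)\n", \
                                 e, __FILE__, __LINE__, #call);             \
            } while (0)

        // Usage:
        // GL_CHECK(glBindTexture(GL_TEXTURE_2D, tex));

    It does not answer the "are my vertices where I think they are" question by itself, but it rules out the large class of failures where a call silently did nothing.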

    Read the article

  • OpenGL multiple threads, variable handling [closed]

    - by toeplitz
    I have written an OpenGL program which runs in the following way:

    Main:
    - Initialize SDL.
    - Create a thread which owns the OpenGL context:
      - Render loop:
        - Set the camera (view) matrix with glUniform.
        - glDrawElements() .... etc.
        - SwapBuffers().
    - Main SDL loop handling input events and such:
      - Update the camera matrix (of type glm::mat4).

    This is how I pass my camera object to the class that handles OpenGL:

        Camera *cam = new Camera();
        gl.setCam(cam);

    where

        void setCam(Camera *camera) { this->camera = camera; }

    For rendering, this happens in the OpenGL context thread:

        glm::mat4 modelView = camera->view * model;
        glUniformMatrix4fv(shader->bindUniform("modelView"), 1, GL_FALSE,
                           glm::value_ptr(modelView));

    In the main thread, where SDL and other things are handled, I then recompute the view matrix. This is working fine without me using any mutex locks. Is this correct? On the other hand, I add objects to my scene through an "upload queue", and in that case I have to mutex-lock my upload queue (a vector) when adding items to it, or else the program crashes.

    In summary: I recompute my matrix in a different thread and then use it in the OpenGL thread without any mutex lock. Why is this working?

    Edit: I think my question is similar to what was asked here: Should I lock a variable in one thread if I only need its value in other threads, and why does it work if I don't? Only in my case it is even simpler, with only one matrix being changed.
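    For the record, reading a glm::mat4 in one thread while another writes it is a data race: a 16-float store is not atomic, so it "works" only because a torn matrix for a single frame is rarely visible. A minimal sketch of making the hand-off well-defined, assuming C++11 (the SharedCamera name is invented for illustration):

        // Sketch: share a camera matrix between an input thread and a render
        // thread without tearing. SharedCamera is a hypothetical name.
        #include <glm/glm.hpp>
        #include <mutex>

        class SharedCamera {
        public:
            void set(const glm::mat4& m) {      // called from the SDL thread
                std::lock_guard<std::mutex> lock(mutex_);
                view_ = m;
            }
            glm::mat4 get() const {             // called from the GL thread
                std::lock_guard<std::mutex> lock(mutex_);
                return view_;                   // copy out under the lock
            }
        private:
            mutable std::mutex mutex_;
            glm::mat4 view_{1.0f};
        };

    The lock is held only for a 64-byte copy, so the cost per frame is negligible compared to the undefined behavior it removes.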

    Read the article

  • Want to display a 3D model on the iPhone: how to get started?

    - by JeremyReimer
    I want to display and rotate a single 3D model, preferably textured, on the iPhone. It doesn't have to zoom in and out, or have a background, or anything.

    I have the following: an iPhone, a MacBook, the iPhone SDK, and Blender.

    My knowledge base: I can make 3D models in various 3D programs (I'm most comfortable with 3D Studio Max, which I once took a course on, but I've used others). I have general knowledge of procedural programming from years ago (QuickBasic -- I'm old!) and beginner's knowledge of object-oriented programming from going through simple Java and C# tutorials (a Head First C# book and my wife's intro-to-OOP course that used Java). I have managed to display a 3D textured model and spin it, using a C# tutorial I got off the net (I didn't just copy and paste; I understand basically how it works) and the XNA game development library, with Visual Studio on Windows.

    What I do not know: much about Objective-C; anything about OpenGL or OpenGL ES, which the iPhone apparently uses; anything about Xcode.

    My main problem is that I don't know where to start! All the iPhone books I found seem to be about creating GUI applications, not OpenGL apps. I found an OpenGL book, but I don't know how much of it, if any, applies to iPhone development. And I find the Objective-C syntax somewhat confusing, with the weird nested method naming, things like "id" that don't make sense, and the scary thought that I have to do manual memory management. Where is the best place to start? I couldn't find any tutorials for this sort of thing, but maybe my Google-fu is weak. Or maybe I should start by learning Objective-C? I know of books like Aaron Hillegass', but I've also read that they are outdated and much of the sample code doesn't work on the iPhone SDK, plus they seem geared towards the Model-View-Controller paradigm, which doesn't seem that suited to 3D apps. Basically, I'm confused about what my first steps should be.

    Read the article

  • Making a full-screen animation on Android? Should I use OpenGL?

    - by Roger Travis
    Say I need to make several full-screen animations that would consist of about 500+ frames each, similar to the TalkingTom app (https://play.google.com/store/apps/details?id=com.outfit7.talkingtom2free). The animation should play at a reasonable speed -- ideally not less than 20 fps -- and the pictures should be of reasonable quality, not overly compressed. What method do you think I should use? So far I have tried: storing each frame as a compressed JPEG; before the animation starts, loading each frame into a byte array; then, as the animation plays, decoding the corresponding byte array into a bitmap and drawing it on a SurfaceView. Problem: the speed is too low, usually about 5-10 FPS. I have thought of two other options. One is turning all animations into one movie file... but I guess there might be problems with starting, pausing, and seeking to exactly the right frame -- what do you think? The other option I thought about was using OpenGL (though I've never worked with it before) to play the animation frame by frame. What do you think -- would OpenGL be able to handle it? Thanks!

    Read the article

  • How do I draw a point sprite using OpenGL ES on Android?

    - by nbolton
    Edit: I'm using the GL enum, which is incorrect since it's not part of OpenGL ES (see my answer). I should have used GL10, GL11 or GL20 instead. Here are a few snippets of what I have so far...

        void create() {
            renderer = new ImmediateModeRenderer();
            tiles = Gdx.graphics.newTexture(
                Gdx.files.getFileHandle("res/tiles2.png", FileType.Internal),
                TextureFilter.MipMap, TextureFilter.Linear,
                TextureWrap.ClampToEdge, TextureWrap.ClampToEdge);
        }

        void render() {
            Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
            Gdx.gl.glClearColor(0.6f, 0.7f, 0.9f, 1);
        }

        void renderSprite() {
            int handle = tiles.getTextureObjectHandle();
            Gdx.gl.glBindTexture(GL.GL_TEXTURE_2D, handle);
            Gdx.gl.glEnable(GL.GL_POINT_SPRITE);
            Gdx.gl11.glTexEnvi(GL.GL_POINT_SPRITE, GL.GL_COORD_REPLACE, GL.GL_TRUE);
            renderer.begin(GL.GL_POINTS);
            renderer.vertex(pos.x, pos.y, pos.z);
            renderer.end();
        }

    create() is called once when the program starts, and renderSprite() is called for each sprite (so pos is unique to each sprite), where the sprites are arranged in a sort-of 3D cube. Unfortunately, though, this just renders a few white dots... I suppose the texture isn't being bound, which is why I'm getting white dots. Also, when I draw my sprites on anything other than the 0 z-axis, they do not appear -- I read that I need to increase my zFar and zNear, but I have no idea how to do this using libgdx (perhaps it's because I'm using an ortho projection? What do I use instead?). I know that the texture is usable, since I was able to render it using a SpriteBatch, but I guess I'm not using it properly with OpenGL.
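    For comparison, in the raw OpenGL ES 1.1 C API the minimum state for a textured point sprite looks like the sketch below (assuming a valid texture and a projection whose near/far range contains the point's z; this is the GL-level equivalent, not libgdx-specific advice):

        // Sketch: minimum OpenGL ES 1.1 state for one textured point sprite.
        // Assumes 'tex' is a loaded texture and the projection allows the
        // point's z to fall inside the near/far range.
        #include <GLES/gl.h>
        #include <GLES/glext.h>

        void drawPointSprite(GLuint tex, const GLfloat* xyz)  // xyz: 3 floats
        {
            glEnable(GL_TEXTURE_2D);
            glBindTexture(GL_TEXTURE_2D, tex);

            glEnable(GL_POINT_SPRITE_OES);                    // point sprites on
            glTexEnvi(GL_POINT_SPRITE_OES, GL_COORD_REPLACE_OES, GL_TRUE);
            glPointSize(32.0f);                               // size in pixels

            glEnableClientState(GL_VERTEX_ARRAY);
            glVertexPointer(3, GL_FLOAT, 0, xyz);
            glDrawArrays(GL_POINTS, 0, 1);
        }

    Note that texturing has to be enabled in addition to binding the texture, which is one plausible cause of white dots.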

    Read the article

  • Hardware advice for bitmap / openGL image processing server?

    - by pdizz
    I am trying to work out a build for a processing server to handle bitmap processing as well as OpenGL rendering for chroma-keying images and Photoshop automation. My searches here and on Google have turned up surprisingly few results, and seeing that there aren't tags for bitmap or image processing, I take it this is a specialized application. The bitmap processing is very CPU-intensive, while the chroma-keying and Photoshop work is GPU-intensive. I doubt this is a case of over-optimization, as our company batches thousands of images a day (currently on individual workstations), and any saving in processing time and workstation downtime would be beneficial. Does anyone have experience with this type of processing server? Are there any special considerations that would go into a build like this, or am I over-thinking it?

    Read the article

  • gnome shell with very high CPU usage

    - by 501 - not implemented
    I'm running Ubuntu GNOME 13.10 on my Dell Latitude E6510 with an i5 M560. The i5 comes with embedded Intel HD 3400 graphics. The average CPU usage of gnome-shell is about 160%; that's too high, I think. Is there a problem with a driver? If I call the command glxinfo | grep OpenGL it returns:

        OpenGL vendor string: VMware, Inc.
        OpenGL renderer string: Gallium 0.4 on llvmpipe (LLVM 3.3, 128 bits)
        OpenGL version string: 2.1 Mesa 9.2.1
        OpenGL shading language version string: 1.30
        OpenGL extensions:

    Greetings

    Read the article

  • How to enable OpenGL 2.0 and WebGL on a GMA 3150 on Ubuntu?

    - by mahmoudelbadry
    Hi, I have a Dell Mini 1012, which has an Intel N450 processor and a GMA 3150 integrated graphics card, running Ubuntu 10.10. According to the Intel website, the graphics card supports OpenGL 2.0: http://software.intel.com/en-us/arti...ed-graphics/#9 But when I type glxinfo in a terminal, the OpenGL version string gives me the following:

        OpenGL version string: 1.4 Mesa 7.9-devel

    I installed the latest drivers, but it didn't work. So, how can I enable OpenGL 2.0 on this card? Thanks

    Read the article

  • Mesa library vs hardware-accelerated OpenGL for my executable - is it just a linking problem?

    - by user827992
    Supposing that I have a program that targets a specific OpenGL version, let's say 3.0. Now I want to produce one executable that will use software rendering with Mesa, and another executable that will use a hardware-accelerated context. Can I use the same source code for both without expecting any issues? In other words, are the instructions in these libraries the same for my linking purposes?

    Read the article

  • What tool should I use for drawing 2D OpenGL shapes?

    - by Kenny Winker
    I'm working on a very simple OpenGL ES 2.0 game, and I'm not sure what tool to use to create the vertex data I need. My first thought was Adobe Illustrator, but I can't seem to google up any info on how to convert an .ai file to vertices. I'm only going to be using very simple 2D shapes, so I wonder if I need to use a 3D modelling program? How is this typically done when you are working with 2D, non-sprite shapes?

    Read the article

  • How to implement explosion in OpenGL with a particle effect?

    - by Chan
    I'm relatively new to OpenGL, and I'm clueless about how to implement an explosion. So could anyone give me some ideas on how to start? Suppose the explosion occurs at location (x, y, z). I'm thinking of randomly generating a collection of vectors with (x, y, z) as the origin, then drawing some particle (glutSolidCube) which moves along each vector for some period of time; say, after 1000 updates it disappears. Is this approach feasible? A minimal example would be greatly appreciated.
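    The approach is feasible; it is essentially a one-shot particle system. A minimal simulation sketch under those assumptions (random directions on the unit sphere, a fixed lifetime measured in updates, rendering such as glutSolidCube at each particle's position left to the caller):

        // Sketch: one-shot explosion as a particle system. Each particle gets
        // a random direction from the blast origin and dies when its lifetime
        // runs out. Only the simulation is shown; rendering is up to the caller.
        #include <cmath>
        #include <cstdlib>
        #include <vector>

        struct Particle {
            float pos[3], vel[3];
            int   life;                       // remaining updates
        };

        std::vector<Particle> explode(float x, float y, float z, int count)
        {
            std::vector<Particle> ps(count);
            for (Particle& p : ps) {
                // Random point on the unit sphere (uniform over the sphere).
                float theta = 6.2831853f * (std::rand() / (float)RAND_MAX);
                float u     = 2.0f * (std::rand() / (float)RAND_MAX) - 1.0f;
                float r     = std::sqrt(1.0f - u * u);
                float speed = 0.5f + 1.5f * (std::rand() / (float)RAND_MAX);
                p.pos[0] = x;  p.pos[1] = y;  p.pos[2] = z;
                p.vel[0] = r * std::cos(theta) * speed;
                p.vel[1] = u * speed;
                p.vel[2] = r * std::sin(theta) * speed;
                p.life   = 1000;              // disappears after 1000 updates
            }
            return ps;
        }

        void update(std::vector<Particle>& ps, float dt)
        {
            for (Particle& p : ps) {
                if (p.life <= 0) continue;    // dead particles are skipped
                p.vel[1] -= 9.8f * dt;        // optional gravity
                for (int i = 0; i < 3; ++i) p.pos[i] += p.vel[i] * dt;
                --p.life;
            }
        }

    Each frame, call update() and then draw a small cube (or billboarded quad) at every live particle's position; fading the color with remaining life makes the effect read much better.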

    Read the article

  • Complete Math Library for use in OpenGL ES 2.0 Game?

    - by Bunkai.Satori
    Are you aware of a complete (or almost complete) cross-platform math library for use in OpenGL ES 2.0 games? The library should contain:

    - Matrix2x2, Matrix3x3, Matrix4x4 classes
    - Quaternions
    - Vector2, Vector3, Vector4 classes
    - A Euler angle class
    - Operations among the above-mentioned classes, conversions, etc.
    - The math operations commonly used in 3D graphics (dot product, cross product, SLERP, etc.)

    Is there such a math API available, either standalone or as part of any package? Programming language: Visual C++, but it is planned to be ported to OS X and Android.
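    One candidate worth checking against this list is GLM (OpenGL Mathematics), a header-only C++ library; a brief sketch of how the requested operations look there:

        // Sketch: the requested operations as they appear in GLM, a header-only
        // C++ math library commonly paired with OpenGL / OpenGL ES.
        #include <glm/glm.hpp>
        #include <glm/gtc/matrix_transform.hpp>
        #include <glm/gtc/quaternion.hpp>

        void demo()
        {
            glm::vec3 a(1, 0, 0), b(0, 1, 0);
            float     d = glm::dot(a, b);                 // dot product
            glm::vec3 c = glm::cross(a, b);               // cross product

            // Quaternions from Euler angles (radians), plus SLERP.
            glm::quat q0 = glm::quat(glm::vec3(0.0f, 0.0f, 0.0f));
            glm::quat q1 = glm::quat(glm::vec3(0.0f, glm::radians(90.0f), 0.0f));
            glm::quat q  = glm::slerp(q0, q1, 0.5f);

            // Conversions and the usual matrix pipeline.
            glm::mat4 model = glm::mat4_cast(q);          // quaternion -> mat4
            glm::mat4 view  = glm::lookAt(glm::vec3(0, 0, 5),
                                          glm::vec3(0), glm::vec3(0, 1, 0));
            glm::mat4 mv    = view * model;               // matrix product
            (void)d; (void)c; (void)mv;
        }

    GLM has no dedicated Euler-angle class (it treats Euler angles as a vec3, as above), but otherwise it covers the list, and being header-only it ports to OS X and Android without build changes.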

    Read the article

  • Uninstall libgl? Program using libgl instead of Nvidia's OpenGL libraries

    - by Tek
    I'm running Ubuntu 12.10 64-bit. I'm trying to get Valve's new Linux Steam client running, but I'm having a bit of trouble getting it to detect the right libraries -- or at least I think that's the issue. I installed the latest 310.19 Nvidia proprietary drivers from geforce.com. Here's the screenshot: I tried uninstalling xserver-xorg-video-nouveau, thinking this was the problem, but I'm still getting the same error. Any ideas what I can do so Steam detects the right Nvidia OpenGL libraries?

    Read the article

  • How are the effects found in "Autodesk Fluid FX" implemented using OpenGL ES?

    - by afds
    How are these kinds of effects technically implemented using OpenGL ES? Is the simulation performed on the GPU (using shaders) or on the CPU, with some smart vertex positioning and texturing? Why does it appear so fast (in terms of performance)? You can check a video of the app here: http://www.youtube.com/watch?v=F4KOk6QP6kQ Edit: Here is a presentation about the app: http://www.futuregameon.com/FGO2010_JosStam.pdf

    Read the article

  • Loading .png image from array of uint8_t into OpenGL ES texture

    - by unknownthreat
    Normally, when we want to load a texture for OpenGL ES from a .png, we simply add the .png images to the Xcode project. The .png files are altered for optimization by Xcode, and these altered files can be loaded into an OpenGL ES texture at runtime. However, what I am trying to do is quite different. I am trying to load a .png file that does not come prebuilt into the app. The png will be transmitted externally over UDP, and it will arrive as an array of bytes. I am very sure that the png is transferred correctly, but when it comes to displaying it as an OpenGL ES texture, the image somehow shows up incorrectly. The colors that were sent are present, but their positions are very wrong -- although the positions still retain some aspects of the original layout. Here: the left image shows the original .png, while the right shows the png displayed on the iPhone as an OpenGL ES texture. It looks as if the png data is not being decoded, or is being processed incorrectly. Below is the OpenGL ES code for turning the image into a texture:

        - (void) setTextureFromImageByte: (uint8_t*)imageByte {
            if (self = [super init]) {
                NSData* imageData = [[NSData alloc] initWithBytes: imageByte
                                                           length: imageLength];
                UIImage* img = [[UIImage alloc] initWithData: imageData];
                CGImageRef image = img.CGImage;
                int width = 512;
                int height = 512;
                if (image) {
                    int tempWidth = (int)width, tempHeight = (int)height;
                    if ((tempWidth & (tempWidth - 1)) != 0) {
                        NSLog(@"CAUTION! width is not power of 2. width == %d", tempWidth);
                    } else if ((tempHeight & (tempHeight - 1)) != 0) {
                        NSLog(@"CAUTION! height is not power of 2. height == %d", tempHeight);
                    } else {
                        void *spriteData = calloc(width * 4, height * 4);
                        CGContextRef spriteContext = CGBitmapContextCreate(
                            spriteData, width, height, 8, width * 4,
                            CGImageGetColorSpace(image),
                            kCGImageAlphaPremultipliedLast);
                        CGContextDrawImage(spriteContext,
                            CGRectMake(0.0, 0.0, width, height), image);
                        CGContextRelease(spriteContext);
                        glBindTexture(GL_TEXTURE_2D, 1);
                        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 320, 435,
                                        GL_RGBA, GL_UNSIGNED_BYTE, spriteData);
                        free(spriteData);
                    }
                } else {
                    NSLog(@"ERROR: Image not loaded...");
                }
                [img release];
                [imageData release];
            }
        }

    So does anyone know how to deal with this? Is it because the iPhone only accepts the altered pngs from Xcode? What can we do in this case to make the png image display correctly?

    Read the article

  • OpenGL Coordinate system confusion

    - by user146780
    Maybe I set up GLUT wrong. Basically, I want vertices to be relative to their size in pixels. For example, right now if I create a hexagon, it takes up the whole screen even though its units are only 6.

        #include <iostream>
        #include <stdlib.h>  // Needed for "exit" function
        #include <cmath>

        // Include OpenGL header files, so that we can use OpenGL
        #ifdef __APPLE__
        #include <OpenGL/OpenGL.h>
        #include <GLUT/glut.h>
        #else
        #include <GL/glut.h>
        #endif

        using namespace std;

        // Called when a key is pressed
        void handleKeypress(unsigned char key,  // The key that was pressed
                            int x, int y) {     // The current mouse coordinates
            switch (key) {
                case 27:      // Escape key
                    exit(0);  // Exit the program
            }
        }

        // Initializes 3D rendering
        void initRendering() {
            // Makes 3D drawing work when something is in front of something else
            glEnable(GL_DEPTH_TEST);
        }

        // Called when the window is resized
        void handleResize(int w, int h) {
            // Tell OpenGL how to convert from coordinates to pixel values
            glViewport(0, 0, w, h);
            glMatrixMode(GL_PROJECTION);  // Switch to setting the camera perspective
            glLoadIdentity();             // Reset the camera
            gluPerspective(45.0,                   // The camera angle
                           (double)w / (double)h,  // The width-to-height ratio
                           1.0,                    // The near z clipping coordinate
                           200.0);                 // The far z clipping coordinate
        }

        // Draws the 3D scene
        void drawScene() {
            // Clear information from the last draw
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            glLoadIdentity();  // Reset the drawing perspective
            glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
            glBegin(GL_POLYGON);  // Begin polygon coordinates (hexagon)
            glColor3f(255, 0, 0);
            for (int i = 0; i < 6; ++i) {
                glVertex2d(sin(i / 6.0 * 2 * 3.1415), cos(i / 6.0 * 2 * 3.1415));
            }
            glEnd();            // End polygon coordinates
            glutSwapBuffers();  // Send the 3D scene to the screen
        }

        int main(int argc, char** argv) {
            // Initialize GLUT
            glutInit(&argc, argv);
            glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_DEPTH);
            glutInitWindowSize(400, 400);  // Set the window size

            // Create the window
            glutCreateWindow("Basic Shapes - videotutorialsrock.com");
            initRendering();  // Initialize rendering

            // Set handler functions for drawing, keypresses, and window resizes
            glutDisplayFunc(drawScene);
            glutKeyboardFunc(handleKeypress);
            glutReshapeFunc(handleResize);

            glutMainLoop();  // Start the main loop. glutMainLoop doesn't return.
            return 0;        // This line is never reached
        }

    How can I make it so that a polygon of (0,0) (10,0) (10,10) (0,10) defines a polygon starting at the top left of the screen with a width and height of 10 pixels? Thanks
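    For the pixel-coordinate behavior asked about at the end, the usual answer is an orthographic projection that maps one unit to one pixel with the origin at the top-left, replacing the gluPerspective() call above; a sketch of such a reshape handler:

        // Sketch: map OpenGL units 1:1 to window pixels, origin at the
        // top-left, y growing downward. With this projection, the polygon
        // (0,0) (10,0) (10,10) (0,10) is a 10x10-pixel square in the
        // top-left corner of the window.
        #include <GL/glut.h>

        void handleResizePixels(int w, int h) {
            glViewport(0, 0, w, h);
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            gluOrtho2D(0.0, (double)w,   // left, right
                       (double)h, 0.0);  // bottom, top (flipped so y points down)
            glMatrixMode(GL_MODELVIEW);
            glLoadIdentity();
        }

    Note that with a perspective projection, as in the code above, there is no fixed unit-to-pixel relationship: on-screen size depends on the distance from the camera.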

    Read the article

  • Constant game speed independent of variable FPS in OpenGL with GLUT?

    - by Nazgulled
    I've been reading Koen Witters' detailed article about different game loop solutions, but I'm having some problems implementing the last one with GLUT, which is the recommended one. After reading a couple of articles, tutorials, and code from other people on how to achieve a constant game speed, I think what I currently have implemented (I'll post the code below) is what Koen Witters called Game Speed dependent on Variable FPS, the second one in his article.

    First, through my searching experience, there are a couple of people who probably have the knowledge to help out with this but don't know what GLUT is, so I'm going to try to explain (feel free to correct me) the relevant functions of this OpenGL toolkit for my problem. Skip this section if you know what GLUT is and how to play with it.

    GLUT Toolkit: GLUT is an OpenGL toolkit that helps with common tasks in OpenGL.

    - glutDisplayFunc(renderScene) takes a pointer to a renderScene() callback, which will be responsible for rendering everything. The renderScene() function will only be called once after the callback registration.
    - glutTimerFunc(TIMER_MILLISECONDS, processAnimationTimer, 0) takes the number of milliseconds to pass before calling the callback processAnimationTimer(). The last argument is just a value to pass to the timer callback. processAnimationTimer() will not be called every TIMER_MILLISECONDS, just once.
    - glutPostRedisplay() requests that GLUT render a new frame, so we need to call it every time we change something in the scene.
    - glutIdleFunc(renderScene) could be used to register a callback to renderScene() (this does not make glutDisplayFunc() irrelevant), but this function should be avoided because the idle callback is continuously called when events are not being received, increasing the CPU load.
    - glutGet(GLUT_ELAPSED_TIME) returns the number of milliseconds since glutInit was called (or the first call to glutGet(GLUT_ELAPSED_TIME)). That's the timer we have with GLUT. I know there are better alternatives for high-resolution timers, but let's stick with this one for now.

    I think this is enough information on how GLUT renders frames, so people who didn't know about it can also pitch in on this question and try to help if they feel like it.

    Current implementation: Now, I'm not sure I have correctly implemented the second solution proposed by Koen, Game Speed dependent on Variable FPS. The relevant code goes like this:

        #define TICKS_PER_SECOND 30
        #define MOVEMENT_SPEED 2.0f

        const int TIMER_MILLISECONDS = 1000 / TICKS_PER_SECOND;

        int previousTime;
        int currentTime;
        int elapsedTime;

        void renderScene(void) {
            (...)
            // Setup the camera position and looking point
            SceneCamera.LookAt();
            // Do all drawing below...
            (...)
        }

        void processAnimationTimer(int value) {
            // Set up the timer to be called again
            glutTimerFunc(TIMER_MILLISECONDS, processAnimationTimer, 0);
            // Get the time when the previous frame was rendered
            previousTime = currentTime;
            // Get the current time (in milliseconds) and calculate the elapsed time
            currentTime = glutGet(GLUT_ELAPSED_TIME);
            elapsedTime = currentTime - previousTime;
            /* Multiply the camera direction vector by constant speed, then by
               the elapsed time (in seconds), and then move the camera */
            SceneCamera.Move(cameraDirection * MOVEMENT_SPEED * (elapsedTime / 1000.0f));
            // Request a new frame (this will call my renderScene() once)
            glutPostRedisplay();
        }

        void main(int argc, char **argv) {
            glutInit(&argc, argv);
            (...)
            glutDisplayFunc(renderScene);
            (...)
            // Set up the timer to be called the first time
            glutTimerFunc(TIMER_MILLISECONDS, processAnimationTimer, 0);
            // Read the current time since glutInit was called
            currentTime = glutGet(GLUT_ELAPSED_TIME);
            glutMainLoop();
        }

    This implementation doesn't feel right. It works in the sense that it helps the game speed stay constant regardless of the FPS, so that moving from point A to point B takes the same time no matter the high/low framerate. However, I believe I'm limiting the game framerate with this approach. Each frame will only be rendered when the timer callback is called, which means the framerate will be roughly around TICKS_PER_SECOND frames per second. This doesn't feel right; you shouldn't limit your powerful hardware like that. It's my understanding, though, that I still need to calculate elapsedTime. Just because I'm telling GLUT to call the timer callback every TIMER_MILLISECONDS doesn't mean it will always do so on time.

    I'm not sure how I can fix this, and to be completely honest, I have no idea what the game loop is in GLUT -- you know, the while(game_is_running) loop in Koen's article. My understanding is that GLUT is event-driven and that the game loop starts when I call glutMainLoop() (which never returns), yes? I thought I could register an idle callback with glutIdleFunc() and use that as a replacement for glutTimerFunc(), only rendering when necessary (instead of all the time as usual), but when I tested this with an empty callback (like void gameLoop() {}) it was basically doing nothing -- only a black screen -- and the CPU spiked to 25% and remained there until I killed the game, after which it went back to normal. So I don't think that's the path to follow.

    Using glutTimerFunc() is definitely not a good approach for driving all movements/animations, as it limits my game to a constant FPS, which is not cool. Or maybe I'm using it wrong and my implementation is not right? How exactly can I have a constant game speed with variable FPS? More exactly, how do I correctly implement Koen's Constant Game Speed with Maximum FPS solution (the fourth one in his article) with GLUT? Maybe this is not possible at all with GLUT? If not, what are my alternatives? What is the best approach to this problem (constant game speed) with GLUT?

    I originally posted this question on Stack Overflow before being pointed to this site. The following is a different approach I tried after creating the question on SO, so I'm posting it here too.

    Another approach: I've been experimenting, and here's what I was able to achieve now. Instead of calculating the elapsed time in a timed function (which limits my game's framerate), I'm now doing it in renderScene(). Whenever changes to the scene happen, I call glutPostRedisplay() (i.e. camera moving, some object animating, etc.), which will make a call to renderScene(). I can use the elapsed time in this function to move my camera, for instance. My code has now turned into this:

        int previousTime;
        int currentTime;
        int elapsedTime;

        void renderScene(void) {
            (...)
            // Get the time when the previous frame was rendered
            previousTime = currentTime;
            // Get the current time (in milliseconds) and calculate the elapsed time
            currentTime = glutGet(GLUT_ELAPSED_TIME);
            elapsedTime = currentTime - previousTime;
            /* Multiply the camera direction vector by constant speed, then by
               the elapsed time (in seconds), and then move the camera */
            SceneCamera.Move(cameraDirection * MOVEMENT_SPEED * (elapsedTime / 1000.0f));
            // Setup the camera position and looking point
            SceneCamera.LookAt();
            // All drawing code goes inside this function
            drawCompleteScene();
            glutSwapBuffers();
            /* Redraw the frame ONLY if the user is moving the camera (similar
               code will be needed to redraw the frame for other events) */
            if (!IsTupleEmpty(cameraDirection)) {
                glutPostRedisplay();
            }
        }

        void main(int argc, char **argv) {
            glutInit(&argc, argv);
            (...)
            glutDisplayFunc(renderScene);
            (...)
            currentTime = glutGet(GLUT_ELAPSED_TIME);
            glutMainLoop();
        }

    In conclusion, it's working, or so it seems. If I don't move the camera, the CPU usage is low and nothing is being rendered (for testing purposes I only have a grid extending to 4000.0f, while zFar is set to 1000.0f). When I start moving the camera, the scene starts redrawing itself. If I keep pressing the move keys, the CPU usage increases; this is normal behavior. It drops back when I stop moving. Unless I'm missing something, this seems like a good approach for now. I did find this interesting article on iDevGames, and this implementation is probably affected by the problem described in that article. What are your thoughts on that?

    Please note that I'm just doing this for fun; I have no intention of creating a game to distribute or anything like that, not in the near future at least. If I did, I would probably go with something other than GLUT. But since I'm using GLUT, and setting aside the problem described on iDevGames, do you think this latest implementation is sufficient for GLUT? The only real issue I can think of right now is that I'll need to keep calling glutPostRedisplay() every time the scene changes, and keep calling it until there's nothing new to redraw. A little complexity added to the code for a better cause, I think. What do you think?
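    For reference, a minimal sketch of Koen's fourth solution (fixed simulation ticks, rendering as fast as possible with interpolation) adapted to GLUT's event model is below; updateGame() and drawSceneInterpolated() are placeholder names, and the loop is driven by glutIdleFunc() with the idle-callback CPU caveat mentioned earlier:

        // Sketch: "Constant Game Speed with Maximum FPS" on top of GLUT.
        // The simulation advances in fixed 40 ms ticks inside the idle
        // callback; the display callback renders as often as GLUT lets it,
        // interpolating between ticks. updateGame()/drawSceneInterpolated()
        // are hypothetical placeholders for the game's own code.
        #include <GL/glut.h>

        const int TICKS_PER_SECOND = 25;
        const int SKIP_TICKS       = 1000 / TICKS_PER_SECOND;
        const int MAX_FRAMESKIP    = 5;

        static int   nextGameTick;    // in GLUT milliseconds
        static float interpolation;   // fraction [0,1] between simulation ticks

        void updateGame();                          // advance simulation one tick
        void drawSceneInterpolated(float interp);   // render between ticks

        void idleLoop() {                           // registered with glutIdleFunc
            int loops = 0;
            // Catch the simulation up to real time, at most MAX_FRAMESKIP ticks.
            while (glutGet(GLUT_ELAPSED_TIME) > nextGameTick && loops < MAX_FRAMESKIP) {
                updateGame();
                nextGameTick += SKIP_TICKS;
                ++loops;
            }
            interpolation = (glutGet(GLUT_ELAPSED_TIME) + SKIP_TICKS - nextGameTick)
                            / (float)SKIP_TICKS;
            glutPostRedisplay();                    // render a frame now
        }

        void displayFunc() {                        // registered with glutDisplayFunc
            drawSceneInterpolated(interpolation);
            glutSwapBuffers();
        }

    This renders as fast as the hardware allows while the simulation stays at a fixed tick rate; the trade-off, as noted in the question, is that an idle-driven loop keeps one core busy, which is inherent to busy-looping inside GLUT.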

    Read the article
