Search Results

Search found 3627 results on 146 pages for 'opengl es'.


  • Constant game speed independent of variable FPS in OpenGL with GLUT?

    - by Nazgulled
    I've been reading Koen Witters' detailed article about different game loop solutions, but I'm having some problems implementing the last one with GLUT, which is the recommended one. After reading a couple of articles, tutorials and code from other people on how to achieve a constant game speed, I think that what I currently have implemented (I'll post the code below) is what Koen Witters called Game Speed dependent on Variable FPS, the second one in his article. First, through my searching experience, there's a couple of people that probably have the knowledge to help out on this but don't know what GLUT is, so I'm going to try and explain (feel free to correct me) the relevant functions of this OpenGL toolkit for my problem. Skip this section if you know what GLUT is and how to play with it.

    GLUT Toolkit: GLUT is an OpenGL toolkit that helps with common tasks in OpenGL. glutDisplayFunc(renderScene) takes a pointer to a renderScene() callback function, which will be responsible for rendering everything. The renderScene() function will only be called once after the callback registration. glutTimerFunc(TIMER_MILLISECONDS, processAnimationTimer, 0) takes the number of milliseconds to pass before calling the callback processAnimationTimer(). The last argument is just a value to pass to the timer callback. processAnimationTimer() will not be called every TIMER_MILLISECONDS but just once. The glutPostRedisplay() function requests that GLUT render a new frame, so we need to call this every time we change something in the scene. glutIdleFunc(renderScene) could be used to register a callback to renderScene() (this does not make glutDisplayFunc() irrelevant), but this function should be avoided because the idle callback is continuously called when events are not being received, increasing the CPU load. The glutGet(GLUT_ELAPSED_TIME) function returns the number of milliseconds since glutInit was called (or the first call to glutGet(GLUT_ELAPSED_TIME)). That's the timer we have with GLUT. I know there are better alternatives for high-resolution timers, but let's stick with this one for now. I think this is enough information on how GLUT renders frames, so people that didn't know about it can also pitch in on this question to try and help if they feel like it.

    Current Implementation: Now, I'm not sure I have correctly implemented the second solution proposed by Koen, Game Speed dependent on Variable FPS. The relevant code for that goes like this:

        #define TICKS_PER_SECOND 30
        #define MOVEMENT_SPEED 2.0f

        const int TIMER_MILLISECONDS = 1000 / TICKS_PER_SECOND;

        int previousTime;
        int currentTime;
        int elapsedTime;

        void renderScene(void) {
            (...)
            // Setup the camera position and looking point
            SceneCamera.LookAt();
            // Do all drawing below...
            (...)
        }

        void processAnimationTimer(int value) {
            // Set up the timer to be called again
            glutTimerFunc(TIMER_MILLISECONDS, processAnimationTimer, 0);
            // Get the time when the previous frame was rendered
            previousTime = currentTime;
            // Get the current time (in milliseconds) and calculate the elapsed time
            currentTime = glutGet(GLUT_ELAPSED_TIME);
            elapsedTime = currentTime - previousTime;
            /* Multiply the camera direction vector by constant speed, then by the
               elapsed time (in seconds), and then move the camera */
            SceneCamera.Move(cameraDirection * MOVEMENT_SPEED * (elapsedTime / 1000.0f));
            // Request to render a new frame (this will call my renderScene() once)
            glutPostRedisplay();
        }

        void main(int argc, char **argv) {
            glutInit(&argc, argv);
            (...)
            glutDisplayFunc(renderScene);
            (...)
            // Set up the timer to be called a first time
            glutTimerFunc(TIMER_MILLISECONDS, processAnimationTimer, 0);
            // Read the current time since glutInit was called
            currentTime = glutGet(GLUT_ELAPSED_TIME);
            glutMainLoop();
        }

    This implementation doesn't feel right. It works in the sense that it helps the game speed stay constant regardless of the FPS, so moving from point A to point B takes the same time no matter how high or low the framerate is. However, I believe I'm limiting the game framerate with this approach. Each frame will only be rendered when the timer callback is called, which means the framerate will be roughly around TICKS_PER_SECOND frames per second. This doesn't feel right; you shouldn't limit your powerful hardware like that. It's my understanding, though, that I still need to calculate elapsedTime. Just because I'm telling GLUT to call the timer callback every TIMER_MILLISECONDS, it doesn't mean it will always do that on time. I'm not sure how I can fix this, and to be completely honest, I have no idea what the game loop in GLUT is, you know, the while( game_is_running ) loop in Koen's article. It's my understanding that GLUT is event-driven and that the game loop starts when I call glutMainLoop() (which never returns), yes? I thought I could register an idle callback with glutIdleFunc() and use that as a replacement for glutTimerFunc(), only rendering when necessary (instead of all the time as usual), but when I tested this with an empty callback (like void gameLoop() {}) that was basically doing nothing, only a black screen, the CPU spiked to 25% and remained there until I killed the game, after which it went back to normal. So I don't think that's the path to follow. Basing all movements/animations on glutTimerFunc() is definitely not a good approach, as I'm limiting my game to a constant FPS, which is not cool. Or maybe I'm using it wrong and my implementation is not right? How exactly can I have a constant game speed with variable FPS? More exactly, how do I correctly implement Koen's Constant Game Speed with Maximum FPS solution (the fourth one in his article) with GLUT? Maybe this is not possible at all with GLUT? If not, what are my alternatives? What is the best approach to this problem (constant game speed) with GLUT? I originally posted this question on Stack Overflow before being pointed to this site. The following is a different approach I tried after creating the question on SO, so I'm posting it here too.

    Another Approach: I've been experimenting, and here's what I was able to achieve. Instead of calculating the elapsed time in a timed function (which limits my game's framerate), I'm now doing it in renderScene(). Whenever changes to the scene happen, I call glutPostRedisplay() (i.e. camera moving, some object animating, etc.), which will make a call to renderScene(). I can use the elapsed time in this function to move my camera, for instance. My code has now turned into this:

        int previousTime;
        int currentTime;
        int elapsedTime;

        void renderScene(void) {
            (...)
            // Get the time when the previous frame was rendered
            previousTime = currentTime;
            // Get the current time (in milliseconds) and calculate the elapsed time
            currentTime = glutGet(GLUT_ELAPSED_TIME);
            elapsedTime = currentTime - previousTime;
            /* Multiply the camera direction vector by constant speed, then by the
               elapsed time (in seconds), and then move the camera */
            SceneCamera.Move(cameraDirection * MOVEMENT_SPEED * (elapsedTime / 1000.0f));
            // Setup the camera position and looking point
            SceneCamera.LookAt();
            // All drawing code goes inside this function
            drawCompleteScene();
            glutSwapBuffers();
            /* Redraw the frame ONLY if the user is moving the camera (similar code
               will be needed to redraw the frame for other events) */
            if(!IsTupleEmpty(cameraDirection)) {
                glutPostRedisplay();
            }
        }

        void main(int argc, char **argv) {
            glutInit(&argc, argv);
            (...)
            glutDisplayFunc(renderScene);
            (...)
            currentTime = glutGet(GLUT_ELAPSED_TIME);
            glutMainLoop();
        }

    In conclusion, it's working, or so it seems. If I don't move the camera, the CPU usage is low and nothing is being rendered (for testing purposes I only have a grid extending for 4000.0f, while zFar is set to 1000.0f). When I start moving the camera, the scene starts redrawing itself. If I keep pressing the move keys, the CPU usage will increase; this is normal behavior. It drops back when I stop moving. Unless I'm missing something, it seems like a good approach for now. I did find this interesting article on iDevGames, and this implementation is probably affected by the problem described in that article. What are your thoughts on that? Please note that I'm just doing this for fun; I have no intention of creating some game to distribute or anything like that, not in the near future at least. If I did, I would probably go with something other than GLUT. But since I'm using GLUT, and other than the problem described on iDevGames, do you think this latest implementation is sufficient for GLUT? The only real issue I can think of right now is that I'll need to keep calling glutPostRedisplay() every time something in the scene changes, and keep calling it until there's nothing new to redraw. A little complexity added to the code for a better cause, I think. What do you think?
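
    For reference, here is a minimal sketch of what Koen's fourth solution (Constant Game Speed with Maximum FPS) could look like on top of GLUT, using glutIdleFunc() to render as often as possible while updates run at a fixed tick rate. updateGame() and drawScene() are hypothetical placeholders, and, as noted above, the idle-callback busy loop will peg a CPU core; that is the price of rendering at maximum FPS under GLUT.

        #include <GL/glut.h>

        const int TICKS_PER_SECOND = 25;
        const int SKIP_TICKS = 1000 / TICKS_PER_SECOND;
        const int MAX_FRAMESKIP = 5;

        static int nextGameTick;

        static void updateGame() { /* advance the simulation by one fixed tick */ }
        static void drawScene(float interpolation) { /* render, predicting ahead */ }

        // Registered as both display and idle callback: updates at a fixed rate,
        // renders as fast as the hardware allows.
        static void gameLoop()
        {
            int loops = 0;
            while (glutGet(GLUT_ELAPSED_TIME) > nextGameTick && loops < MAX_FRAMESKIP) {
                updateGame();
                nextGameTick += SKIP_TICKS;
                ++loops;
            }
            // How far we are between two ticks, for smooth rendering above the tick rate.
            float interpolation = (glutGet(GLUT_ELAPSED_TIME) + SKIP_TICKS - nextGameTick)
                                / (float)SKIP_TICKS;
            drawScene(interpolation);
            glutSwapBuffers();
        }

        int main(int argc, char** argv)
        {
            glutInit(&argc, argv);
            glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA);
            glutCreateWindow("fixed timestep");
            glutDisplayFunc(gameLoop);
            glutIdleFunc(gameLoop);   // the "game loop": called whenever GLUT is idle
            nextGameTick = glutGet(GLUT_ELAPSED_TIME);
            glutMainLoop();
            return 0;
        }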


  • optimizing iPhone OpenGL ES fill rate

    - by NateS
    I have an OpenGL ES game on the iPhone. My framerate is pretty sucky, ~20fps. Using the Xcode OpenGL ES performance tool on an iPhone 3G, it shows: Renderer Utilization: 95% to 99%, Tiler Utilization: ~27%. I am drawing a lot of pretty large images with a lot of blending. If I reduce the number of images drawn, framerates go from ~20 to ~40, though the performance tool results stay about the same (renderer still maxed). I think I'm being limited by the fill rate of the iPhone 3G, but I'm not sure. My questions are: How can I determine with more granularity where the bottleneck is? That is my biggest problem; I just don't know what is taking all the time. If it is fill rate, is there anything I can do to improve it besides just drawing less? I am using texture atlases. I have tried to minimize image binds, though it isn't always possible (drawing order, not everything fits on one 1024x1024 texture, etc). Every frame I do 10 image binds. This seems pretty reasonable, but I could be mistaken. I'm using vertex arrays and glDrawArrays. I don't really have a lot of geometry. I can try to be more precise if needed. Each image is 2 triangles, and I try to batch things where possible, though often (maybe half the time) images are drawn with individual glDrawArrays calls. Besides the images, I have ~60 triangles' worth of geometry being rendered in ~6 glDrawArrays calls. I often glTranslate before calling glDrawArrays. Would it improve the framerate to switch to VBOs? I don't think it is a huge amount of geometry, but maybe it is faster for other reasons? Are there certain things to watch out for that could reduce performance? E.g., should I avoid glTranslate, glColor4f, etc? I'm using glScissor in 3 places per frame. Each use consists of 2 glScissor calls, one to set it up and one to reset it to what it was. I don't know if there is much of a performance impact here. If I used PVRTC, would it be able to render faster? Currently all my images are GL_RGBA. I don't have memory issues. Here is a rough idea of what I'm drawing, in this order: 1) Switch to perspective matrix. 2) Draw a full screen background image. 3) Draw a full screen image with translucency (this one has a scrolling texture). 4) Draw a few sprites. 5) Switch to ortho matrix. 6) Draw a few sprites. 7) Switch to perspective matrix. 8) Draw sprites and some other textured geometry. 9) Switch to ortho matrix. 10) Draw a few sprites (e.g., game HUD). Steps 1-6 draw a bunch of background stuff, 8 draws most of the game content, and 10 draws the HUD. As you can see, there are many layers, some of them full screen, and some of the sprites are pretty large (1/4 of the screen). The layers use translucency, so I have to draw them in back-to-front order. This is further complicated by needing to draw various layers in ortho and others in perspective. I will gladly provide additional information if requested. Thanks in advance for any performance tips or general advice on my problem!
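
    To make the batching point concrete, here is a rough sketch (not taken from the game above; the Sprite struct and MAX_SPRITES are hypothetical) of baking per-sprite translations into a vertex array on the CPU, so every sprite that shares a texture can be submitted in one glDrawArrays call instead of a glTranslate plus a draw per sprite:

        #include <OpenGLES/ES1/gl.h>   // iPhone OpenGL ES 1.1 header

        const int MAX_SPRITES = 256;
        struct Sprite { float x, y, w, h, u0, v0, u1, v1; };  // position + atlas region

        static GLfloat verts[MAX_SPRITES * 12];      // 2 triangles * 3 vertices * (x,y)
        static GLfloat texcoords[MAX_SPRITES * 12];

        void drawSpriteBatch(const Sprite* s, int count)
        {
            for (int i = 0; i < count; ++i) {
                GLfloat* v = &verts[i * 12];
                GLfloat* t = &texcoords[i * 12];
                // Bake the translation into the vertices instead of calling glTranslate.
                const float x0 = s[i].x, y0 = s[i].y;
                const float x1 = s[i].x + s[i].w, y1 = s[i].y + s[i].h;
                v[0]=x0; v[1]=y0;  v[2]=x1; v[3]=y0;  v[4]=x1; v[5]=y1;   // triangle 1
                v[6]=x0; v[7]=y0;  v[8]=x1; v[9]=y1;  v[10]=x0; v[11]=y1; // triangle 2
                t[0]=s[i].u0; t[1]=s[i].v0;  t[2]=s[i].u1; t[3]=s[i].v0;
                t[4]=s[i].u1; t[5]=s[i].v1;  t[6]=s[i].u0; t[7]=s[i].v0;
                t[8]=s[i].u1; t[9]=s[i].v1;  t[10]=s[i].u0; t[11]=s[i].v1;
            }
            glVertexPointer(2, GL_FLOAT, 0, verts);
            glTexCoordPointer(2, GL_FLOAT, 0, texcoords);
            glDrawArrays(GL_TRIANGLES, 0, count * 6);  // one call for the whole batch
        }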


  • Will I have an easier time learning OpenGL in Pygame or Pyglet? (NeHe tutorials downloaded)

    - by shadowprotocol
    I'm looking between PyGame and Pyglet. Pyglet seems to be somewhat newer and more Pythony, but its last release according to Wikipedia was January '10. PyGame seems to have more documentation, more recent updates, and more published books/tutorials on the web for learning. I downloaded both the Pyglet and PyGame versions of the NeHe OpenGL tutorials (Lessons 1-10), which cover this material: lesson01 - Setting up the window; lesson02 - Polygons; lesson03 - Adding color; lesson04 - Rotation; lesson05 - 3D; lesson06 - Textures; lesson07 - Filters, lighting, input; lesson08 - Blending (transparency); lesson09 - 2D sprites in 3D; lesson10 - Moving in a 3D world. What do you guys think? Is my hunch that I'll be better off working with PyGame somewhat warranted?


  • Is there a global "low resolution" filter for OpenGL?

    - by Ian Henry
    I'm trying to learn a little about OpenGL, so I'm making a simple 2D game (with OpenTK), and so far it's coming along well. I thought it would be fun to give it that, for lack of a better word, retropixelated look of games from the early nineties. I figured it would be an easy thing to do -- simply draw everything at half its normal size and scale up with no anti-aliasing. But I can't find any resources on how to do this. I can set the min/mag filters of my textures to nearest and that works fine for my sprites, but I'm using lots of primitives and I'd like the effect to apply to them as well. The one idea I had was to draw everything at half size, then somehow copy the render buffer to a texture, then render that texture full-size, but I don't know how to do that, and it seems like there must be a better way. Can anyone help me out?
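
    One way to sketch the render-buffer-to-texture idea, assuming framebuffer objects are available (they are core from OpenGL 3.0, and OpenTK exposes them); drawScene() and drawFullscreenTexturedQuad() are placeholders for the game's own rendering:

        // Setup, once: a half-resolution color texture attached to an FBO.
        GLuint fbo, lowResTex;
        glGenTextures(1, &lowResTex);
        glBindTexture(GL_TEXTURE_2D, lowResTex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width / 2, height / 2, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, 0);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); // no smoothing
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); // on upscale
        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, lowResTex, 0);
        glBindFramebuffer(GL_FRAMEBUFFER, 0);

        // Every frame: draw everything (primitives included) at half size...
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glViewport(0, 0, width / 2, height / 2);
        drawScene();
        // ...then stretch the result over the whole window; nearest filtering
        // upscales without anti-aliasing, giving the chunky retro look.
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        glViewport(0, 0, width, height);
        glBindTexture(GL_TEXTURE_2D, lowResTex);
        drawFullscreenTexturedQuad();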


  • Workaround the flip queue (AKA pre-rendered frames) in OpenGL?

    - by user41500
    It appears that some drivers implement a "flip queue" such that, even with vsync enabled, the first few calls to swap buffers return immediately (queuing those frames for later use). It is only after this queue is filled that buffer swaps will block to synchronize with vblank. This behavior is detrimental to my application. It creates latency. Does anyone know of a way to disable it or a workaround for dealing with it? The OpenGL Wiki on Swap Interval suggests a call to glFinish after the swap but I've had no such luck with that trick.
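
    For what it's worth, a sketch of two ways to force the driver to drain the queue after each swap. The fence path needs ARB_sync (core in GL 3.2); the glReadPixels fallback is a crude synchronization point rather than a documented control, and SwapBuffers(hdc) stands in for whatever swap call the platform uses:

        SwapBuffers(hdc);  // or glXSwapBuffers / eglSwapBuffers

        #ifdef HAVE_ARB_SYNC
        // Block until the GPU has actually finished everything up to the swap.
        GLsync fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
        glClientWaitSync(fence, GL_SYNC_FLUSH_COMMANDS_BIT, 1000000000ull); // 1s timeout
        glDeleteSync(fence);
        #else
        // Reading a single pixel back forces completion on most drivers.
        unsigned char px[4];
        glReadPixels(0, 0, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, px);
        #endif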


  • How do I render only part of a texture to a point sprite in OpenGL ES for Android?

    - by nbolton
    Using the libgdx framework, I've figured out how to render a texture to a point sprite. The problem is, it renders the entire texture to the point sprite, where I only want a small part of it (since it's an isometric tile image). Here's a snippet from some demo code I wrote... create() { renderer = new ImmediateModeRenderer(); tiles = Gdx.graphics.newTexture( Gdx.files.internal("data/tiles2.png"), TextureFilter.MipMap, TextureFilter.Linear, TextureWrap.ClampToEdge, TextureWrap.ClampToEdge); Gdx.gl.glClearColor(0.6f, 0.7f, 0.9f, 1); Gdx.gl.glEnable(GL10.GL_TEXTURE_2D); Gdx.gl.glEnable(GL11.GL_POINT_SPRITE_OES); Gdx.gl11.glTexEnvi( GL11.GL_POINT_SPRITE_OES, GL11.GL_COORD_REPLACE_OES, GL11.GL_TRUE); Gdx.gl10.glPointSize(s); tiles.bind(); } render() { Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT); renderer.begin(GL10.GL_POINTS); // render 3 point sprites at various 3d points renderer.vertex(-.1f, 0, -.1f); renderer.vertex(0, 0, 0); renderer.vertex(.1f, 0, .1f); // ... more vertices here if needed (one for each sprite) ... renderer.end(); }
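
    Since GL_COORD_REPLACE_OES always generates texture coordinates that span the whole bound texture across the point, a common fallback is to drop point sprites for tiles and emit a small quad with explicit atlas coordinates instead. A rough sketch in C-style code (the parameter names are hypothetical; the same vertex data can be fed through libgdx's renderer):

        // Emit one tile as two triangles with explicit texture coordinates picking
        // the sub-region (u0,v0)-(u1,v1) out of the atlas.
        void emitTile(float cx, float cy, float size,
                      float u0, float v0, float u1, float v1,
                      float* verts, float* uvs)  // 12 floats each
        {
            const float h = size * 0.5f;
            const float x0 = cx - h, y0 = cy - h, x1 = cx + h, y1 = cy + h;
            const float v2[12]  = { x0,y0, x1,y0, x1,y1,  x0,y0, x1,y1, x0,y1 };
            const float uv2[12] = { u0,v0, u1,v0, u1,v1,  u0,v0, u1,v1, u0,v1 };
            for (int i = 0; i < 12; ++i) { verts[i] = v2[i]; uvs[i] = uv2[i]; }
        }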


  • Linux OpenGL programming: should I use GLX or something else?

    - by pahnin
    I'm new to OpenGL and found that there are a lot of libraries to do this in C, and I also found that GLX is the most friendly with the Linux X server. I just want to do basic stuff, and I cannot find any tutorials for GLX. Is GLX a bad thing? I just want to do some small graphical things without installing many libraries and getting confused. Can anyone suggest something that has tutorials and is simple to compile? I found a link with an example using GLX and it worked perfectly with no errors; can anyone please suggest where I can find good documentation or any better libraries?
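
    For reference, roughly the smallest self-contained GLX program (a sketch from memory rather than a tutorial excerpt; compile with -lGL -lX11). It opens one X window, attaches an OpenGL context, and clears the screen in a loop:

        #include <GL/glx.h>
        #include <X11/Xlib.h>

        int main(void)
        {
            Display* dpy = XOpenDisplay(0);
            int attribs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, GLX_DEPTH_SIZE, 16, None };
            XVisualInfo* vi = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);

            XSetWindowAttributes swa;
            swa.colormap = XCreateColormap(dpy, RootWindow(dpy, vi->screen),
                                           vi->visual, AllocNone);
            swa.event_mask = ExposureMask | KeyPressMask;
            Window win = XCreateWindow(dpy, RootWindow(dpy, vi->screen), 0, 0,
                                       640, 480, 0, vi->depth, InputOutput,
                                       vi->visual, CWColormap | CWEventMask, &swa);
            XMapWindow(dpy, win);

            GLXContext ctx = glXCreateContext(dpy, vi, 0, GL_TRUE);
            glXMakeCurrent(dpy, win, ctx);

            for (;;) {                       // trivial render loop
                glClearColor(0.2f, 0.3f, 0.4f, 1.0f);
                glClear(GL_COLOR_BUFFER_BIT);
                glXSwapBuffers(dpy, win);
            }
        }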


  • How to draw unlimited FPS on Mac OS X with OpenGL?

    - by V1ru8
    I'd like to draw as many frames as possible with OpenGL on Mac OS X to measure the performance in different scenes. What I've tried so far: Using a CVDisplayLink that has NSOpenGLCPSwapInterval set to 0, so it does not sync with the display. But with that it's still stuck at a max of 60 FPS. Using normal -drawRect: with a timer that fires every 1/1000 sec and calls -setNeedsDisplay:. Still not more than 60 FPS. Same as 2, but I call -display in the timer callback. With that I get the FPS above 60, but it still stops at 100-110 FPS, although the frame rate should easily be ten times higher. Any idea how I can really draw as many frames as possible?
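
    A sketch of the CGL-level knob, assuming you can get at the NSOpenGLContext's underlying CGLContextObj and run the loop on your own thread (a CVDisplayLink, by design, fires once per refresh, so it cannot exceed the display rate); drawFrame() is a hypothetical placeholder:

        #include <OpenGL/OpenGL.h>

        void runBenchmark(CGLContextObj ctx)
        {
            GLint swapInterval = 0;                  // 0 = never wait for vblank
            CGLSetParameter(ctx, kCGLCPSwapInterval, &swapInterval);
            for (;;) {
                // drawFrame();                      // hypothetical scene rendering
                CGLFlushDrawable(ctx);               // the CGL equivalent of a swap
            }
        }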


  • Triangles in a C++ STL vector as an Objective-C member sometimes draw incorrectly in OpenGL ES

    - by Rahil627
    The polygons draw correctly 80% of the time. When it fails, a vertex is dislocated. The polygon is consistently drawn with the wrong vertex. I checked that the vector is correct during initialization, even when it's wrongly drawn. I'm using Cocos2d. The class member: @interface Polygon : CCSprite { std::vector<float> triangleVertices; } The draw function called in [Polygon draw]: + (void)drawTrianglesWithVertices:(const std::vector<float> &)v { //glEnableClientState(GL_VERTEX_ARRAY); glDisable(GL_TEXTURE_2D); glDisableClientState(GL_TEXTURE_COORD_ARRAY); glDisableClientState(GL_COLOR_ARRAY); glVertexPointer(2, GL_FLOAT, 0, &v[0]); glDrawArrays(GL_TRIANGLES, 0, v.size()); //glDisableClientState(GL_VERTEX_ARRAY); glEnableClientState(GL_COLOR_ARRAY); glEnableClientState(GL_TEXTURE_COORD_ARRAY); glEnable(GL_TEXTURE_2D); } Any ideas?
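
    For comparison, a sketch of the same draw call with the count derived from the number of vertices rather than the number of floats. glDrawArrays takes a count of vertices; with two floats per vertex, a vector of floats describes v.size() / 2 vertices, so passing v.size() asks GL to read twice as many vertices as exist:

        // glDrawArrays takes a count of vertices, not floats. With two floats per
        // vertex, passing v.size() would read past the end of the array.
        glVertexPointer(2, GL_FLOAT, 0, &v[0]);
        glDrawArrays(GL_TRIANGLES, 0, static_cast<GLsizei>(v.size() / 2));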


  • Do I need to buy a graphics card with OpenGL? [closed]

    - by Greb
    I'm building my own computer right now, and I've found a very good combo deal on a graphics card with monitors. I will be using mostly Linux of course, but I'd like to be able to do some Windows work as well. I will do some gaming, but mostly just graphical design with Blender. So how important is it that my graphics card supports OpenGL? This is the deal I'm looking at: http://www.newegg.com/Product/ComboBundleDetails.aspx?ItemList=Combo.636041 Sorry if this isn't the correct forum for this. Please let me know if you know of someplace better I could ask this.


  • What is the relationship between OpenGL, GLX, DRI, and Mesa3D?

    - by user65308
    I am starting out doing some low-level 3D programming in Linux. I have a lot of experience using the higher-level graphics API Open Inventor. I know it is not strictly necessary to be aware of how all these things fit together, but I'm just curious. I know OpenGL is just a standard for graphics applications. Mesa3D seems to be an open source implementation of this standard. So where do GLX and DRI fit in? Digging around on Wikipedia and all these websites, I've yet to find an explanation of exactly how it all fits together. Where does hardware acceleration happen? What do proprietary drivers have to do with this? Thanks!


  • What's wrong with this OpenGL ES 2.0 shader?

    - by Project Dumbo Dev
    I just can't understand this. The code works perfectly on the emulator (which is supposed to give more problems than phones…), but when I try it on an LG-E610 it doesn't compile the vertex shader. This is my error log (which contains the shader code as well): EDITED Shader: uniform mat4 u_Matrix; uniform int u_XSpritePos; uniform int u_YSpritePos; uniform float u_XDisplacement; uniform float u_YDisplacement; attribute vec4 a_Position; attribute vec2 a_TextureCoordinates; varying vec2 v_TextureCoordinates; void main(){ v_TextureCoordinates.x= (a_TextureCoordinates.x + u_XSpritePos) * u_XDisplacement; v_TextureCoordinates.y= (a_TextureCoordinates.y + u_YSpritePos) * u_YDisplacement; gl_Position = u_Matrix * a_Position; } Log reports this before loading/compiling the shader: 11-05 18:46:25.579: D/memalloc(1649): /dev/pmem: Mapped buffer base:0x51984000 size:5570560 offset:4956160 fd:46 11-05 18:46:25.629: D/memalloc(1649): /dev/pmem: Mapped buffer base:0x5218d000 size:5836800 offset:5570560 fd:49 Maybe it has something to do with that mem alloc? The phone is also giving a constant error while plugged in: ERROR FBIOGET_ESDCHECKLOOP fail, from msm7627a.gralloc Edited: "InfoLog:" refers to glGetShaderInfoLog, and it's returning nothing. Since I removed the log in a previous edit, I will just say I'm looking for feedback on compiling shaders. Solution + more questions: OK, the problem seems to be that either ints are not working (generally speaking) or that you can't mix floats with ints. That brings me to the question: why on earth is glGetShaderInfoLog returning nothing? Shouldn't it tell me something is wrong on those lines? It surely does when I misspell something. I solved it by turning everything into floats, but if someone can shed some light on this, it would be appreciated. Thanks.
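
    As a side note on the info-log question: glGetShaderInfoLog() only fills a buffer the caller supplies, so it is worth checking GL_COMPILE_STATUS and GL_INFO_LOG_LENGTH first. A minimal sketch of the usual retrieval pattern (the printf is illustrative; on Android it would go to logcat):

        #include <GLES2/gl2.h>
        #include <cstdio>
        #include <vector>

        GLuint compileVertexShader(const char* source)
        {
            GLuint shader = glCreateShader(GL_VERTEX_SHADER);
            glShaderSource(shader, 1, &source, 0);
            glCompileShader(shader);

            GLint compiled = GL_FALSE;
            glGetShaderiv(shader, GL_COMPILE_STATUS, &compiled);
            if (compiled == GL_FALSE) {
                GLint len = 0;
                glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &len);  // includes the NUL
                std::vector<char> log(len > 1 ? len : 1);
                glGetShaderInfoLog(shader, (GLsizei)log.size(), 0, log.data());
                // Some drivers return an empty log even on failure, so print len too.
                printf("compile failed (log %d bytes): %s\n", (int)len, log.data());
                glDeleteShader(shader);
                return 0;
            }
            return shader;
        }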


  • OpenGL view in an iPad splitview

    - by dc
    I'm attempting to add an OpenGL view (such as the one given in Apple's sample code) as the detail view of an iPad split view, but am running into issues. I've taken the sample code from the base OpenGL project and attempted to add it as a subview of my DetailViewController, i.e. EAGLView *glview = [[EAGLView alloc] initWithFrame:CGRectMake(0,0,100,100)], but when I add it to the main view and call startAnimating on it, nothing at all happens. Any solutions to this? I have never worked with OpenGL before, so perhaps I'm doing this all wrong.


  • OpenGL Wrapper in .Net

    - by Ngu Soon Hui
    This question is similar to the one here. But I feel that the recommended answers (such as Tao and OpenTK) are not good enough, because they are just a direct port of OpenGL, with no OOP design, and hard to use. What I'm looking for is a .NET OpenGL wrapper that is written on clear OOP principles, easy to use (easy to apply textures and lighting, easy to debug, etc.), able to rotate the 3D diagram with the mouse (a feature that is critically missing from OpenGL and Tao), and able to export to other file formats (such as dwg or dxf or the Google Map file format). Any suggestions? Either open source or commercial components would do.


  • iPhone OpenGL ES freezes for no reason

    - by KJ
    Hi, I'm quite new to iPhone OpenGL ES, and I'm really stuck. I was trying to implement shadow mapping on the iPhone, and I allocated two 512*1024 32-bit textures, for the shadow map and the diffuse map respectively. The problem is that my application started to freeze and reboot the device after I added the shadow map allocation part to the code (so I guess the shadow map allocation is causing all this mess). It happens randomly, but mostly within 10 minutes (sometimes within a few seconds), and it only happens on the real iPhone device, not on the virtual device. I backtracked the problem by removing irrelevant code line by line, and now my code is really simple, but it's still crashing (I mean, freezing). Could anybody please download my Xcode project linked below and see what on earth is wrong? The code is really simple: http://www.tempfiles.net/download/201004/95922/CrashTest.html I would really appreciate it if someone could help me. My iPhone is a 3GS running OS version 3.1. Again, run the code and it'll take about 5 minutes on average for the device to freeze and reboot. (Don't worry, it does no harm.) It'll just display a cyan screen before it freezes, but you'll be able to notice when it happens because the device will reboot soon after, so please be patient. Just in case you can't reproduce the problem, please let me know. (That could mean something's wrong with my device specifically.) Observation: the problem goes away when I change the size of the shadow map to 512*512 (but with the diffuse map still 512*1024). I'm desperate for help, thanks in advance! For the benefit of people who can't download the link, here is the OpenGL code: #import "GLView.h" #import <OpenGLES/ES2/glext.h> #import <QuartzCore/QuartzCore.h> @implementation GLView + (Class)layerClass { return [CAEAGLLayer class]; } - (id)initWithCoder: (NSCoder*)coder { if ((self = [super initWithCoder:coder])) { CAEAGLLayer* layer = (CAEAGLLayer*)self.layer; layer.opaque = YES; layer.drawableProperties = [NSDictionary dictionaryWithObjectsAndKeys: [NSNumber numberWithBool: NO], kEAGLDrawablePropertyRetainedBacking, kEAGLColorFormatRGBA8, kEAGLDrawablePropertyColorFormat, nil]; displayLink_ = nil; context_ = [[EAGLContext alloc] initWithAPI: kEAGLRenderingAPIOpenGLES2]; if (!context_ || ![EAGLContext setCurrentContext: context_]) { [self release]; return nil; } glGenFramebuffers(1, &framebuffer_); glBindFramebuffer(GL_FRAMEBUFFER, framebuffer_); glViewport(0, 0, self.bounds.size.width, self.bounds.size.height); glGenRenderbuffers(1, &defaultColorBuffer_); glBindRenderbuffer(GL_RENDERBUFFER, defaultColorBuffer_); [context_ renderbufferStorage: GL_RENDERBUFFER fromDrawable: layer]; glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, defaultColorBuffer_); glGenTextures(1, &shadowColorBuffer_); glActiveTexture(GL_TEXTURE1); glBindTexture(GL_TEXTURE_2D, shadowColorBuffer_); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE); glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 512, 1024, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL); glGenTextures(1, &texture_); glActiveTexture(GL_TEXTURE0); glBindTexture(GL_TEXTURE_2D, texture_); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D,
GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE); glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 512, 1024, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL); } return self; } - (void)startAnimation { displayLink_ = [CADisplayLink displayLinkWithTarget: self selector: @selector(drawView:)]; [displayLink_ setFrameInterval: 1]; [displayLink_ addToRunLoop: [NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode]; } - (void)useDefaultBuffers { glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, defaultColorBuffer_); glClearColor(0.0, 0.8, 0.8, 1); glClear(GL_COLOR_BUFFER_BIT); } - (void)useShadowBuffers { glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, shadowColorBuffer_, 0); glClearColor(0, 0, 0, 0); glClear(GL_COLOR_BUFFER_BIT); } - (void)drawView: (id)sender { NSTimeInterval startTime = [NSDate timeIntervalSinceReferenceDate]; [EAGLContext setCurrentContext: context_]; [self useShadowBuffers]; [self useDefaultBuffers]; glBindRenderbuffer(GL_RENDERBUFFER, defaultColorBuffer_); [context_ presentRenderbuffer: GL_RENDERBUFFER]; NSTimeInterval endTime = [NSDate timeIntervalSinceReferenceDate]; NSLog(@"FPS : %.1f", 1 / (endTime - startTime)); } - (void)stopAnimation { [displayLink_ invalidate]; displayLink_ = nil; } - (void)dealloc { if (framebuffer_) glDeleteFramebuffers(1, &framebuffer_); if (defaultColorBuffer_) glDeleteRenderbuffers(1, &defaultColorBuffer_); if (shadowColorBuffer_) glDeleteTextures(1, &shadowColorBuffer_); glDeleteTextures(1, &texture_); if ([EAGLContext currentContext] == context_) [EAGLContext setCurrentContext: nil]; [context_ release]; context_ = nil; [super dealloc]; } @end


  • Java2D OpenGL Hardware Acceleration Doesn't Work

    - by Aaron
    Java2D's OpenGL hardware acceleration doesn't work for me, even with the simplest of programs. Here is what I am doing: java -Dsun.java2d.opengl=True -jar Java2Demo.jar (Java2Demo.jar is usually included with the JDK.) The text output is: OpenGL pipeline enabled for default config on screen 0. When I don't pass in the above VM argument, things work fine (but slowly). When I do pass in the above argument, nothing shows up... If I move the window around, it captures whatever image it was on top of and jumbles it into nonsense. I'm running Windows XP Pro SP3 (Microsoft Windows XP [Version 5.1.2600]) (under Parallels on OS X 10.5.8). I used "Geeks3D GPU Caps Viewer" to tell me I have OpenGL version: 2.0 NVIDIA-1.5.48. I have tried this with two versions of the JVM. First: java version "1.6.0_13" Java(TM) SE Runtime Environment (build 1.6.0_13-b03) Java HotSpot(TM) Client VM (build 11.3-b02, mixed mode) and second: java version "1.6.0_20" Java(TM) SE Runtime Environment (build 1.6.0_20-b02) Java HotSpot(TM) Client VM (build 16.3-b01, mixed mode, sharing)


  • How to programmatically alpha fade a textured object in OpenGL ES 1.1 (iPhone)

    - by PewterSoft
    I've been using OpenGL ES 1.1 on the iPhone for 10 months, and in that time there is one seemingly simple task I have been unable to do: programmatically fade a textured object. To keep it simple: how can I alpha fade, under code control, a simple 2D triangle that has a texture (with alpha) applied to it. I would like to fade it in/out while it is over a scene, not a simple colored background. So far the only technique I have to do this is to create a texture with multiple pre-faded copies of the texture on it. (Yuck) As an example, I am unable to do this using Apple's GLSprite sample code as a starting point. It already textures a quad with a texture that has its own alpha. I would like to fade that object in and out.
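
    A sketch of the usual fixed-function approach: with the texture environment set to GL_MODULATE, the current color scales each texel, so the fade becomes a single glColor4f. For straight-alpha textures, scaling only alpha works; since Xcode premultiplies PNG alpha, the premultiplied variant (scale RGB too, blend with GL_ONE) is often the correct pairing. drawTexturedQuad() is a placeholder:

        glEnable(GL_BLEND);
        glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);

        // Straight (non-premultiplied) alpha:
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        glColor4f(1.0f, 1.0f, 1.0f, fade);           // fade in [0, 1]

        // Premultiplied alpha (what Xcode does to PNGs) would instead be:
        // glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
        // glColor4f(fade, fade, fade, fade);

        drawTexturedQuad();                           // hypothetical draw call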


  • Loading textures in an Android OpenGL ES App.

    - by Omega
    I was wondering if anyone could advise on a good pattern for loading textures in an Android Java & OpenGL ES app. My first concern is determining how many texture names to allocate and how I can efficiently go about doing this prior to rendering my vertices. My second concern is in loading the textures; I have to infer the texture to be loaded based on my game data. This means I'll be playing around with strings, which I understand is something I really shouldn't be doing in my GL thread. Overall I understand what's happening when loading textures; I just want to get the best lifecycle out of it. Are there any other things I should be considering?
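
    A sketch of one common pattern, written in C-style code for brevity (the GL10 calls in Java map one-to-one onto these): grab all texture names in a single glGenTextures up front, and do the string work and bitmap decoding ahead of time, off the GL thread, so the GL thread only uploads. The parameters stand in for data prepared elsewhere:

        #include <GLES/gl.h>

        // 'pixels', 'widths' and 'heights' are hypothetical: image data decoded
        // before this runs, away from the GL thread.
        void createTextures(int count, GLuint* names,
                            const int* widths, const int* heights,
                            const void* const* pixels)
        {
            glGenTextures(count, names);          // names are cheap; grab them all
            for (int i = 0; i < count; ++i) {
                glBindTexture(GL_TEXTURE_2D, names[i]);
                glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
                glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
                glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, widths[i], heights[i], 0,
                             GL_RGBA, GL_UNSIGNED_BYTE, pixels[i]);
            }
        }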


  • Another question about OpenGL ES rendering to texture

    - by ensoreus
    Hello, pros and gurus! Here is another question about rendering to texture. It's all about saving the texture state while passing an image through different filters. iPhone developers may know Apple's sample code with OpenGL processing, where they use GL filters (functions) but always pass them the same source image. I need to edit an image by passing it through filters sequentially, saving the state of the edited image along the way. I'm very much a noob in OpenGL, so I spent an incredible amount of time trying to solve this issue. So, I decided to create two FBOs and attach the source image and a temporary image as textures to render into. Here is my init routine: glEnableClientState(GL_VERTEX_ARRAY); glEnableClientState(GL_TEXTURE_COORD_ARRAY); glEnable(GL_TEXTURE_2D); glPixelStorei(GL_UNPACK_ALIGNMENT, 1); glGetIntegerv(GL_FRAMEBUFFER_BINDING_OES, (GLint *)&SystemFBO); glImage = [self loadTexture:preparedImage]; //source image for (int i = 0; i < 4; i++) { fullquad[i].s *= glImage->s; fullquad[i].t *= glImage->t; flipquad[i].s *= glImage->s; flipquad[i].t *= glImage->t; } tmpImage = [self loadEmptyTexture]; //editing image glGenFramebuffersOES(1, &tmpImageFBO); glBindFramebufferOES(GL_FRAMEBUFFER_OES, tmpImageFBO); glFramebufferTexture2DOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_TEXTURE_2D, tmpImage->texID, 0); GLenum status = glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES); if(status != GL_FRAMEBUFFER_COMPLETE_OES) { NSLog(@"failed to make complete tmp framebuffer object %x", status); } glBindTexture(GL_TEXTURE_2D, 0); glBindFramebufferOES(GL_FRAMEBUFFER_OES, 0); glGenRenderbuffersOES(1, &glImageFBO); glBindFramebufferOES(GL_FRAMEBUFFER_OES, glImageFBO); glFramebufferTexture2DOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_TEXTURE_2D, glImage->texID, 0); status = glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES); if(status != GL_FRAMEBUFFER_COMPLETE_OES) { NSLog(@"failed to make complete cur framebuffer object %x", status); } glBindTexture(GL_TEXTURE_2D, 0); glBindFramebufferOES(GL_FRAMEBUFFER_OES, 0); When the user drags the slider, this routine is invoked to apply the changes: -(void)setContrast:(CGFloat)value{ contrast = value; if(flag!=mfContrast){ NSLog(@"contrast: dumped"); flag = mfContrast; glBindFramebufferOES(GL_FRAMEBUFFER_OES, glImageFBO); glClearColor(1,1,1,1); glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT); glMatrixMode(GL_PROJECTION); glLoadIdentity(); glOrthof(0, 512, 0, 512, -1, 1); glMatrixMode(GL_MODELVIEW); glLoadIdentity(); glScalef(512, 512, 1); glBindTexture(GL_TEXTURE_2D, tmpImage->texID); glViewport(0, 0, 512, 512); glVertexPointer(2, GL_FLOAT, sizeof(V2fT2f), &fullquad[0].x); glTexCoordPointer(2, GL_FLOAT, sizeof(V2fT2f), &fullquad[0].s); glDrawArrays(GL_TRIANGLE_STRIP, 0, 4); glBindFramebufferOES(GL_FRAMEBUFFER_OES, 0); } glBindFramebufferOES(GL_FRAMEBUFFER_OES,tmpImageFBO); glClearColor(0,0,0,1); glClear(GL_COLOR_BUFFER_BIT); glEnable(GL_TEXTURE_2D); glActiveTexture(GL_TEXTURE0); glMatrixMode(GL_PROJECTION); glLoadIdentity(); glOrthof(0, 512, 0, 512, -1, 1); glMatrixMode(GL_MODELVIEW); glLoadIdentity(); glScalef(512, 512, 1); glBindTexture(GL_TEXTURE_2D, glImage->texID); glViewport(0, 0, 512, 512); [self contrastProc:fullquad value:contrast]; glBindFramebufferOES(GL_FRAMEBUFFER_OES, 0); [self redraw]; } Here are the two cases: if the same filter is being used (edit mode), I bind tmpImageFBO to draw into the tmpImage texture and edit the glImage texture. contrastProc is a routine straight from Apple's sample.
    If it is another mode, then I save the edited image by drawing the tmpImage texture into the source texture glImage, bound via glImageFBO. After that I call redraw: glBindFramebufferOES(GL_FRAMEBUFFER_OES, SystemFBO); glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glMatrixMode(GL_PROJECTION); glLoadIdentity(); glOrthof(0, kTexWidth, 0, kTexHeight, -1, 1); glMatrixMode(GL_MODELVIEW); glLoadIdentity(); glScalef(kTexWidth, kTexHeight, 1); glBindTexture(GL_TEXTURE_2D, glImage->texID); glViewport(0, 0, kTexWidth, kTexHeight); glVertexPointer(2, GL_FLOAT, sizeof(V2fT2f), &flipquad[0].x); glTexCoordPointer(2, GL_FLOAT, sizeof(V2fT2f), &flipquad[0].s); glDrawArrays(GL_TRIANGLE_STRIP, 0, 4); glBindFramebufferOES(GL_FRAMEBUFFER_OES, 0); And here it binds the visible framebuffer and draws the glImage texture. So, the result is VERY aggressive filtering. Increasing the contrast value by just 0.2 brings the image to a state comparable with a 0.9 contrast value in Apple's sample code project. I'm missing something obvious, I guess. Interestingly, if I disable the line glBindTexture(GL_TEXTURE_2D, glImage->texID); in the setContrast routine, it has no effect at all. If I replace tmpImageFBO with SystemFBO to draw glImage directly to the display (and disable the line invoking redraw), everything works fine. Please, HELP ME!!! :(


  • Concept: Mapping irregular shapes (cartoons, sprites) to triangles in OpenGL ES

    - by Moshe
    I understand how mapping a triangle texture to a triangle works, but how do you map other things? I can't see myself mapping a circle onto a triangle. If it were a quad (square), I could see it happening, but why would a graphic not get warped on a triangle? EDIT: Bonus question: What are some good OpenGL ES tutorials online? Videos and articles count. (I've seen the Stanford University stuff on iTunes U and think it's excellent, but I want more.)


  • Using a different array for vertices and normals in glDrawElements (OpenGL/VBOs)

    - by Tuxer
    I'm currently programming a .obj loader in OpenGL. I store the vertex data in a VBO, then bind it using vertex attribs. Same for normals. Thing is, the normal data and vertex data aren't stored in the same order. The indices I give to glDrawElements to render the mesh are used, I suppose, by OpenGL to get vertices from the vertex VBO and to get normals from the normals VBO. Is there an OpenGL way, besides using glBegin/glVertex/glNormal/glEnd, to tell glDrawElements to use one index for vertices and another index for normals? Thanks
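
    For context, glDrawElements uses one index per vertex for all attributes, so an OBJ's separate v/vn indices have to be welded into combined vertices before upload. A minimal sketch, assuming triangulated faces (the Key struct and names are illustrative):

        #include <cstdint>
        #include <map>
        #include <vector>

        struct Key {
            uint32_t pos, norm;                       // one v/vn pair per face corner
            bool operator<(const Key& o) const {
                return pos != o.pos ? pos < o.pos : norm < o.norm;
            }
        };

        void weld(const std::vector<float>& objPositions,   // 3 floats per position
                  const std::vector<float>& objNormals,     // 3 floats per normal
                  const std::vector<Key>&   corners,        // face corners, in order
                  std::vector<float>&       outInterleaved, // pos+normal, 6 floats each
                  std::vector<uint32_t>&    outIndices)     // feed to glDrawElements
        {
            std::map<Key, uint32_t> cache;   // reuse identical pos/normal pairs
            for (const Key& k : corners) {
                auto it = cache.find(k);
                if (it == cache.end()) {
                    uint32_t idx = (uint32_t)(outInterleaved.size() / 6);
                    for (int i = 0; i < 3; ++i)
                        outInterleaved.push_back(objPositions[k.pos * 3 + i]);
                    for (int i = 0; i < 3; ++i)
                        outInterleaved.push_back(objNormals[k.norm * 3 + i]);
                    it = cache.emplace(k, idx).first;
                }
                outIndices.push_back(it->second);
            }
        }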


  • OpenGL "out of memory" on glReadPixels()

    - by spurserh
    Hello, I am running into an "out of memory" error from OpenGL on glReadPixels() under low-memory conditions. I am writing a plug-in to a program that has a robust heap mechanism for such situations, but I have no idea whether or how OpenGL could be made to use it for application memory management. The notion that this is even possible came to my attention through this [albeit dated] thread on a similar issue under Mac OS X: http://lists.apple.com/archives/Mac-opengl/2001/Sep/msg00042.html I am using Windows XP, and have seen it on multiple NVidia cards. I am also interested in any work-arounds I might be able to relay to users (the thread mentions "increasing virtual memory"). Thanks, Sean
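
    One workaround sketch: read the framebuffer in horizontal strips so each glReadPixels call only needs a small client-side buffer, handing each strip to the host application's heap as it arrives. This helps if the failure is the client-side allocation; it may not if the driver itself is out of memory. The consume callback is hypothetical:

        #include <GL/gl.h>
        #include <algorithm>
        #include <vector>

        void readFramebufferInStrips(int width, int height,
                                     void (*consume)(const unsigned char*, int, int))
        {
            const int stripRows = 64;
            std::vector<unsigned char> strip((size_t)width * stripRows * 4);
            for (int y = 0; y < height; y += stripRows) {
                const int rows = std::min(stripRows, height - y);
                glReadPixels(0, y, width, rows, GL_RGBA, GL_UNSIGNED_BYTE, strip.data());
                consume(strip.data(), y, rows);   // hand off to the app's own heap
            }
        }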


  • GLUT multiple Windows and OpenGL context

    - by user3511595
    I would like to use the GLUT 3.7 window toolkit in a program written in PHP, because I saw that there is a PHP binding for it. I am interested in having multiple windows! In order to keep the code clean, I was wondering about separating the window toolkit on the one hand and the OpenGL implementation on the other. I hope that I can program events with the GLUT callbacks in PHP, so the application gets the events and can interact! But I don't know how to draw in each window from PHP with OpenGL. The man page says that each GLUT window has an OpenGL context. How do I get each context? Can I render offscreen with each context?
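
    For reference, the underlying C API handles multiple windows like this sketch (the PHP binding presumably mirrors these calls): each glutCreateWindow() returns an id with its own OpenGL context, glutSetWindow() selects the current one, and callbacks are registered per window, so GLUT routes events and rendering to the right context itself:

        #include <GL/glut.h>

        static int winA, winB;

        static void displayA() {
            glClearColor(1, 0, 0, 1);      // window A's context is already current
            glClear(GL_COLOR_BUFFER_BIT);
            glutSwapBuffers();
        }
        static void displayB() {
            glClearColor(0, 0, 1, 1);
            glClear(GL_COLOR_BUFFER_BIT);
            glutSwapBuffers();
        }

        int main(int argc, char** argv) {
            glutInit(&argc, argv);
            glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA);
            winA = glutCreateWindow("window A");
            glutDisplayFunc(displayA);     // callbacks bind to the current window
            winB = glutCreateWindow("window B");
            glutDisplayFunc(displayB);
            // glutSetWindow(winA); would make A current again for direct GL calls
            glutMainLoop();
        }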


  • OpenGL ES transparent fog in Android

    - by Sponge
    I was wondering why the fog I use in OpenGL ES on my Android phone isn't transparent when I set the color's alpha to 0. I set the background to transparent and it works fine, and the Color class and the toFloatBuffer() method are working fine for my meshes, but when I set the fog color to transparent, that fact is ignored. Here is the basic code I use for fog in the onSurfaceCreated() method of my renderer: gl.glFogf(GL10.GL_FOG_MODE, GL10.GL_LINEAR); gl.glFogf(GL10.GL_FOG_START, 4.0f); gl.glFogf(GL10.GL_FOG_END, 10.0f); gl.glFogfv(GL10.GL_FOG_COLOR, new Color(0,0,0,0).toFloatBuffer()); gl.glEnable(GL10.GL_FOG);

