Search Results

Search found 2513 results on 101 pages for 'opengl'.

Page 72/101 | < Previous Page | 68 69 70 71 72 73 74 75 76 77 78 79  | Next Page >

  • When to call glEnable(GL_FRAMEBUFFER_SRGB)?

    - by Steven Lu
    I have a rendering system where I draw to an FBO with a multisampled renderbuffer, then blit it to a second FBO with a texture attachment to resolve the samples, and read from that texture to do post-processing shading while drawing to the backbuffer (FBO index 0). Now I'd like to get correct sRGB output. The problem is that the program's behavior is inconsistent between OS X and Windows, and it also changes with the machine: on Windows with an Intel HD 3000 it does not apply the sRGB nonlinearity, but on my other machine with an Nvidia GTX 670 it does, and on the Intel HD 3000 under OS X it applies it as well. This probably means I'm not setting my GL_FRAMEBUFFER_SRGB enable state at the right points in the program. However, I can't find any tutorials that actually say when it ought to be enabled; they only mention that it's dead easy and comes at no performance cost. I am currently not loading any textures, so I haven't had to deal with linearizing their colors yet. To force the program not to simply spit back out linear color values, what I have tried is to comment out my glDisable(GL_FRAMEBUFFER_SRGB) line, which effectively leaves the setting enabled for the entire pipeline (and I redundantly force it back on every frame). I don't know if this is correct or not. It certainly does apply a nonlinearization to the colors, but I can't tell whether it is being applied twice (which would be bad): it could apply the gamma as I render to my first FBO, and again when I blit the first FBO to the second. I've gone so far as to take screenshots of my final frame and compare raw pixel values to the colors I set in the program: an input of RGB(1,2,3) comes out as RGB(13,22,28). That is a lot of expansion at the low end and made me question whether the gamma was being applied multiple times, but working through the sRGB equation I can verify that the conversion is only applied once: linear 1/255, 2/255, and 3/255 do indeed map to sRGB 13/255, 22/255, and 28/255 using the equation 1.055*C^(1/2.4) - 0.055. Given how large the expansion is for these low values, it really should be obvious if the sRGB transform were applied more than once. So I still haven't determined what the right thing to do is. Does glEnable(GL_FRAMEBUFFER_SRGB) only apply to the final (default) framebuffer, in which case I can just set it once in my GL init routine and forget about it hereafter?
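
    A minimal sketch of one common arrangement, as an illustration rather than a verdict on the inconsistency above: keep the intermediate FBOs in plain linear formats and enable GL_FRAMEBUFFER_SRGB only around the final pass into the default framebuffer (the enable only affects sRGB-capable color targets). The FBO handles and draw helpers below are hypothetical stand-ins for the poster's own objects.

        // Assumes a loader (e.g. GLEW) provides the GL 3.0+ entry points; the handles
        // and helpers below stand in for the poster's own engine objects.
        #include <GL/glew.h>

        extern GLuint msaaFbo;     // FBO with the multisampled renderbuffer
        extern GLuint resolveFbo;  // FBO with the non-multisampled texture attachment
        extern int width, height;
        void drawScene();
        void drawPostProcessFullscreenQuad(); // samples resolveFbo's color texture

        void renderFrame()
        {
            // 1. Scene pass into the multisampled FBO (values stay linear).
            glBindFramebuffer(GL_FRAMEBUFFER, msaaFbo);
            drawScene();

            // 2. Resolve the samples into the texture FBO (still linear).
            glBindFramebuffer(GL_READ_FRAMEBUFFER, msaaFbo);
            glBindFramebuffer(GL_DRAW_FRAMEBUFFER, resolveFbo);
            glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                              GL_COLOR_BUFFER_BIT, GL_NEAREST);

            // 3. Post-process into the default framebuffer with sRGB encoding
            //    enabled only for this final pass.
            glBindFramebuffer(GL_FRAMEBUFFER, 0);
            glEnable(GL_FRAMEBUFFER_SRGB);
            drawPostProcessFullscreenQuad();
            glDisable(GL_FRAMEBUFFER_SRGB);
        }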

    Read the article

  • How to get right values from Views touch event

    - by Pinker
    I have a problem with implementing touch events on a GLSurfaceView. The view's size is 1280x696 because of the Android (tablet) status bar at the bottom with the soft keys, clock, etc. (the screen resolution is 1280x800), but the OnTouchListener is receiving touch events with coords like [646.0, 739.0], and so my gluUnProject call fails to return correct values. Is there any way to receive events that respect these boundaries, or how should I recalculate the position?

    Read the article

  • glGenBuffers not defined?

    - by user146780
    I'm using Windows and I notice that a lot of functions are grayed out, I guess because GL_GLEXT_PROTOTYPES is not defined. One of these is the VBO extension. Should I just define GL_GLEXT_PROTOTYPES? Otherwise, how else can I use VBOs, since I'm using OpenGL32.dll? (I want my application to have no DLL dependencies not included in Windows by default.) Thanks
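
    A minimal sketch of the usual approach when linking only against opengl32.dll (extension loaders such as GLEW or GLAD do exactly this for you): opengl32.dll exports only OpenGL 1.1, so anything newer, including glGenBuffers, has to be fetched at runtime with wglGetProcAddress while a context is current. LoadVboFunctions is a hypothetical name.

        // windows.h must come before the GL headers; glext.h (from khronos.org)
        // declares the PFNGL... typedefs. Link against opengl32.lib as usual.
        #include <windows.h>
        #include <GL/gl.h>
        #include <GL/glext.h>

        PFNGLGENBUFFERSPROC pglGenBuffers = NULL;
        PFNGLBINDBUFFERPROC pglBindBuffer = NULL;
        PFNGLBUFFERDATAPROC pglBufferData = NULL;

        // Call once after wglMakeCurrent has succeeded; the returned pointers are
        // only guaranteed valid for contexts with the same pixel format.
        bool LoadVboFunctions()
        {
            pglGenBuffers = (PFNGLGENBUFFERSPROC)wglGetProcAddress("glGenBuffers");
            pglBindBuffer = (PFNGLBINDBUFFERPROC)wglGetProcAddress("glBindBuffer");
            pglBufferData = (PFNGLBUFFERDATAPROC)wglGetProcAddress("glBufferData");
            return pglGenBuffers && pglBindBuffer && pglBufferData;
        }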

    Read the article

  • lwjgl 101: How can I use VBOs?

    - by Vuntic
    How can I draw anything in lwjgl using VBOs? When I follow the tutorial, it just breaks. I've also tried running this example (with the byteorder fix) but it just displays a blank window. SO hasn't been helpful to me yet, but this is the last place I can think of that might have an answer...

    Read the article

  • Set Renderbuffer Width and Height (Open GL ES)

    - by Josh Elsasser
    I'm currently experiencing an issue with an OpenGL ES renderbuffer where the backing width and height are both set to 15. Is there any way to set them to the expected 320 and 480? My project is built on Apple's EAGLView class and ES1Renderer, but I've moved it from the app delegate to a controller. I also moved the CADisplayLink outside of it (I update my game logic with the timestamp from this). Any help would be greatly appreciated.

    I add the glview to the window as follows:

        CGRect applicationFrame = [[UIScreen mainScreen] applicationFrame];
        [window addSubview:gameController.glview];
        [window makeKeyAndVisible];

    I synthesize the controller and the glview within it. The EAGLView and Renderer are otherwise unmodified.

    Renderer initialization:

        // Get the layer
        CAEAGLLayer *eaglLayer = (CAEAGLLayer *)self.layer;
        eaglLayer.opaque = TRUE;
        eaglLayer.drawableProperties = [NSDictionary dictionaryWithObjectsAndKeys:
            [NSNumber numberWithBool:FALSE], kEAGLDrawablePropertyRetainedBacking,
            kEAGLColorFormatRGBA8, kEAGLDrawablePropertyColorFormat, nil];
        renderer = [[ES1Renderer alloc] init];

    Renderer "resize from layer" method:

        - (BOOL)resizeFromLayer:(CAEAGLLayer *)layer
        {
            // Allocate color buffer backing based on the current layer size
            glBindRenderbufferOES(GL_RENDERBUFFER_OES, colorRenderbuffer);
            [context renderbufferStorage:GL_RENDERBUFFER_OES fromDrawable:layer];
            glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth);
            glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight);
            NSLog(@"Backing Width:%i and Height: %i", backingWidth, backingHeight);

            if (glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES) != GL_FRAMEBUFFER_COMPLETE_OES)
            {
                NSLog(@"Failed to make complete framebuffer object %x", glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES));
                return NO;
            }
            return YES;
        }

    Read the article

  • From a Maya scene to a WebGL animation, where to start?

    - by Tower
    Hi, I've got some time, and I really would like to learn how to get my Maya animated scenes into WebGL. I'm not sure where to start, really. It would be amazing if I could make a Canvas element and place a Maya scene into it so that it animates. Does anyone have a tutorial or some hints? PS. Answers about 3ds Max are also welcome!

    Read the article

  • my first shader in WebGL

    - by Diego
    Hello, I am writing my first shader in WebGL. I was wondering whether GLSL has any way to evaluate whether an attribute or a uniform is null. According to the specs it does not support something like:

        if (attributeX) {
            dothis();
        } else {
            dothat();
        }

    and I think it would be a waste to write a bool attribute for each of these cases. Another question: what happens during rendering when you don't pass the uniforms or attribs along to the shader? Thanks!

    Read the article

  • Texturing Issue when Overlaying images onto Camera preview SurfaceView

    - by Dervis Suleyman
    I am in the process of making an augmented reality application, and I have successfully overlaid a 3D cube over a camera SurfaceView. The issue now is that when I add a texture, the cube (with the texture on it) flickers and disappears, leaving only the camera preview. Upon further research I discovered that the textured cube was disappearing behind the camera preview. My question is: has anyone else had this issue, and if so, what approach did you take to solve it? Here's another strange thing: when I take a screenshot, the cube is on top of the camera preview.

    Read the article

  • Filling in gaps for outlines

    - by user146780
    I'm using an algorithm to generate quads. These become outlines. The algorithm is:

        void OGLENGINEFUNCTIONS::GenerateLinePoly(const std::vector<std::vector<GLdouble>> &input,
                                                  std::vector<GLfloat> &output, int width)
        {
            output.clear();
            if(input.size() < 2)
            {
                return;
            }

            int temp;
            float dirlen;
            float perplen;
            POINTFLOAT start;
            POINTFLOAT end;
            POINTFLOAT dir;
            POINTFLOAT ndir;
            POINTFLOAT perp;
            POINTFLOAT nperp;
            POINTFLOAT perpoffset;
            POINTFLOAT diroffset;
            POINTFLOAT p0, p1, p2, p3;

            for(unsigned int i = 0; i < input.size() - 1; ++i)
            {
                start.x = static_cast<float>(input[i][0]);
                start.y = static_cast<float>(input[i][1]);
                end.x = static_cast<float>(input[i + 1][0]);
                end.y = static_cast<float>(input[i + 1][1]);

                dir.x = end.x - start.x;
                dir.y = end.y - start.y;
                dirlen = sqrt((dir.x * dir.x) + (dir.y * dir.y));
                ndir.x = static_cast<float>(dir.x * 1.0 / dirlen);
                ndir.y = static_cast<float>(dir.y * 1.0 / dirlen);

                perp.x = dir.y;
                perp.y = -dir.x;
                perplen = sqrt((perp.x * perp.x) + (perp.y * perp.y));
                nperp.x = static_cast<float>(perp.x * 1.0 / perplen);
                nperp.y = static_cast<float>(perp.y * 1.0 / perplen);

                perpoffset.x = static_cast<float>(nperp.x * width * 0.5);
                perpoffset.y = static_cast<float>(nperp.y * width * 0.5);
                diroffset.x = static_cast<float>(ndir.x * 0 * 0.5);
                diroffset.y = static_cast<float>(ndir.y * 0 * 0.5);

                // p0 = start + perpoffset - diroffset
                // p1 = start - perpoffset - diroffset
                // p2 = end + perpoffset + diroffset
                // p3 = end - perpoffset + diroffset
                p0.x = start.x + perpoffset.x - diroffset.x;
                p0.y = start.y + perpoffset.y - diroffset.y;
                p1.x = start.x - perpoffset.x - diroffset.x;
                p1.y = start.y - perpoffset.y - diroffset.y;
                p2.x = end.x + perpoffset.x + diroffset.x;
                p2.y = end.y + perpoffset.y + diroffset.y;
                p3.x = end.x - perpoffset.x + diroffset.x;
                p3.y = end.y - perpoffset.y + diroffset.y;

                output.push_back(p2.x);
                output.push_back(p2.y);
                output.push_back(p0.x);
                output.push_back(p0.y);
                output.push_back(p1.x);
                output.push_back(p1.y);
                output.push_back(p3.x);
                output.push_back(p3.y);
            }
        }

    The problem is that there are then gaps, as seen here: http://img816.imageshack.us/img816/2882/eeekkk.png There must be a way to fix this. I see a pattern but I just can't figure out how to fill the missing in-betweens. Thanks
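
    One way to close those gaps (a sketch of a bevel join, not the poster's code) is to emit two extra triangles around each shared point, connecting the end corners of one segment's quad to the start corners of the next. The helper below reuses the p0/p1/p2/p3 naming from GenerateLinePoly above and assumes the joins are drawn as a separate GL_TRIANGLES batch; PointF and EmitBevelJoin are hypothetical names.

        #include <vector>

        struct PointF { float x, y; };

        // Emits two bevel-join triangles around the shared centreline point "corner".
        // prevP2/prevP3 are the end corners of the previous segment's quad, p0/p1 the
        // start corners of the next segment's quad. The triangle on the inside of the
        // turn overlaps the quads slightly, which is harmless for opaque fills.
        void EmitBevelJoin(const PointF &corner,
                           const PointF &prevP2, const PointF &prevP3,
                           const PointF &p0, const PointF &p1,
                           std::vector<float> &joins)
        {
            // Triangle on the +perp side of the line
            joins.push_back(corner.x); joins.push_back(corner.y);
            joins.push_back(prevP2.x); joins.push_back(prevP2.y);
            joins.push_back(p0.x);     joins.push_back(p0.y);

            // Triangle on the -perp side of the line
            joins.push_back(corner.x); joins.push_back(corner.y);
            joins.push_back(prevP3.x); joins.push_back(prevP3.y);
            joins.push_back(p1.x);     joins.push_back(p1.y);
        }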

    Read the article

  • How do games move around objects (in general)

    - by user146780
    I'm sure there's not just one answer to this, but do game engines actually change the vectors in memory, or do they use GL transformations? Pushing and popping the matrix all the time seems inefficient, but if you keep modifying the vertices you can't make use of display lists. So I'm wondering how it's done in general. Thanks
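
    A minimal fixed-function sketch of the usual pattern (an illustration, not a claim about any particular engine): the vertex data stays static in a display list or VBO, each object stores its own transform, and the transform is applied via the matrix stack at draw time. GameObject and DrawObject are hypothetical names.

        #include <GL/gl.h>

        // Each object keeps its own transform; the mesh itself is built once.
        struct GameObject
        {
            float x, y, z;      // position, updated by the game logic each frame
            float angle;        // rotation about the Y axis, in degrees
            GLuint displayList; // static geometry compiled once at load time
        };

        void DrawObject(const GameObject &obj)
        {
            glPushMatrix();
            glTranslatef(obj.x, obj.y, obj.z);
            glRotatef(obj.angle, 0.0f, 1.0f, 0.0f);
            glCallList(obj.displayList); // the vertex data itself is never touched
            glPopMatrix();
        }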

    Read the article

  • Normals per index?

    - by WarrenFaith
    I have a pyramid which has 5 vertices and 18 indices. I want to add normals to each face, but I have only found a solution for per-vertex normals. That means I can't use indices to define my pyramid; I would need 18 vertices (with the same vertex repeated 3 times for the same point in space). There must be a solution that uses normals not per vertex but per index. Some code (JavaScript):

        var vertices = [
            -half, -half,  half, // 0 front left
             half, -half,  half, // 1 front right
             half, -half, -half, // 2 back right
            -half, -half, -half, // 3 back left
            0.0, Math.sqrt((size * size) - (2 * (half * half))) - half, 0.0 // 4 top
        ];

        var vertexNormals = [
            // front face
            normaleFront[0], normaleFront[1], normaleFront[2],
            normaleFront[0], normaleFront[1], normaleFront[2],
            normaleFront[0], normaleFront[1], normaleFront[2],
            // back face
            normaleBack[0], normaleBack[1], normaleBack[2],
            normaleBack[0], normaleBack[1], normaleBack[2],
            normaleBack[0], normaleBack[1], normaleBack[2],
            // left face
            normaleLeft[0], normaleLeft[1], normaleLeft[2],
            normaleLeft[0], normaleLeft[1], normaleLeft[2],
            normaleLeft[0], normaleLeft[1], normaleLeft[2],
            // right face
            normaleRight[0], normaleRight[1], normaleRight[2],
            normaleRight[0], normaleRight[1], normaleRight[2],
            normaleRight[0], normaleRight[1], normaleRight[2],
            // bottom face
            0.0, -1.0, 0.0,
            0.0, -1.0, 0.0,
            0.0, -1.0, 0.0,
            0.0, -1.0, 0.0,
            0.0, -1.0, 0.0,
            0.0, -1.0, 0.0,
        ];

        var pyramidVertexIndices = [
            0, 1, 4, // Front face
            2, 3, 4, // Back face
            3, 0, 4, // Left face
            1, 2, 4, // Right face
            0, 1, 2,
            2, 3, 0, // Bottom face
        ];

    Read the article

  • Rendering splash screen on the iPhone using Open GL ES

    - by Rich
    Hi, I want to render a splash screen on the iPhone while using an OpenGL ES view. The iPhone screen, as we know, is 320x480, which is not a power of 2. Before I enter the world of chopping the texture up and rendering sub-parts, or embedding the screen on another texture page, I was wondering if there is another way. Is it possible to overlay another view that I could render to using Core Graphics functions? Or is it possible to render to an OpenGL surface using Core Graphics functions? What would you recommend? Cheers Rich
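
    A minimal sketch of the texture-padding approach, in case it helps (an assumption on my part, not from the question): allocate a 512x512 power-of-two texture, upload the 320x480 image into its corner with glTexSubImage2D, and restrict the texture coordinates to the used region. CreateSplashTexture and the pixels parameter are hypothetical.

        #include <OpenGLES/ES1/gl.h>

        // "pixels" is assumed to hold the 320x480 RGBA splash image.
        GLuint CreateSplashTexture(const GLubyte *pixels)
        {
            GLuint tex = 0;
            glGenTextures(1, &tex);
            glBindTexture(GL_TEXTURE_2D, tex);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

            // Allocate a 512x512 power-of-two texture, then upload the image into its corner.
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 512, 512, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
            glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 320, 480, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
            return tex;
        }

        // When drawing the full-screen quad, sample only the used region:
        // s in [0, 320/512], t in [0, 480/512].
        static const GLfloat splashTexCoords[] = {
            0.0f,          0.0f,
            320.0f/512.0f, 0.0f,
            0.0f,          480.0f/512.0f,
            320.0f/512.0f, 480.0f/512.0f,
        };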

    Read the article

  • Why is it so hard to find a C++ 3d game tutorial

    - by Dave
    I'm planning on learning 3D game development for the iPhone using a 3D engine, but because of the lack of tutorials for the iPhone I was planning on using C++ game tutorials and making the necessary changes. The problem is that I've had limited success: when searching for things such as 'c++ 3d fps tutorial' I don't really get anything useful. Are there any 3D C++ tutorials you can recommend?

    Read the article

  • Weird stuttering issues not related to GC.

    - by Smills
    I am getting some odd stuttering issues with my game even though my FPS never seems to drop below 30. About every 5 seconds my game stutters. I was originally getting stuttering every 1-2 seconds due to garbage collection issues, but I have sorted those out and will often go 15-20 seconds without a garbage collection. Despite this, my game still stutters periodically, even when there is no GC listed in logcat anywhere near the stutter. Even when I take out most of my code and reduce my "physics" code to the code below, I still get this weird slowdown. I feel that I am missing or overlooking something. Shouldn't the "elapsed" code I put in remove any variance in the main character's speed related to changes in FPS? Any input/theories would be awesome. Physics:

        private void updatePhysics() {
            // get current time
            long now = System.currentTimeMillis();

            // added this to see if I could speed it up, it made no difference
            Thread myThread = Thread.currentThread();
            myThread.setPriority(Thread.MAX_PRIORITY);

            // work out elapsed time since last frame in seconds
            double elapsed = (now - mLastTime2) / 1000.0;
            mLastTime2 = now;

            // measures FPS and displays in logcat once every 30 frames
            fps += 1 / elapsed;
            fpscount += 1;
            if (fpscount == 30) {
                fps = fps / fpscount;
                Log.i("myActivity", "FPS: " + fps + " Touch: " + touch);
                fpscount = 0;
            }

            // this should make the main character (theoretically) move upwards at a steady pace
            mY -= 100 * elapsed;

            // increase amount I translate the draw to = main character's Y
            // location if the main character goes upwards
            if (mY <= viewY) {
                viewY = mY;
            }
        }

    Read the article

  • Simulating brush strokes for painting application

    - by DrRobot
    I'm trying to write an application that can be used to create pictures that look like paintings using simulated brush strokes. Are there any good sources for simple ways of simulating brush strokes? For example, given a list of mouse positions that the user has dragged the mouse through, a brush width and a brush texture, how do I determine what to draw to the canvas? I've tried angling the brush texture in the direction of the mouse movement and dabbing several brush texture images along the path, but it doesn't look great. I think I'm missing something where the brush texture should shrink and grow on corners. Any simple to follow links would be appreciated. I've found complex academic papers on simulating e.g. oil paints but I just want a basic algorithm to use that produces OK results if possible.
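
    A minimal sketch of the dab-spacing idea described above (one common approach, with hypothetical names such as DrawDab and StampStroke): stamp the brush texture at a fixed arc-length spacing along the recorded mouse path, oriented along the segment direction, so the dab density does not depend on how fast the mouse moved.

        #include <cmath>
        #include <vector>

        struct Point { float x, y; };

        // Hypothetical renderer hook: draws the brush texture as a quad centred on p,
        // rotated by "angle" radians and scaled to "width" pixels.
        void DrawDab(const Point &p, float angle, float width);

        void StampStroke(const std::vector<Point> &path, float brushWidth)
        {
            const float spacing = brushWidth * 0.25f; // smaller spacing = smoother stroke
            if (path.size() < 2 || spacing <= 0.0f) return;

            float sinceLast = spacing; // so the very first point receives a dab

            for (std::size_t i = 0; i + 1 < path.size(); ++i)
            {
                float dx = path[i + 1].x - path[i].x;
                float dy = path[i + 1].y - path[i].y;
                float len = std::sqrt(dx * dx + dy * dy);
                if (len <= 0.0f) continue;

                float angle = std::atan2(dy, dx); // orient the dab along the stroke
                float travelled = 0.0f;
                for (;;)
                {
                    float step = spacing - sinceLast; // distance until the next dab
                    if (travelled + step > len)       // next dab lies beyond this segment
                    {
                        sinceLast += len - travelled; // carry the remainder over
                        break;
                    }
                    travelled += step;
                    sinceLast = 0.0f;
                    Point p = { path[i].x + dx * (travelled / len),
                                path[i].y + dy * (travelled / len) };
                    DrawDab(p, angle, brushWidth);
                }
            }
        }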

    Read the article

  • Resizing a GLContext in Windows

    - by user146780
    I have a Win32 window and it has a GL context. I'm trying to figure out how to resize the context, because even if I use glViewport it still stays the same size as when I created the context. Am I misunderstanding glViewport? I'm providing it the bottom-left corner and the size I want, but it's still the original size that I made the context. Thanks
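
    A minimal Win32 sketch, assuming a standard WndProc and an already-current wgl context: the context's default framebuffer follows the window's client area automatically, so what needs updating on resize is the viewport (and usually the projection matrix), typically from the WM_SIZE handler.

        #include <windows.h>
        #include <GL/gl.h>

        LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
        {
            switch (msg)
            {
            case WM_SIZE:
            {
                int width  = LOWORD(lParam);
                int height = HIWORD(lParam);
                if (height == 0) height = 1;     // avoid a zero-height viewport

                glViewport(0, 0, width, height); // map NDC to the full client area

                glMatrixMode(GL_PROJECTION);     // example: a 2D pixel-space projection
                glLoadIdentity();
                glOrtho(0.0, width, height, 0.0, -1.0, 1.0);
                glMatrixMode(GL_MODELVIEW);
                return 0;
            }
            }
            return DefWindowProc(hwnd, msg, wParam, lParam);
        }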

    Read the article

  • gluLookAt alternative doesn't work

    - by Brammie
    Hey guys. I'm trying to calculate a lookat matrix myself, instead of using gluLookAt(). My problem is that my matrix doesn't work; using the same parameters with gluLookAt does work, however. My way of creating a lookat matrix:

        Vector3 Eye, At, Up; // these should be parameters =)

        Vector3 zaxis = At - Eye;
        zaxis.Normalize();

        Vector3 xaxis = Vector3::Cross(Up, zaxis);
        xaxis.Normalize();

        Vector3 yaxis = Vector3::Cross(zaxis, xaxis);
        yaxis.Normalize();

        float r[16] =
        {
            xaxis.x, yaxis.x, zaxis.x, 0,
            xaxis.y, yaxis.y, zaxis.y, 0,
            xaxis.z, yaxis.z, zaxis.z, 0,
            0,       0,       0,       1,
        };
        Matrix Rotation;
        memcpy(Rotation.values, r, sizeof(r));

        float t[16] =
        {
            1,      0,      0,      0,
            0,      1,      0,      0,
            0,      0,      1,      0,
            -Eye.x, -Eye.y, -Eye.z, 1,
        };
        Matrix Translation;
        memcpy(Translation.values, t, sizeof(t));

        View = Rotation * Translation; // i tried reversing this as well (translation*rotation)

    Now, when I try to use this matrix by calling glMultMatrixf, nothing shows up in my engine, while using the same eye, lookat and up values with gluLookAt works perfectly, as I said before.

        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        glMultMatrixf(View);

    The problem must be somewhere in the code I posted here; I know it is not in my Vector3/Matrix classes, because they work fine when creating a projection matrix.
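
    For comparison, a minimal sketch of a gluLookAt-style view matrix (following the construction described in the GLU documentation), written with plain arrays so it does not depend on the Vector3/Matrix classes above: f = normalize(at - eye), s = normalize(f x up), u = s x f, rotation rows (s, u, -f), then a translation by -eye, all stored in the column-major order glMultMatrixf expects. MyLookAt is a hypothetical name.

        #include <cmath>
        #include <GL/gl.h> // on Windows, include <windows.h> before this header

        static void Normalize(float v[3])
        {
            float len = std::sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
            v[0] /= len; v[1] /= len; v[2] /= len;
        }

        static void Cross(const float a[3], const float b[3], float out[3])
        {
            out[0] = a[1]*b[2] - a[2]*b[1];
            out[1] = a[2]*b[0] - a[0]*b[2];
            out[2] = a[0]*b[1] - a[1]*b[0];
        }

        void MyLookAt(const float eye[3], const float at[3], const float up[3])
        {
            float f[3] = { at[0]-eye[0], at[1]-eye[1], at[2]-eye[2] };
            Normalize(f);

            float s[3]; Cross(f, up, s); Normalize(s); // side = f x up
            float u[3]; Cross(s, f, u);                // true up = s x f

            // Column-major storage; rotation rows (s, u, -f), translation by -eye folded in.
            float m[16] = {
                s[0],  u[0], -f[0], 0.0f,
                s[1],  u[1], -f[1], 0.0f,
                s[2],  u[2], -f[2], 0.0f,
                -(s[0]*eye[0] + s[1]*eye[1] + s[2]*eye[2]),
                -(u[0]*eye[0] + u[1]*eye[1] + u[2]*eye[2]),
                 (f[0]*eye[0] + f[1]*eye[1] + f[2]*eye[2]),
                1.0f,
            };
            glMultMatrixf(m); // multiplies onto the current matrix, like gluLookAt
        }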

    Read the article

  • glDrawArrays() slow on iPad?

    - by Nick
    Hey guys, I was wondering how to speed up my iPad application using OpenGL ES 2.0. At the moment we have every drawable object draw itself with a call to glDrawArrays(). Blending is on, and we really need it. Without disabling blending, how could we improve performance for this app? For instance, if we draw one texture across the whole screen, the app only gets 15 FPS, which seems really slow. Are we doing something terribly wrong? Our drawing code (for each drawable) is as follows:

        - (void) draw
        {
            GLuint textureAvailable = 0;
            if(texture != nil){
                textureAvailable = 1;
            }

            glActiveTexture(GL_TEXTURE0);
            glBindTexture(GL_TEXTURE_2D, texture.name);

            glVertexAttribPointer(ATTRIB_VERTEX, 2, GL_FLOAT, 0, 0, vertices);
            glEnableVertexAttribArray(ATTRIB_VERTEX);
            glVertexAttribPointer(ATTRIB_COLOR, 4, GL_FLOAT, 1, 0, colorsWithMultipliedAlpha);
            glEnableVertexAttribArray(ATTRIB_COLOR);
            glVertexAttribPointer(ATTRIB_TEXTUREMAP, 2, GL_FLOAT, 1, 0, textureMapping);
            glEnableVertexAttribArray(ATTRIB_TEXTUREMAP);

            // Note that we are NOT using position.z here because that is only used to determine drawing order
            int *jnUniforms = JNOpenGLConstants::getInstance().uniforms;
            glUniform4f(jnUniforms[UNIFORM_TRANSLATE], position.x, position.y, 0.0, 0.0);
            glUniform4f(jnUniforms[UNIFORM_SCALE], scale.x, scale.y, 1.0, 1.0);
            glUniform1f(jnUniforms[UNIFORM_ROTATION], rotation);
            glUniform1i(jnUniforms[UNIFORM_TEXTURE_SAMPLE], 0);
            glUniform2f(jnUniforms[UNIFORM_TEXTURE_REPEAT], textureRepeat.x, textureRepeat.y);
            glUniform1i(jnUniforms[UNIFORM_TEXTURE_AVAILABLE], textureAvailable);

            glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
        }
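
    A minimal sketch of one thing worth trying (an assumption on my part, not a diagnosis of the 15 FPS figure): upload the quad into a VBO once at load time and bind it at draw time, instead of passing client-side pointers through glVertexAttribPointer on every call. The 4-vertex, 2-component layout matches the code above; createQuadVBO and drawQuad are hypothetical names, and the attribute index is passed in so it can be the same ATTRIB_VERTEX used there.

        #include <OpenGLES/ES2/gl.h>

        GLuint quadVBO = 0;

        // Called once at startup with the same 4-vertex, 2-component position array.
        void createQuadVBO(const GLfloat *vertices)
        {
            glGenBuffers(1, &quadVBO);
            glBindBuffer(GL_ARRAY_BUFFER, quadVBO);
            glBufferData(GL_ARRAY_BUFFER, 4 * 2 * sizeof(GLfloat), vertices, GL_STATIC_DRAW);
        }

        // Called per draw; attribVertex is the same attribute index as ATTRIB_VERTEX above.
        void drawQuad(GLuint attribVertex)
        {
            glBindBuffer(GL_ARRAY_BUFFER, quadVBO);
            glVertexAttribPointer(attribVertex, 2, GL_FLOAT, GL_FALSE, 0, (const void *)0);
            glEnableVertexAttribArray(attribVertex);
            glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
        }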

    Read the article

  • GLSL: How to get pixel x,y,z world position?

    - by Rookie
    I want to adjust the colors depending on which x, y, z position they are at in the world. I tried this in my fragment shader:

        vec4 pos = vec4(gl_FragCoord); // get pixel position

    but it seems that the z coordinate always points towards my camera. How do I make the coords independent of my camera position/angle?

    Edit: if it matters, here's my vertex shader:

        gl_Position = ftransform();

    Edit 2: changed the title; I want world coords, not screen coords!

    Read the article
