Search Results

Search found 2513 results on 101 pages for 'opengl'.

  • Using OpenGL without the X Window System

    - by user366250
    Hello everybody, I am a newbie to Linux programming (not Windows). I want to know how I can use OpenGL on Linux without the X Window System. Can I send OpenGL graphics directly to the framebuffer device? There is a project named DirectFB (Direct FrameBuffer) that can do this, but DirectFB needs a driver for each piece of hardware, and I want to use a graphics card that only has a plain Linux driver. What would you suggest? Thank you.
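
    For reference, a minimal sketch of the raw fbdev route, which writes pixels straight to /dev/fb0 (this is not accelerated OpenGL; for hardware GL without X, one common route is EGL on top of the kernel's DRM interface, but that is a larger topic). The sketch assumes a 32-bpp framebuffer and trims most error handling:

        #include <fcntl.h>
        #include <linux/fb.h>
        #include <sys/ioctl.h>
        #include <sys/mman.h>
        #include <unistd.h>
        #include <cstdint>
        #include <cstdio>

        int main() {
            int fd = open("/dev/fb0", O_RDWR);
            if (fd < 0) { perror("open /dev/fb0"); return 1; }

            fb_var_screeninfo vinfo;
            ioctl(fd, FBIOGET_VSCREENINFO, &vinfo);        // query resolution and depth
            size_t size = vinfo.yres * vinfo.xres * vinfo.bits_per_pixel / 8;

            uint32_t* fb = static_cast<uint32_t*>(
                mmap(nullptr, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0));

            // Fill the screen with a solid color. Pixels rendered offscreen by a
            // GL implementation could be copied here the same way.
            for (uint32_t y = 0; y < vinfo.yres; ++y)
                for (uint32_t x = 0; x < vinfo.xres; ++x)
                    fb[y * vinfo.xres + x] = 0x00336699;

            munmap(fb, size);
            close(fd);
            return 0;
        }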

  • How to set an image captured with OpenCV as the background in OpenGL

    - by user325487
    Hey all, I'm relatively new to ARToolKitPlus and OpenGL, and I'm having a tough time getting the image I capture through OpenCV to be set as the background image in OpenGL. I also cannot get the image I take through the camera with OpenCV to be scaled to 320x280 from 640x480, and I currently have to save my image and load it back for things to work. Here's my code:

        int findMarker() {
            IplImage* image = cvQueryFrame(capture);
            if (!capture) {
                fprintf(stderr, "ERROR: capture is NULL \n");
                getchar();
                return -1;
            }
            if (!image) {
                fprintf(stderr, "ERROR: frame is null...\n");
                getchar();
            }
            //cvShowImage("Capture", frame);
            //image = cvCloneImage(frame);
            try {
                if (!cvSaveImage("immagineTmp.jpg", image))
                    printf("Could not save\n");
            } catch (void*) {}
            image = cvLoadImage("immagineTmp.jpg", 1);
            cvShowImage("Image", image);

            glLoadIdentity();
            glDisable(GL_DEPTH_TEST);
            glOrtho(0, 640, 0, 480, -1, 1);
            glGenTextures(1, &bgid);
            glBindTexture(GL_TEXTURE_2D, bgid);
            // Create linear filtered texture
            glBindTexture(GL_TEXTURE_2D, bgid);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexImage2D(GL_TEXTURE_2D, 0, 3, image->width, image->height, 0,
                         GL_RGB, GL_UNSIGNED_BYTE, image->imageData);
            glBindTexture(GL_TEXTURE_2D, bgid);
            glBegin(GL_QUADS);
                glTexCoord2f(0.0f, 0.0f); glVertex3f(-1.2f, -1.0f, -2.0f);
                glTexCoord2f(1.0f, 0.0f); glVertex3f( 1.2f, -1.0f, -2.0f);
                glTexCoord2f(1.0f, 1.0f); glVertex3f( 1.2f,  1.0f, -2.0f);
                glTexCoord2f(0.0f, 1.0f); glVertex3f(-1.2f,  1.0f, -2.0f);
            glEnd();
            glEnable(GL_DEPTH_TEST);
            glLoadIdentity();

            // do the OpenGL camera setup
            glMatrixMode(GL_PROJECTION);
            glLoadMatrixf(tracker->getProjectionMatrix());

            int markerId = tracker->calc((unsigned char*)(image->imageData));
            float conf = tracker->getConfidence();

            // use the result of calc() to set up the OpenGL transformation
            glMatrixMode(GL_MODELVIEW);
            glLoadMatrixf(tracker->getModelViewMatrix());

            if (markerId != -1) {
                printf("\n\nFound marker %d (confidence %d%%)\n\nPose-Matrix:\n  ",
                       markerId, (int)(conf * 100.0f));
                for (int i = 0; i < 16; i++)
                    printf("%.2f %s", tracker->getModelViewMatrix()[i],
                           (i % 4 == 3) ? "\n  " : "");
            }
            cvReleaseImage(&image);
            return 0;
        }
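
    A minimal sketch of uploading the OpenCV frame directly, without the save/load round-trip (hedged: names like capture and bgid come from the question, and GL_BGR needs OpenGL 1.2+; older Windows headers call it GL_BGR_EXT). IplImage data is BGR-ordered, so uploading it as GL_BGR avoids a channel swap, and cvResize handles the 640x480 to 320x280 scaling mentioned above:

        IplImage* frame = cvQueryFrame(capture);           // owned by the capture, do not release
        IplImage* scaled = cvCreateImage(cvSize(320, 280), IPL_DEPTH_8U, 3);
        cvResize(frame, scaled, CV_INTER_LINEAR);          // 640x480 -> 320x280

        glBindTexture(GL_TEXTURE_2D, bgid);
        // OpenGL's default 4-byte row alignment matches IplImage's widthStep padding.
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, scaled->width, scaled->height,
                     0, GL_BGR, GL_UNSIGNED_BYTE, scaled->imageData);
        cvReleaseImage(&scaled);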

  • OpenGL: Implementing transformation matrix stack

    - by Jakub M.
    In newer OpenGL versions there is no matrix stack. I am working on a simple display engine, and I am going to implement the transformation stack myself. What is a common strategy here? Should I build a push/pop stack and use it while traversing the tree that represents my model? I suppose this is the "old" approach that was deprecated in the newer OpenGL versions, so maybe it is not the best solution (it was removed for some reason).
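
    A client-side push/pop stack driven by the scene tree is still the usual strategy; what was removed from OpenGL is only the built-in storage and the glPushMatrix/glPopMatrix entry points, not the idea. A minimal sketch, assuming GLM for the matrix type:

        #include <glm/glm.hpp>
        #include <glm/gtc/matrix_transform.hpp>
        #include <stack>

        class MatrixStack {
            std::stack<glm::mat4> stack_;
        public:
            MatrixStack() { stack_.push(glm::mat4(1.0f)); }   // start at identity
            const glm::mat4& top() const { return stack_.top(); }
            void push() { stack_.push(stack_.top()); }        // duplicate current top
            void pop() { stack_.pop(); }
            void multiply(const glm::mat4& m) { stack_.top() *= m; }
            void translate(const glm::vec3& v) {
                stack_.top() = glm::translate(stack_.top(), v);
            }
        };

        // While traversing the model tree: push(), apply the node's local
        // transform, upload top() as the shader's matrix uniform, recurse into
        // the children, then pop().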

  • How to fix an OpenGL/SDL runtime error that is probably caused by adding textures [closed]

    - by Arturs Lapins
    Hello, I've recently been working with OpenGL and SDL. I was adding textures to my GL_QUADS, and when I ran my program I got a runtime error. I've searched all over the internet for a fix but couldn't find anything, so I guess I had one more option: asking here. Here is some of my code.

        int loadTexture(std::string fileName) {
            SDL_Surface *image = IMG_Load(fileName.c_str());
            SDL_DisplayFormatAlpha(image);
            unsigned int id;
            glGenTextures(1, &id);
            glBindTexture(GL_TEXTURE_2D, &id);
            glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
            glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
            glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
            glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, image->w, image->h, 0,
                         GL_RGBA, GL_UNSIGNED_BYTE, image->pixels);
            SDL_FreeSurface(image);
            return id;
        }

    That's my loadTexture function.

        void init() {
            glClearColor(0.0, 0.0, 0.0, 1.0);
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            gluPerspective(45.0, 800.0 / 600.0, 1.0, 500.0);
            glMatrixMode(GL_MODELVIEW);
            glEnable(GL_DEPTH_TEST);
            glEnable(GL_TEXTURE_2D);
            tex = loadTexture("test.png");
        }

    That's my init function for OpenGL. By the way, I have declared my tex variable.

        void render() {
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            glLoadIdentity();
            glTranslatef(0.0, 0.0, -10.0);
            glRotatef(rotation, 1.0, 1.0, 1.0);
            glBindTexture(GL_TEXTURE_2D, tex);
            glBegin(GL_QUADS);
            glTexCoord2f(1.0, 0.0); glVertex3f(-2.0, 2.0, 0.0);
            glTexCoord2f(1.0, 1.0); glVertex3f(2.0, 2.0, 0.0);
            glTexCoord2f(1.0, 0.0); glVertex3f(2.0, -2.0, 0.0);
            glTexCoord2f(0.0, 0.0); glVertex3f(-2.0, -2.0, 0.0);
            glEnd();
        }

    That's my render function, where all my OpenGL rendering happens. The render function is called in the main function, which contains a game loop. Here is the runtime error when I run it with Visual C++:

        Unhandled exception at 0x004ffee9 in OpenGL Crap.exe: 0xC0000005: Access violation reading location 0x05c90000.

    I have only had this error since I added textures. I found where the error occurs; it was at this line:

        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, image->w, image->h, 0, GL_RGBA, GL_UNSIGNED_BYTE, image->pixels);

    but I have totally no idea what it could be. Update: thanks to zero298.
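
    A hedged sketch of a more defensive version of that loader (assuming SDL 1.2 with SDL_image, as above). An access violation inside glTexImage2D is most often a pixel-format mismatch: the call promises 4-byte RGBA pixels, so if the loaded surface is really 24-bit (or the load failed), GL reads past the end of image->pixels. Checking the surface and passing the id, not &id, to glBindTexture covers the usual suspects:

        GLuint loadTextureSafe(const std::string& fileName) {
            SDL_Surface* image = IMG_Load(fileName.c_str());
            if (!image) return 0;                      // file missing or undecodable

            // Match the upload format to what the surface actually contains.
            GLenum format = (image->format->BytesPerPixel == 4) ? GL_RGBA : GL_RGB;

            GLuint id;
            glGenTextures(1, &id);
            glBindTexture(GL_TEXTURE_2D, id);          // the id itself, not &id
            glPixelStorei(GL_UNPACK_ALIGNMENT, 1);     // rows may not be 4-byte aligned
            glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
            glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
            glTexImage2D(GL_TEXTURE_2D, 0, format, image->w, image->h, 0,
                         format, GL_UNSIGNED_BYTE, image->pixels);
            SDL_FreeSurface(image);
            return id;
        }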

  • Adding procedural C OpenGL functions to an iPhone project

    - by user309595
    I want to add a few drawing functions to an iPhone project, something like drawTile(x, y, len, wid), which would call OpenGL to draw a box somewhere. I should just be able to write a procedural C file to do this, but the OpenGL libraries are Objective-C and I'm getting weird errors. Do I have to make a class for all of my drawing commands and call class methods?
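
    For what it's worth, OpenGL ES on iOS is a plain C API, so a procedural file is fine; "weird errors" at this point are usually missing headers or link settings rather than anything Objective-C. A hedged sketch of such a helper (drawTile and its parameters come from the question; assumes an OpenGL ES 1.x context is already current and the OpenGLES framework is linked):

        #include <OpenGLES/ES1/gl.h>

        // Draw an axis-aligned rectangle at (x, y) with the given length and width.
        void drawTile(GLfloat x, GLfloat y, GLfloat len, GLfloat wid) {
            const GLfloat verts[] = {
                x,       y,
                x + len, y,
                x,       y + wid,
                x + len, y + wid,
            };
            glEnableClientState(GL_VERTEX_ARRAY);
            glVertexPointer(2, GL_FLOAT, 0, verts);
            glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);  // two triangles forming the box
            glDisableClientState(GL_VERTEX_ARRAY);
        }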

  • Writing an OpenGL-enabled GUI

    - by Jaen
    I am exploring the possibility of writing a notebook analogue that reproduces the look and feel of a traditional notebook, but with the added benefit of customizing the page in ways you can't on paper: ask the program to lay ruled paper here, grid paper there, paste an image, insert a recording from the built-in camera, try handwriting recognition on the tablet input, insert some LaTeX for neat formulas, and so on. I'm interested in developing it just to see if writing notes on a computer can come anywhere close to the comfort that plain paper and pencil offer (hard to do, IMO), and I can always turn it in as a university C++ project, so double gain there. This type of project imposes certain requirements on the user interface:
    - The user must be able to zoom, move and rotate the notebook freely, and I think it's sensible to delegate that to OpenGL, so the prospective GUI needs to work well with OpenGL (preferably being rendered in it).
    - The interface should be navigable with as little keyboard input as the user wishes (perhaps incorporating some sort of gestures), down to limiting the keyboard to modifier keys for pen movements and taps; this includes tablet and possibly multitouch support.
    - The interface should keep out of the way where not needed, come up where needed, and be easily layerable.
    - The notebook sheet itself will be a container for objects representing the notebook blurbs, so it would be nice if the GUI could overlay frames on the exact parts of the OpenGL-drawn sheet to signify what can be done with a given part (moving, rotating, deleting, copying, editing, etc.) and its extents.
    In terms of interface it will probably end up similar to Alias SketchBook Pro (picture: http://1.bp.blogspot.com/_GGxlzvZW-CY/SeKYA_oBdSI/AAAAAAAAErE/J6A0kyXiuqA/s400/Autodesk_Alias_SketchBook_Pro_2.jpg). As far as toolkits go, I'm considering Qt and nui (http://www.libnui.net/), but I'm not really aware how well they would match these requirements or how well they would handle such an application. As far as I know you can coerce Qt into doing widget drawing with OpenGL, but on the other hand I've heard its signal-slot framework isn't exactly optimal and requires its own preprocessor, and I don't know how hard it would be to build all the custom widgets I would need (say, a color wheel, ruler, blurb frames, blurb selection, a tablet-targeted pop-up menu) within the constraints of Qt. Also, quite a few Qt programs on my machine have seemed really sluggish, though that may be down to my old PC or to programmers using Qt suboptimally rather than to the framework itself. As for nui, I know it's also cross-platform, covers the basics you would require of a GUI toolkit and, its biggest plus, is OpenGL-enabled from the start; but I don't know how it fares with custom widgets and other facets, and it certainly has a smaller user base and less elaborate documentation than Qt. The question goes like this: does either of these toolkits fulfill (preferably all of) the requirements, is there a well-fitting toolkit I haven't come across, or should I just roll up my sleeves, get SFML (or maybe Clutter would be better suited to this?) plus something like FastDelegates or libsigc++, and program the GUI framework from the ground up myself? I would be very glad if anyone with experience of a similar GUI project could comment on how well these toolkits hold up, or on whether it's worthwhile to pursue my own GUI toolkit in this case. Sorry for the long-windedness.
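
    On the Qt question specifically: the standard Qt 4-era way to get widget drawing routed through OpenGL is to give a QGraphicsView a QGLWidget viewport, so standard widgets and custom items are composited by GL. A minimal sketch, assuming the QtOpenGL module is available:

        #include <QApplication>
        #include <QGraphicsScene>
        #include <QGraphicsView>
        #include <QGLWidget>

        int main(int argc, char** argv) {
            QApplication app(argc, argv);

            QGraphicsScene scene;
            scene.addText("GUI item composited by OpenGL");

            QGraphicsView view(&scene);
            view.setViewport(new QGLWidget(QGLFormat(QGL::SampleBuffers)));  // GL-backed
            view.show();

            return app.exec();
        }

    Custom items (a color wheel, rulers, blurb frames) would then be QGraphicsItem subclasses painting over the GL-drawn sheet.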

  • gluNewQuadric() before OpenGL's initialization

    - by Schrödinger's cat
    Hello, I'm working on C++ code that uses SDL/OpenGL. Is it possible to create a pointer to a quadric with gluNewQuadric() before having initialized OpenGL with SDL_SetVideoMode? The idea is to create a class with a (pointer to a) quadric class member that has to be instantiated before the SDL_SetVideoMode call; the pointer is initialized in the class's constructor with a gluNewQuadric() call.
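
    A hedged sketch of the usual workaround: create the quadric lazily on first draw, so gluNewQuadric() runs after SDL_SetVideoMode has created the context. (GLU quadrics are client-side state, so creating one early often works anyway, but lazy creation avoids relying on that.)

        #include <GL/glu.h>

        class Sphere {
            GLUquadric* quad_ = nullptr;
        public:
            void draw(double radius) {
                if (!quad_) quad_ = gluNewQuadric();  // first draw happens post-init
                gluSphere(quad_, radius, 32, 16);     // 32 slices, 16 stacks
            }
            ~Sphere() {
                if (quad_) gluDeleteQuadric(quad_);
            }
        };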

  • Correct use of VAOs in OpenGL ES 2 for iOS?

    - by sak
    I'm migrating one of my iOS projects to OpenGL ES 2, and I'm having trouble getting any geometry to render successfully. Here's where I'm setting up the VAO for rendering:

        void bindVAO(int vertexCount, struct Vertex* vertexData, GLushort* indexData,
                     GLuint* vaoId, GLuint* indexId) {
            // generate the VAO & bind
            glGenVertexArraysOES(1, vaoId);
            glBindVertexArrayOES(*vaoId);

            GLuint positionBufferId;

            // generate the VBO & bind
            glGenBuffers(1, &positionBufferId);
            glBindBuffer(GL_ARRAY_BUFFER, positionBufferId);

            // populate the buffer data
            glBufferData(GL_ARRAY_BUFFER, vertexCount, vertexData, GL_STATIC_DRAW);

            // size of vertex position
            GLsizei posTypeSize = sizeof(kPositionVertexType);

            glVertexAttribPointer(kVertexPositionAttributeLocation, kVertexSize,
                                  kPositionVertexTypeEnum, GL_FALSE,
                                  sizeof(struct Vertex),
                                  (void*)offsetof(struct Vertex, position));
            glEnableVertexAttribArray(kVertexPositionAttributeLocation);

            // create & bind index information
            glGenBuffers(1, indexId);
            glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, *indexId);
            glBufferData(GL_ELEMENT_ARRAY_BUFFER, vertexCount, indexData, GL_STATIC_DRAW);

            // restore default state
            glBindVertexArrayOES(0);
            glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
            glBindBuffer(GL_ARRAY_BUFFER, 0);
        }

    And here's the rendering step:

        // bind the frame buffer for drawing
        glBindFramebuffer(GL_FRAMEBUFFER, outputFrameBuffer);
        glClear(GL_COLOR_BUFFER_BIT);

        // use the shader program
        glUseProgram(program);
        glClearColor(0.4, 0.5, 0.6, 0.5);

        float aspect = fabsf(320.0 / 480.0);
        GLKMatrix4 projectionMatrix = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(65.0f),
                                                                aspect, 0.1f, 100.0f);
        GLKMatrix4 modelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, -1.0f);
        GLKMatrix4 mvpMatrix = GLKMatrix4Multiply(projectionMatrix, modelViewMatrix);

        //glUniformMatrix4fv(projectionMatrixUniformLocation, 1, GL_FALSE, projectionMatrix.m);
        glUniformMatrix4fv(modelViewMatrixUniformLocation, 1, GL_FALSE, mvpMatrix.m);

        glBindVertexArrayOES(vaoId);
        glDrawElements(GL_TRIANGLE_FAN, kVertexCount, GL_FLOAT, &indexId);

        // bind the color buffer
        glBindRenderbuffer(GL_RENDERBUFFER, colorRenderBuffer);
        [context presentRenderbuffer:GL_RENDERBUFFER];

    The screen renders the color passed to glClearColor correctly, but not the shape passed into bindVAO. Is my VAO being built correctly? Thanks!
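
    Two hedged observations about the code above, sketched with the question's own names (indexCount standing in for the real number of indices). glBufferData takes a size in bytes, not an element count, and glDrawElements takes the index type plus a byte offset into the bound element array buffer rather than a pointer to the buffer id:

        // Sizes are in bytes:
        glBufferData(GL_ARRAY_BUFFER, vertexCount * sizeof(struct Vertex),
                     vertexData, GL_STATIC_DRAW);
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, indexCount * sizeof(GLushort),
                     indexData, GL_STATIC_DRAW);

        // The type describes the indices (GLushort here), and the last argument
        // is an offset into the VAO's element buffer, not &indexId:
        glBindVertexArrayOES(vaoId);
        glDrawElements(GL_TRIANGLE_FAN, kVertexCount, GL_UNSIGNED_SHORT, (void*)0);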

  • How are OpenGL ES 1 framebuffers and textures sized?

    - by jens
    I am trying to draw to a texture using a framebuffer with OpenGL ES 1.1 on Android (Java). Afterwards I want to overlay this texture full-screen over my game. In theory this works like a charm, but somehow the coordinates are off. For testing I drew something at (0,0) with width and height 200, and it is partly off-screen. This is how I create the framebuffer:

        fb = new int[1];
        depthRb = new int[1];
        renderTex = new int[1];
        gl11ep.glGenFramebuffersOES(1, fb, 0);
        gl11ep.glGenRenderbuffersOES(1, depthRb, 0); // the depth buffer

        gl.glGenTextures(1, renderTex, 0); // generate texture
        gl.glBindTexture(GL10.GL_TEXTURE_2D, renderTex[0]);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_REPEAT);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_REPEAT);
        texBuffer = ByteBuffer.allocateDirect(buf.length * 4)
                              .order(ByteOrder.nativeOrder()).asIntBuffer();
        gl.glTexImage2D(GL10.GL_TEXTURE_2D, 0, GL10.GL_LUMINANCE, texW, texH, 0,
                        GL10.GL_LUMINANCE, GL10.GL_UNSIGNED_BYTE, texBuffer);

        gl11ep.glBindRenderbufferOES(GL11ExtensionPack.GL_RENDERBUFFER_OES, depthRb[0]);
        gl11ep.glRenderbufferStorageOES(GL11ExtensionPack.GL_RENDERBUFFER_OES,
                GL11ExtensionPack.GL_DEPTH_COMPONENT16, texW, texH);

    Before I draw, I do this:

        gl11ep.glBindFramebufferOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES, fb[0]);
        gl.glClearColor(0f, 0f, 0f, 0f);
        // specify texture as color attachment
        gl11ep.glFramebufferTexture2DOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES,
                GL11ExtensionPack.GL_COLOR_ATTACHMENT0_OES, GL10.GL_TEXTURE_2D,
                renderTex[0], 0);
        // attach render buffer as depth buffer
        gl11ep.glFramebufferRenderbufferOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES,
                GL11ExtensionPack.GL_DEPTH_ATTACHMENT_OES,
                GL11ExtensionPack.GL_RENDERBUFFER_OES, depthRb[0]);

    I set texW = 1024 and texH = 512. I render this texture full-screen, with a light mask (size 200x200) placed at (0,0) and at (texW/2, texH/2). You can see that the coordinate system seems not to start at (0,0): that light overlaps the screen edge, and the images are not drawn as squares (my light-cone texture is a circle, not an ellipse). So, how is the coordinate system of this offscreen-drawn texture defined? Thanks
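
    A hedged guess that matches the symptom: the viewport still has the screen's size while the FBO is bound, so the offscreen pass comes out scaled and offset. Setting the viewport to the attachment's size for the offscreen pass (and restoring it afterwards) makes the texture's coordinate system run from (0,0) to (texW,texH). Sketched as C-style GL calls; on Android ES 1.1 these are the corresponding gl/gl11ep methods, and screenW/screenH stand for the real surface size:

        glBindFramebufferOES(GL_FRAMEBUFFER_OES, fb);   // offscreen pass
        glViewport(0, 0, texW, texH);                   // texture-sized coordinates
        // ... draw the light mask here ...
        glBindFramebufferOES(GL_FRAMEBUFFER_OES, 0);    // back to the screen
        glViewport(0, 0, screenW, screenH);             // restore on-screen size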

  • How to snap a 2D Quad to the mouse cursor using OpenGL 3.0/WIN32?

    - by NoobScratcher
    I've been having issues trying to snap a 2D quad to the mouse cursor position. I'm able:
    1.) to get values into posX, posY, posZ
    2.) to translate with the values from those three variables
    But I'm not able to position the quad correctly, in such a way that the 2D quad is near the mouse cursor, using the values from those three variables ("posX, posY, posZ"). I need the mouse cursor in the center of the 2D quad. I'm hoping someone can help me achieve this; I've tried searching around to no avail. Here's the function that is meant to do the snapping but instead creates a weird flicker, or shows nothing at all (only the 3D models show up):

        void display() {
            glClearColor(0.0, 0.0, 0.0, 1.0);
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

            for (std::vector<GLuint>::iterator I = cube.begin(); I != cube.end(); ++I) {
                glCallList(*I);
            }

            if (DrawArea == true) {
                glReadPixels(winX, winY, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &winZ);
                cerr << winZ << endl;

                glGetDoublev(GL_MODELVIEW_MATRIX, modelview);
                glGetDoublev(GL_PROJECTION_MATRIX, projection);
                glGetIntegerv(GL_VIEWPORT, viewport);

                gluUnProject(winX, winY, winZ, modelview, projection, viewport,
                             &posX, &posY, &posZ);

                glBindTexture(GL_TEXTURE_2D, DrawAreaTexture);
                glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
                glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL);
                glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, DrawAreaSurface->w,
                             DrawAreaSurface->h, 0, GL_RGBA, GL_UNSIGNED_BYTE,
                             DrawAreaSurface->pixels);

                glEnable(GL_TEXTURE_2D);
                glBindTexture(GL_TEXTURE_2D, DrawAreaTexture);

                glTranslatef(posX, posY, posZ);
                glBegin(GL_QUADS);
                glTexCoord2f(0.0, 0.0); glVertex3f(0.5, 0.5, 0);
                glTexCoord2f(1.0, 0.0); glVertex3f(0, 0.5, 0);
                glTexCoord2f(1.0, 1.0); glVertex3f(0, 0, 0);
                glTexCoord2f(0.0, 1.0); glVertex3f(0.5, 0, 0);
                glEnd();
            }

            SwapBuffers(hDC);
        }

    I'm using: OpenGL 3.0, the WIN32 API, C++, GLSL. If you really want the full source, here it is: http://pastebin.com/1Ncm9HNf (it's pretty messy).
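
    A hedged sketch of the centering itself, using the question's posX/posY/posZ: draw the quad symmetrically around the origin after the translate, instead of from 0 to 0.5, so the unprojected point ends up in the middle. (Also worth checking: gluUnProject expects window coordinates with a bottom-left origin, so Win32 mouse coordinates typically need winY = viewport[3] - mouseY.)

        const float half = 0.25f;   // half the quad's side length
        glTranslatef(posX, posY, posZ);
        glBegin(GL_QUADS);
        glTexCoord2f(0.0f, 0.0f); glVertex3f(-half, -half, 0.0f);
        glTexCoord2f(1.0f, 0.0f); glVertex3f( half, -half, 0.0f);
        glTexCoord2f(1.0f, 1.0f); glVertex3f( half,  half, 0.0f);
        glTexCoord2f(0.0f, 1.0f); glVertex3f(-half,  half, 0.0f);
        glEnd();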

  • iPod/iPhone OpenGL ES UIView flashes when updating

    - by Dave Viner
    I have a simple iPhone application which uses OpenGL ES (v1) to draw a line based on the touches of the user. In the Xcode Simulator the code works perfectly. However, when I install the app onto an iPod or iPhone, the OpenGL ES view "flashes" when drawing the line. If I disable the line drawing, the flash disappears. By "flash" I mean that the background image (which is an OpenGL texture) disappears momentarily and then reappears, as if the entire scene were completely erased and redrawn. The code which handles the line drawing is the following:

        - (void)renderLineFromPoint:(CGPoint)start toPoint:(CGPoint)end {
            static GLfloat* vertexBuffer = NULL;
            static NSUInteger vertexMax = 64;
            NSUInteger vertexCount = 0, count, i;

            // Allocate vertex array buffer
            if (vertexBuffer == NULL)
                vertexBuffer = malloc(vertexMax * 2 * sizeof(GLfloat));

            // Add points to the buffer so there are drawing points every X pixels
            count = MAX(ceilf(sqrtf((end.x - start.x) * (end.x - start.x) +
                                    (end.y - start.y) * (end.y - start.y)) / kBrushPixelStep), 1);
            for (i = 0; i < count; ++i) {
                if (vertexCount == vertexMax) {
                    vertexMax = 2 * vertexMax;
                    vertexBuffer = realloc(vertexBuffer, vertexMax * 2 * sizeof(GLfloat));
                }
                vertexBuffer[2 * vertexCount + 0] = start.x + (end.x - start.x) * ((GLfloat)i / (GLfloat)count);
                vertexBuffer[2 * vertexCount + 1] = start.y + (end.y - start.y) * ((GLfloat)i / (GLfloat)count);
                vertexCount += 1;
            }

            // Render the vertex array
            glVertexPointer(2, GL_FLOAT, 0, vertexBuffer);
            glDrawArrays(GL_POINTS, 0, vertexCount);

            // Display the buffer
            [context presentRenderbuffer:GL_RENDERBUFFER_OES];
        }

    (This function is based on the function of the same name from the GLPaint sample application.) For the life of me, I cannot figure out why this causes the screen to flash. The line is drawn properly (both in the Simulator and on the iPod), but the flash makes it unusable. Anyone have ideas on how to prevent the "flash"?

  • iPhone OpenGL ES Texture2D Masking

    - by Robert Neagu
    What's the best choice when trying to mask a texture, as in ColorSplash or other apps like iSteam, etc.? I started learning OpenGL ES about 4 days ago (I'm a total rookie) and tried the following approach:
    1) I created a colored texture2D, a grayscale version of the first texture, and a third texture2D called mask.
    2) I also created a texture2D for the brush, which is grayscale and opaque (brush = black = 0,0,0,1 and surroundings = white = 1,1,1,1). My intention was to create an antialiased brush with smooth edges, but I'm fine with a normal one right now.
    3) I searched for masking techniques on the internet and found a ZeusCMD tutorial about masking (Design and Development Tutorials: OpenGL ES Programming Tutorials - Masking). The tutorial tells me to use blending to achieve masking: first draw the colored texture, then the mask with glBlendFunc(GL_DST_COLOR, GL_ZERO), and then the grayscale with glBlendFunc(GL_ONE, GL_ONE). This gives me something close to what I want, but not exactly: the result is masked, but it's somehow over-brightened.
    4) For drawing to the mask texture I used an extra framebuffer object (FBO).
    I'm not really happy with the resulting image (the over-brightened picture) nor with the speed achieved with this method. I think the normal way would be to draw directly to the grayscale (overlay) texture2D, affecting only its alpha channel in the places where the brush hits. Is there a fast way to achieve this? I have searched a lot and never got an answer that's clear and understandable. Then, in the main draw loop, I could draw only the colored texture and blend the grayscale on top with glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA). I just want to learn to use OpenGL ES, and it's driving me nuts because I can't get it to work properly. Advice, or a link to a tutorial, would be much appreciated.
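
    A hedged sketch of the "paint into the alpha channel" idea described above (ES 1.1-style; drawBrushStrokes and drawTexturedQuad are hypothetical helpers, and on ES 1.1 the FBO calls carry the OES suffix). glColorMask restricts writing to the alpha channel, so brush strokes accumulate a mask inside the grayscale overlay's texture, which then blends normally:

        glBindFramebufferOES(GL_FRAMEBUFFER_OES, overlayFbo);  // render into the overlay texture
        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_TRUE);    // touch only alpha
        drawBrushStrokes();                                    // hypothetical helper
        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
        glBindFramebufferOES(GL_FRAMEBUFFER_OES, 0);

        drawTexturedQuad(colorTexture);                        // hypothetical helper
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        drawTexturedQuad(grayOverlayTexture);                  // grayscale, masked by its alpha
        glDisable(GL_BLEND);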

  • How to make smooth movements in OpenGL?

    - by thyrgle
    So this is kind of on topic with my other OpenGL question (not my OpenGL ES question, but OpenGL, the desktop version). If you have someone press a key to move a square, how do you make the square move naturally and less jumpy, but at the same speed I have now? This is my code for the glutKeyboardFunc() function:

        void handleKeypress(unsigned char key, int x, int y) {
            if (key == 'w') {
                for (int i = 0; i < 12; i++) {
                    if (i == 1 || i == 7 || i == 10 || i == 4) {
                        square[i] = square[i] + 1;
                    }
                }
                glutPostRedisplay();
            }
            if (key == 'd') {
                for (int i = 0; i < 12; i++) {
                    if (i == 0 || i % 3 == 0) {
                        square[i] = square[i] + 1;
                    }
                }
                glutPostRedisplay();
            }
            if (key == 's') {
                for (int i = 0; i < 12; i++) {
                    if (i == 1 || i == 7 || i == 10 || i == 4) {
                        square[i] = square[i] - 1;
                    }
                }
                glutPostRedisplay();
            }
            if (key == 'a') {
                for (int i = 0; i < 12; i++) {
                    if (i == 0 || i % 3 == 0) {
                        square[i] = square[i] - 1;
                    }
                }
                glutPostRedisplay();
            }
        }

    I'm sorry if this doesn't quite make sense; I'll try to rephrase it if needed.
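
    The usual fix, sketched under the assumption that freeglut (or any GLUT with keyboard-up callbacks) is available: record key state in down/up callbacks and apply movement in an idle callback scaled by elapsed time. Movement then becomes smooth and independent of the key-repeat rate, while the speed stays under your control (moveSquare is a hypothetical helper wrapping the square[] updates above):

        bool keyDown[256] = { false };

        void handleKeypress(unsigned char key, int, int)   { keyDown[key] = true;  }
        void handleKeyrelease(unsigned char key, int, int) { keyDown[key] = false; }

        void update() {
            static int last = glutGet(GLUT_ELAPSED_TIME);
            int now = glutGet(GLUT_ELAPSED_TIME);
            float dt = (now - last) / 1000.0f;   // seconds since the last frame
            last = now;

            const float speed = 60.0f;           // units per second
            if (keyDown['w']) moveSquare(0.0f,  speed * dt);
            if (keyDown['s']) moveSquare(0.0f, -speed * dt);
            if (keyDown['d']) moveSquare( speed * dt, 0.0f);
            if (keyDown['a']) moveSquare(-speed * dt, 0.0f);
            glutPostRedisplay();
        }

        // In main(): glutKeyboardFunc(handleKeypress);
        //            glutKeyboardUpFunc(handleKeyrelease);
        //            glutIdleFunc(update);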

  • Need guidelines for optimizing WebGL performance by minimizing shader changes

    - by brainjam
    I'm trying to get an idea of the practicality of WebGL for rendering large architectural interior scenes consisting of hundreds of thousands of triangles. These triangles are distributed over many objects, and there are many materials in the scene. On the other hand, there are no moving parts, and the materials tend to be fairly simple, mostly based on texture maps. There is a lot of texture-map sharing; for example, all the chairs in a scene will share a common map. There is also some multitexturing, up to three textures overlaid in a material. I've been doing a little experimentation and reading, and gather that frequently switching materials during a rendering pass will slow things down. For example, a scene with 200K triangles will show significant performance differences depending on whether there are 10 or 1000 objects, assuming that a new material is set up each time an object is displayed. So it seems that if performance is important, the scene should be sorted by material so as to minimize material switching. What I'm looking for is guidelines on how to think about the overhead of the various state changes, and where I get the biggest bang for the buck. For example:
    - What are the relative performance costs of, say, gl.useProgram(), gl.uniformMatrix4fv(), and gl.drawElements()?
    - Should I try to write ubershaders to minimize shader switching?
    - Should I try to aggregate geometry to minimize the number of gl.drawElements() calls?
    I realize that mileage may vary depending on browser, OS, and graphics hardware, and I'm also not looking for heroic measures, just some guidelines from people who already have experience making scenes fast. I'll add that while I've had some experience with fixed-pipeline OpenGL programming in the past, I'm rather new to the WebGL/OpenGL ES 2.0 way of doing things.
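
    For the sorting side, the widely used pattern is to order the draw list by the most expensive state first (program, then texture) so each state is bound once per group of draws. A language-agnostic sketch in C++, with hypothetical useProgram/bindTexture/drawGeometry wrappers standing in for the gl.* calls:

        #include <algorithm>
        #include <tuple>
        #include <vector>

        // Hypothetical thin wrappers over gl.useProgram / gl.bindTexture / gl.drawElements.
        void useProgram(unsigned program);
        void bindTexture(unsigned texture);
        void drawGeometry(unsigned geometry);

        struct DrawItem { unsigned program, texture, geometry; };

        void drawSorted(std::vector<DrawItem>& items) {
            // Programs are usually the costliest switch, then textures.
            std::sort(items.begin(), items.end(),
                      [](const DrawItem& a, const DrawItem& b) {
                          return std::tie(a.program, a.texture)
                               < std::tie(b.program, b.texture);
                      });
            unsigned boundProg = ~0u, boundTex = ~0u;
            for (const DrawItem& it : items) {
                if (it.program != boundProg) { useProgram(it.program); boundProg = it.program; }
                if (it.texture != boundTex)  { bindTexture(it.texture); boundTex = it.texture; }
                drawGeometry(it.geometry);   // one draw call per item
            }
        }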

  • Using a single texture image unit with multiple sampler uniforms

    - by bcrist
    I am writing a batching system which tracks currently bound textures in order to avoid unnecessary glBindTexture() calls. I'm not sure if I need to keep track of which textures have already been used by a particular batch, so that if a texture is used twice it can be bound to a different TIU for the second sampler which requires it. Is it acceptable for an OpenGL application to use the same texture image unit for multiple samplers within the same shader stage? What about samplers in different shader stages? For example:

    Fragment shader:

        ...
        uniform sampler2D samp1;
        uniform sampler2D samp2;
        void main() { ... }

    Main program:

        ...
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, tex_id);
        glUniform1i(samp1_location, 0);
        glUniform1i(samp2_location, 0);
        ...

    I don't see any reason why this shouldn't work, but what if the shader program also included a vertex shader like this:

    Vertex shader:

        ...
        uniform sampler2D samp1;
        void main() { ... }

    In this case, OpenGL is supposed to treat both instances of samp1 as the same variable and exposes a single location for them. Therefore, the same texture unit is being used in the vertex and fragment shaders. I have read that using the same texture in two different shader stages counts doubly against GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS, but this would seem to contradict that. In a quick test on my hardware (HD 6870), all of the following scenarios worked as expected:
    - 1 TIU used for 2 sampler uniforms in the same shader stage
    - 1 TIU used for 1 sampler uniform which is used in 2 shader stages
    - 1 TIU used for 2 sampler uniforms, each occurring in a different stage
    However, I don't know if this is behavior I should expect on all hardware/drivers, or if there are performance implications.

  • What method replaces GL_SELECT for box selection?

    - by Jake
    For the last two weeks I have been working on a box selection that selects shapes using GL_SELECT, and I finally just got it working. When looking up resources online, a significant number of posts say GL_SELECT is deprecated as of OpenGL 3.0, but there is no mention of what replaced that functionality. I learnt OpenGL 1.2 back in college two years ago, but checking Wikipedia now I realise we already have OpenGL 4.0, and I am unaware of what I need to do to keep myself up to date. So, in the meantime, what would be the preferred method for box selection? EDIT: I found http://www.khronos.org/files/opengl-quick-reference-card.pdf; on page 5 this card still lists glRenderMode(GL_SELECT) as part of the OpenGL 3.2 reference.
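
    The replacement most often recommended is color picking: render each selectable object in a unique flat color, then read back the pixels under the selection box and collect the ids. A hedged sketch (drawSceneWithIdColors is a hypothetical pass that encodes each object's id in its color):

        #include <GL/gl.h>
        #include <set>
        #include <vector>

        void drawSceneWithIdColors();   // hypothetical: flat color = object id

        void pickInBox(int x, int y, int w, int h, std::set<unsigned>& hits) {
            glClearColor(0.0f, 0.0f, 0.0f, 0.0f);      // id 0 = background
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            drawSceneWithIdColors();                   // picking pass, never presented

            std::vector<unsigned char> px(w * h * 4);
            glReadPixels(x, y, w, h, GL_RGBA, GL_UNSIGNED_BYTE, px.data());
            for (int i = 0; i < w * h; ++i) {
                // Reassemble the id from the R, G, B bytes.
                unsigned id = px[i*4] | (px[i*4+1] << 8) | (px[i*4+2] << 16);
                if (id != 0) hits.insert(id);          // everything inside the box
            }
        }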

  • Developing OpenGLES2 apps for Ubuntu Software Center

    - by Bram
    I have a game for iOS and Android that I now want to port to Ubuntu, and I plan to distribute it through the Ubuntu Software Center, preferably for free with an in-app purchase. My codebase is currently based on OpenGL ES 2 and written in C++. I could rewrite it for desktop OpenGL, but having programmable shaders is a must; fixed-pipeline OpenGL will not suffice. Is there a feature in place that lets you specify OpenGL requirements in the Ubuntu Software Center? I want to make sure that only Ubuntu users with compatible hardware will be able to download my game. Are there any APIs I could use for getting a suitable OpenGL context, or am I expected to just use GLX for this? Or is the use of GTK mandatory?
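
    On the context question, a hedged sketch: SDL2 can request an ES2-capable context directly, which keeps the code close to the existing mobile ports and avoids raw GLX and GTK alike (assumes SDL2 as a dependency; if the driver cannot provide such a context, SDL_GL_CreateContext returns null, which doubles as a runtime hardware check):

        #include <SDL2/SDL.h>

        int main(int, char**) {
            SDL_Init(SDL_INIT_VIDEO);

            // Ask for an OpenGL ES 2.0 context rather than desktop GL.
            SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_ES);
            SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 2);
            SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 0);

            SDL_Window* win = SDL_CreateWindow("game",
                SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
                1280, 720, SDL_WINDOW_OPENGL);
            SDL_GLContext ctx = SDL_GL_CreateContext(win);
            if (!ctx) {
                // No ES2-capable driver: report unsupported hardware and exit.
                SDL_Quit();
                return 1;
            }

            // ... game loop ...

            SDL_GL_DeleteContext(ctx);
            SDL_DestroyWindow(win);
            SDL_Quit();
            return 0;
        }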

  • Marrying Core Animation with OpenGL ES

    - by Ole Begemann
    Edit: I suppose instead of the long explanation below I might also ask: sending -setNeedsDisplay to an instance of CAEAGLLayer does not cause the layer to redraw (i.e., -drawInContext: is not called). Instead, I get this console message:

        <GLLayer: 0x4d500b0>: calling -display has no effect.

    Is there a way around this issue? Can I invoke -drawInContext: when -setNeedsDisplay is called? Long explanation below.

    I have an OpenGL scene that I would like to animate using Core Animation animations. Following the standard approach to animating custom properties in a CALayer, I created a subclass of CAEAGLLayer and defined a property sceneCenterPoint in it whose value should be animated. My layer also holds a reference to the OpenGL renderer:

        #import <UIKit/UIKit.h>
        #import <QuartzCore/QuartzCore.h>
        #import "ES2Renderer.h"

        @interface GLLayer : CAEAGLLayer {
            ES2Renderer *renderer;
        }

        @property (nonatomic, retain) ES2Renderer *renderer;
        @property (nonatomic, assign) CGPoint sceneCenterPoint;

    I then declare the property @dynamic to let CA create the accessors, override +needsDisplayForKey:, and implement -drawInContext: to pass the current value of the sceneCenterPoint property to the renderer and ask it to render the scene:

        #import "GLLayer.h"

        @implementation GLLayer

        @synthesize renderer;
        @dynamic sceneCenterPoint;

        + (BOOL)needsDisplayForKey:(NSString *)key {
            if ([key isEqualToString:@"sceneCenterPoint"]) {
                return YES;
            } else {
                return [super needsDisplayForKey:key];
            }
        }

        - (void)drawInContext:(CGContextRef)ctx {
            self.renderer.centerPoint = self.sceneCenterPoint;
            [self.renderer render];
        }

        ...

    (If you have access to the WWDC 2009 session videos, you can review this technique in session 303, "Animated Drawing".) Now, when I create an explicit animation for the layer on the key path @"sceneCenterPoint", Core Animation should calculate the interpolated values for the custom property and call -drawInContext: for each step of the animation:

        - (IBAction)animateButtonTapped:(id)sender {
            CABasicAnimation *animation = [CABasicAnimation animationWithKeyPath:@"sceneCenterPoint"];
            animation.duration = 1.0;
            animation.fromValue = [NSValue valueWithCGPoint:CGPointZero];
            animation.toValue = [NSValue valueWithCGPoint:CGPointMake(1.0f, 1.0f)];
            [self.glView.layer addAnimation:animation forKey:nil];
        }

    At least, that is what would happen for a normal CALayer subclass. When I subclass CAEAGLLayer, I get this output on the console for each step of the animation:

        2010-12-21 13:59:22.180 CoreAnimationOpenGL[7496:207] <GLLayer: 0x4e0be20>: calling -display has no effect.
        2010-12-21 13:59:22.198 CoreAnimationOpenGL[7496:207] <GLLayer: 0x4e0be20>: calling -display has no effect.
        2010-12-21 13:59:22.216 CoreAnimationOpenGL[7496:207] <GLLayer: 0x4e0be20>: calling -display has no effect.
        ...

    So it seems that, possibly for performance reasons, -drawInContext: is not getting called for OpenGL layers, because these layers do not use the standard -display method to draw themselves. Can anybody confirm that? Is there a way around it, or can I not use the technique I laid out above? That would mean I would have to implement the animations manually in the OpenGL renderer, which is possible but not as elegant, IMO.
