Search Results

Search found 2515 results on 101 pages for 'opengl es2'.

Page 24 of 101

  • Drawing a texture with an alpha channel doesn't work -- draws black

    - by DevDevDev
    I am modifying GLPaint to use a different background, so in this case it is white. Anyway, the existing stamp they are using assumes the background is black, so I made a new background with an alpha channel. When I draw on the canvas it is still black -- what gives? When I actually draw, I just bind the texture and it works, so something must be wrong in this initialization. Here is the code:

        - (id)initWithCoder:(NSCoder*)coder {
            CGImageRef brushImage;
            CGContextRef brushContext;
            GLubyte *brushData;
            size_t width, height;

            if (self = [super initWithCoder:coder]) {
                CAEAGLLayer *eaglLayer = (CAEAGLLayer *)self.layer;
                eaglLayer.opaque = YES;
                // In this application, we want to retain the EAGLDrawable contents after a call to presentRenderbuffer.
                eaglLayer.drawableProperties = [NSDictionary dictionaryWithObjectsAndKeys:
                    [NSNumber numberWithBool:YES], kEAGLDrawablePropertyRetainedBacking,
                    kEAGLColorFormatRGBA8, kEAGLDrawablePropertyColorFormat, nil];

                context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES1];
                if (!context || ![EAGLContext setCurrentContext:context]) {
                    [self release];
                    return nil;
                }

                // Create a texture from an image.
                // First create a UIImage object from the data in an image file, and then extract the Core Graphics image.
                brushImage = [UIImage imageNamed:@"test.png"].CGImage;

                // Get the width and height of the image.
                width = CGImageGetWidth(brushImage);
                height = CGImageGetHeight(brushImage);

                // Texture dimensions must be a power of 2. If you write an application that allows users to supply an image,
                // you'll want to add code that checks the dimensions and takes appropriate action if they are not a power of 2.
                // Make sure the image exists.
                if (brushImage) {
                    brushData = (GLubyte *) calloc(width * height * 4, sizeof(GLubyte));
                    brushContext = CGBitmapContextCreate(brushData, width, width, 8, width * 4,
                        CGImageGetColorSpace(brushImage), kCGImageAlphaPremultipliedLast);
                    CGContextDrawImage(brushContext, CGRectMake(0.0, 0.0, (CGFloat)width, (CGFloat)height), brushImage);
                    CGContextRelease(brushContext);

                    glGenTextures(1, &brushTexture);
                    glBindTexture(GL_TEXTURE_2D, brushTexture);
                    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
                    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, brushData);
                    free(brushData);
                }

                // Set up OpenGL states.
                glMatrixMode(GL_PROJECTION);
                CGRect frame = self.bounds;
                glOrthof(0, frame.size.width, 0, frame.size.height, -1, 1);
                glViewport(0, 0, frame.size.width, frame.size.height);
                glMatrixMode(GL_MODELVIEW);
                glDisable(GL_DITHER);
                glEnable(GL_TEXTURE_2D);
                glEnable(GL_BLEND);
                glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_DST_ALPHA);
                glEnable(GL_POINT_SPRITE_OES);
                glTexEnvf(GL_POINT_SPRITE_OES, GL_COORD_REPLACE_OES, GL_TRUE);
                glPointSize(width / kBrushScale);
            }
            return self;
        }
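    A likely culprit is the blend state rather than the texture itself: CGBitmapContextCreate is given kCGImageAlphaPremultipliedLast, so the RGB values in brushData are already multiplied by alpha, and glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_DST_ALPHA) discards the destination wherever the opaque canvas already has alpha = 1, which comes out black where the brush is transparent. A minimal sketch of the blend setup that normally works for a premultiplied-alpha brush over an opaque canvas -- this is an assumption about the fix, not code from GLPaint:

        // Premultiplied source over destination: keep the canvas where the brush is transparent.
        glEnable(GL_BLEND);
        glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);

        // If the brush texture held straight (non-premultiplied) alpha instead, the usual pair would be:
        // glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);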

    Read the article

  • drawing thick, textured lines in OpenGL

    - by NateS
    I need to draw thick textured line segments in OpenGL. Actually I need curves made out of short line segments. Here is what I have: In the upper left is an example of two connected line segments. The second image shows that once the lines are given width, they overlap. If I apply a texture that uses translucency, the overlap looks terrible. The third image shows both lines shortened by half the amount necessary to make the thick line corners just touch. This way I can fill the space between the lines with a triangle. On the right you can see this works well (ignore the horizontal line where the crappy texture repeats). But it doesn't always work well. In the bottom left the curve is made of many short line segments. Note the incorrect texture application. My program is written in Java, making use of the LWJGL OpenGL binding (and minor use of Slick, a 2D helper framework). I've made a zip file that contains an executable JAR so you can easily see the problem. It also has the Java code (there is only one source file) and an Eclipse project, so you can instantly run it through Eclipse and hack at it if you like. Here she is: http://n4te.com/temp/lines.zip To run, execute "java -jar lines.jar". You may need "-Djava.library.path=." before -jar if you are not on Windows. Press space to toggle texture/wireframe. The wireframe only shows the line segments; the triangle between them isn't drawn. I don't need to draw arbitrary lines, just bezier curves similar to what you see in the program. Sorry the code is a bit messy; once I have a solution I will refactor. I have investigated using GLUtessellator. It greatly simplified construction of the line, but I found that applying the texture wasn't perfect. It worked most of the time (top image below), but long vertical curves would have severe texture distortion (bottom image below). This turned out to be much easier to code, but in the end worse than my approach. I believe what I'm trying to do is called "line tessellation" or "stroke tessellation". I assume this has been solved already? Is there standard code I can leverage? Otherwise, how can I fix my code so that the texture does not freak out on short, vertical curves?
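    For reference, the usual way to remove both the overlap and the gap at a joint is a miter join: offset each segment by half the stroke width along its normal and meet the two offset edges at a single vertex along the averaged normal, whose length is (width / 2) / cos(theta / 2), where theta is the turn angle. A small sketch of that computation (plain C++; the names are mine, not from the linked project):

        #include <cmath>

        struct Vec2 { float x, y; };

        static Vec2 normalize(Vec2 v) {
            float len = std::sqrt(v.x * v.x + v.y * v.y);
            return Vec2{ v.x / len, v.y / len };
        }

        // Offset of the joint vertex shared by segments a->b and b->c, for a stroke of the
        // given total width. Add it to b for one edge of the stroke, subtract it for the other.
        static Vec2 miterOffset(Vec2 a, Vec2 b, Vec2 c, float width) {
            Vec2 d1 = normalize(Vec2{ b.x - a.x, b.y - a.y });
            Vec2 d2 = normalize(Vec2{ c.x - b.x, c.y - b.y });
            Vec2 n1 = { -d1.y, d1.x };                                // first segment's normal
            Vec2 n2 = { -d2.y, d2.x };                                // second segment's normal
            Vec2 m  = normalize(Vec2{ n1.x + n2.x, n1.y + n2.y });    // miter direction
            float cosHalf = m.x * n1.x + m.y * n1.y;                  // cos(theta / 2)
            float len = (width * 0.5f) / cosHalf;                     // clamp this for very sharp turns
            return Vec2{ m.x * len, m.y * len };
        }

    For the distortion on short segments, one option is to drive the texture's u coordinate from the accumulated arc length along the curve (divided by the texture's repeat length) instead of restarting it per segment, so short segments no longer squeeze a full texture repeat into a few pixels.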

    Read the article

  • Pyglet OpenGL drawing anti-aliasing

    - by Jared Forsyth
    I've been looking around for ways to anti-alias lines in OpenGL, but none of them seem to work... here's some example code:

        import pyglet
        from pyglet.gl import *

        window = pyglet.window.Window(resizable=True)

        @window.event
        def on_draw():
            window.clear()
            pyglet.gl.glColor4f(1.0, 0, 0, 1.0)
            glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)
            glEnable(GL_BLEND)
            glEnable(GL_LINE_SMOOTH)
            glHint(GL_LINE_SMOOTH_HINT, GL_DONT_CARE)
            glLineWidth(3)
            pyglet.graphics.draw(2, pyglet.gl.GL_LINES,
                ('v2i', (10, 15, 300, 305))
            )

        pyglet.app.run()

    Can anyone see what I am doing wrong?

    Read the article

  • iPhone OpenGL ES: How do I use gravity vector to correctly transform scene for augmented reality

    - by gpdawson
    I'm trying to figure out how to get an OpenGL-specified object to be displayed correctly according to the device orientation (i.e. according to the gravity vector from the accelerometer and the heading from the compass). The GLGravity sample project has an example which is almost like this (despite ignoring heading), but it has some glitches. For example, the teapot jumps 180 degrees as the device viewing angle crosses the horizon, and it also rotates spuriously if you tilt the device from portrait into landscape. This is fine for the context of that app, as it just shows off an object and it doesn't matter that it does these things. But it means the code just doesn't work when you attempt to emulate real-life viewing of an OpenGL object according to the device's orientation. What happens is that it almost works, but the heading rotation you apply from the compass gets "corrupted" by the spurious additional rotations seen in the GLGravity example project. Can anyone provide sample code that shows how to adjust correctly for the device orientation (i.e. gravity vector), or how to fix the GLGravity example so that it doesn't include spurious heading changes?

        //Clear matrix to be used to rotate from the current referential to one based on the gravity vector
        bzero(matrix, sizeof(matrix));
        matrix[3][3] = 1.0;

        //Setup first matrix column as gravity vector
        matrix[0][0] = accel[0] / length;
        matrix[0][1] = accel[1] / length;
        matrix[0][2] = accel[2] / length;

        //Setup second matrix column as an arbitrary vector in the plane perpendicular to the gravity vector
        //{Gx, Gy, Gz}, defined by the equation "Gx * x + Gy * y + Gz * z = 0" in which we arbitrarily set x=0 and y=1
        matrix[1][0] = 0.0;
        matrix[1][1] = 1.0;
        matrix[1][2] = -accel[1] / accel[2];
        length = sqrtf(matrix[1][0] * matrix[1][0] + matrix[1][1] * matrix[1][1] + matrix[1][2] * matrix[1][2]);
        matrix[1][0] /= length;
        matrix[1][1] /= length;
        matrix[1][2] /= length;

        //Setup third matrix column as the cross product of the first two
        matrix[2][0] = matrix[0][1] * matrix[1][2] - matrix[0][2] * matrix[1][1];
        matrix[2][1] = matrix[1][0] * matrix[0][2] - matrix[1][2] * matrix[0][0];
        matrix[2][2] = matrix[0][0] * matrix[1][1] - matrix[0][1] * matrix[1][0];

        //Finally load matrix
        glMultMatrixf((GLfloat*)matrix);
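    One observation on why the jump happens: the second column is built by fixing x=0, y=1 and solving the plane equation for z, which divides by accel[2]. When the device passes through vertical, accel[2] crosses zero, the division blows up and the basis flips, hence the 180-degree jump. A sketch of a basis construction that avoids dividing by a single accelerometer component (this is my own workaround, not the GLGravity fix, and it does not by itself solve the heading problem):

        #include <cmath>

        // Build an orthonormal basis from the gravity vector using cross products with a
        // helper axis that is never close to parallel with gravity. Output is written in
        // OpenGL column-major order so it can be passed to glMultMatrixf().
        static void basisFromGravity(const float accel[3], float m[16])
        {
            float g[3] = { accel[0], accel[1], accel[2] };
            float len = std::sqrt(g[0]*g[0] + g[1]*g[1] + g[2]*g[2]);
            for (int i = 0; i < 3; ++i) g[i] /= len;

            // Helper axis: z unless gravity is nearly along z, then x instead. (Switching the
            // helper axis still causes a jump when the device lies flat; hysteresis can hide it.)
            float ref[3] = { 0.0f, 0.0f, 1.0f };
            if (std::fabs(g[2]) > 0.9f) { ref[0] = 1.0f; ref[2] = 0.0f; }

            // side = ref x g (normalized), up = g x side (unit by construction).
            float side[3] = { ref[1]*g[2] - ref[2]*g[1],
                              ref[2]*g[0] - ref[0]*g[2],
                              ref[0]*g[1] - ref[1]*g[0] };
            len = std::sqrt(side[0]*side[0] + side[1]*side[1] + side[2]*side[2]);
            for (int i = 0; i < 3; ++i) side[i] /= len;
            float up[3] = { g[1]*side[2] - g[2]*side[1],
                            g[2]*side[0] - g[0]*side[2],
                            g[0]*side[1] - g[1]*side[0] };

            const float out[16] = { g[0],    g[1],    g[2],    0.0f,    // column 0: gravity
                                    up[0],   up[1],   up[2],   0.0f,    // column 1: up
                                    side[0], side[1], side[2], 0.0f,    // column 2: side
                                    0.0f,    0.0f,    0.0f,    1.0f };
            for (int i = 0; i < 16; ++i) m[i] = out[i];
        }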

    Read the article

  • Qt 4.6 OpenGL GLSL

    - by Zeke
    I'm trying to find something like NeHe's tutorials for Qt, all in GLSL, because let's face it, old-style OpenGL is dead and shaders are the only way now. With Qt 4.6 they introduced QMatrix4x4, QVector3D, and the shader classes, but I cannot find any tutorials for them. All the ones I do find use crappy SDL and/or GLUT (which are just plain useless).
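    Not a tutorial, but for orientation, the Qt 4.6 shader classes fit together roughly like this inside a QGLWidget subclass. This is a minimal sketch under my own assumptions (a widget class named GLWidget with a QGLShaderProgram member called program; the shader sources are trivial placeholders), not an official example:

        #include <QGLShaderProgram>
        #include <QMatrix4x4>

        void GLWidget::initializeGL()
        {
            // Compile and link a trivial vertex/fragment shader pair.
            program.addShaderFromSourceCode(QGLShader::Vertex,
                "attribute highp vec4 vertex;\n"
                "uniform highp mat4 matrix;\n"
                "void main() { gl_Position = matrix * vertex; }");
            program.addShaderFromSourceCode(QGLShader::Fragment,
                "uniform mediump vec4 color;\n"
                "void main() { gl_FragColor = color; }");
            program.link();
        }

        void GLWidget::paintGL()
        {
            glClear(GL_COLOR_BUFFER_BIT);
            static const GLfloat triangle[] = { -0.5f, -0.5f,   0.5f, -0.5f,   0.0f, 0.5f };

            QMatrix4x4 matrix;                              // identity transform for the sketch
            program.bind();
            program.setUniformValue("matrix", matrix);
            program.setUniformValue("color", QColor(255, 0, 0, 255));
            program.enableAttributeArray("vertex");
            program.setAttributeArray("vertex", triangle, 2);
            glDrawArrays(GL_TRIANGLES, 0, 3);               // one red triangle
            program.disableAttributeArray("vertex");
            program.release();
        }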

    Read the article

  • Eye candy in OpenGL

    - by anon
    I'm interested in creating realtime visual special effects. I am limited to OpenGL (in particular, computing power of a MacBook Pro). I want to learn more about doing cool UI/special effects (think the "computers/displays" in Iron Man / Avatar). What are good books/resources for this? Thanks!

    Read the article

  • Lightning effect in opengl es

    - by sad_panda
    Is there a way to create a lightning effect on the iPhone using OpenGL (like this app)? Right now I have modified the GLPaint sample to draw random points around a line (between two points that the user touches) and then connect them, but the result is a zigzag line that constantly jumps around and lags horribly on the actual device.
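    A common way to get a bolt shape without drawing point by point is midpoint displacement: recursively split the segment between the two touch points, jitter each midpoint along the segment's normal, and submit the whole thing as one GL_LINE_STRIP vertex array, regenerating it only a few times per second so it doesn't jump every frame. A rough sketch of the generation step (plain C++, independent of the GLPaint code):

        #include <cmath>
        #include <cstdlib>
        #include <vector>

        struct Pt { float x, y; };

        // Recursively displace the midpoint of (a, b) perpendicular to the segment.
        static void bolt(Pt a, Pt b, float offset, int depth, std::vector<Pt>& out)
        {
            if (depth == 0) {
                out.push_back(b);
                return;
            }
            Pt mid = { (a.x + b.x) * 0.5f, (a.y + b.y) * 0.5f };
            float dx = b.x - a.x, dy = b.y - a.y;
            float len = std::sqrt(dx * dx + dy * dy);
            float r = ((float)std::rand() / RAND_MAX) * 2.0f - 1.0f;   // random in [-1, 1]
            mid.x += (-dy / len) * r * offset;                         // push along the normal
            mid.y += ( dx / len) * r * offset;
            bolt(a, mid, offset * 0.5f, depth - 1, out);
            bolt(mid, b, offset * 0.5f, depth - 1, out);
        }

        // Build the strip; hand the result to glVertexPointer + glDrawArrays(GL_LINE_STRIP, 0, pts.size()).
        std::vector<Pt> makeBolt(Pt from, Pt to, float offset = 40.0f, int depth = 6)
        {
            std::vector<Pt> pts;
            pts.push_back(from);
            bolt(from, to, offset, depth, pts);
            return pts;
        }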

    Read the article

  • obtain OpenGL camera(view) matrix from openCV findhomography

    - by user1828449
    I want to build an AR application on Android with OpenCV and OpenGL. I found that GL_MODELVIEW can place the camera and the model in world coordinates, as in the following link. I tried loading a simple model-view matrix with gl.glLoadMatrixf(newMat, 0); and it works. Now I want to draw a 3D model on top of my target image once I have the four corner points of the target image, and for that a model-view matrix is needed. So I want to know whether I can obtain the camera view matrix from OpenCV's findHomography.
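    A homography by itself only relates the two image planes; to place a 3D model you also need the camera intrinsics, and with those in hand it is usually simpler to skip findHomography and pass the four corner correspondences to cv::solvePnP, then repack the resulting rotation and translation as an OpenGL model-view matrix. A sketch of that route (marker size, intrinsics and corner ordering are placeholders you would substitute; the y/z sign flips convert OpenCV's camera convention to OpenGL's):

        #include <opencv2/core/core.hpp>
        #include <opencv2/calib3d/calib3d.hpp>
        #include <vector>

        // Pose of a planar target from its four detected corners, written as a column-major
        // OpenGL model-view matrix suitable for glLoadMatrixf.
        void modelViewFromCorners(const std::vector<cv::Point2f>& imageCorners,
                                  const cv::Mat& cameraMatrix, const cv::Mat& distCoeffs,
                                  float markerSize, float outModelView[16])
        {
            float h = markerSize * 0.5f;
            std::vector<cv::Point3f> objectCorners;          // target corners in its own plane, z = 0
            objectCorners.push_back(cv::Point3f(-h, -h, 0));
            objectCorners.push_back(cv::Point3f( h, -h, 0));
            objectCorners.push_back(cv::Point3f( h,  h, 0));
            objectCorners.push_back(cv::Point3f(-h,  h, 0));

            cv::Mat rvec, tvec;
            cv::solvePnP(objectCorners, imageCorners, cameraMatrix, distCoeffs, rvec, tvec);

            cv::Mat R;
            cv::Rodrigues(rvec, R);                           // 3x3 rotation from the axis-angle rvec

            // OpenCV's camera looks down +z with +y down; OpenGL looks down -z with +y up,
            // so the y and z rows of [R|t] are negated.
            for (int col = 0; col < 3; ++col) {
                outModelView[col * 4 + 0] =  (float)R.at<double>(0, col);
                outModelView[col * 4 + 1] = -(float)R.at<double>(1, col);
                outModelView[col * 4 + 2] = -(float)R.at<double>(2, col);
                outModelView[col * 4 + 3] = 0.0f;
            }
            outModelView[12] =  (float)tvec.at<double>(0);
            outModelView[13] = -(float)tvec.at<double>(1);
            outModelView[14] = -(float)tvec.at<double>(2);
            outModelView[15] = 1.0f;
        }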

    Read the article

  • draw ellipse in MFC C++ just use OPENGL?

    - by taki
    I'm trying to draw an ellipse in MFC C++ using just OpenGL. It runs and draws an ellipse, but it does not match the mouse coordinates. My code:

    In class Ellispe.cpp:

        void VeEllispe::putPixel(int x, int y, int xc, int yc)
        {
            glBegin(GL_POINTS);   // begin drawing points
            glVertex2i(xc + x, yc + y);
            glVertex2i(xc - x, yc - y);
            glVertex2i(xc + x, yc - y);
            glVertex2i(xc - x, yc + y);
            glEnd();
        }

        void VeEllispe::Draw(int a, int b, int xc, int yc)
        {
            int x = 0;
            int y = b;
            float a2 = (a*a);
            float b2 = (b*b);
            float p = b2 - a2*b + 0.25*a2;
            while (2*b2*x <= 2*y*a2)
            {
                putPixel(x, y, xc, yc);
                if (p < 0)
                    p += 2*b2*x + 3*b2;
                else
                {
                    p += 2*b2*x + 3*b2 - 2*a2*y + 2*a2;
                    y--;
                }
                x++;
            }
            x = a;
            y = 0;
            p = a2 - b2*a + 0.25*b2;
            while (2*a2*y <= 2*x*b2)
            {
                putPixel(x, y, xc, yc);
                if (p < 0)
                    p += 2*a2*y + 3*a2;
                else
                {
                    p += 2*a2*y + 3*a2 - 2*b2*x + 2*b2;
                    x--;
                }
                y++;
            }
        }

    In class XYZView.cpp:

        VeEllispe e;

        void Cbaitap1View::OnDraw(CDC* /*pDC*/)
        {
            Cbaitap1Doc* pDoc = GetDocument();
            ASSERT_VALID(pDoc);
            if (!pDoc)
                return;
            wglMakeCurrent(m_hDC, m_hRC);
            glClear(GL_COLOR_BUFFER_BIT);
            glClearColor(1.0, 1.0, 1.0, 0.0);
            glColor3f(0.0, 0.0, 1.0);
            glPointSize(2);
            if (state == 4)
                e.Draw(X2, Y2, X1, Y1);
            glFlush();
            SwapBuffers(::GetDC(GetSafeHwnd()));
            wglMakeCurrent(NULL, NULL);
        }

    And if possible, can you point me to documentation on drawing triangles or parabolas with just OpenGL?
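    One likely cause of the coordinate mismatch (an assumption on my part, since the projection setup isn't shown): no projection is established that maps GL units to client-area pixels, and Windows reports mouse positions from the top-left corner while OpenGL's y axis grows upward. A small sketch of a setup that makes one GL unit equal one pixel with the origin at the top-left, so WM_LBUTTONDOWN coordinates can be passed straight to Draw():

        // Call this after wglMakeCurrent (e.g. at the start of OnDraw or in OnSize).
        void SetupPixelProjection(CWnd* wnd)
        {
            CRect rc;
            wnd->GetClientRect(&rc);

            glViewport(0, 0, rc.Width(), rc.Height());
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            // Top > bottom flips the y axis so it matches Windows mouse coordinates.
            glOrtho(0.0, rc.Width(), rc.Height(), 0.0, -1.0, 1.0);
            glMatrixMode(GL_MODELVIEW);
            glLoadIdentity();
        }

    Alternatively, keep a bottom-left origin (glOrtho(0, w, 0, h, -1, 1)) and flip the mouse point in the handler with point.y = rc.Height() - point.y.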

    Read the article

  • OpenGl glutIdleFunc(void (*func)(void))

    - by PHENOMERICAN
    I'm trying to design a very simple animation in OpenGL, such as rotating and translating objects. In the Red Book, I found that using GLUT's glutIdleFunc() is okay for a simple animation. How many times per second does glutIdleFunc(...) call the function? Thank you.
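    For context, the idle callback has no defined frequency: GLUT simply calls it whenever the event queue is empty, so it runs as fast as the machine allows and the rate varies from frame to frame. If a fixed update rate is wanted, the usual options are to scale the animation by elapsed time inside the idle function, or to use glutTimerFunc instead. A minimal sketch of the timer approach (my own example, not from the Red Book):

        #include <GL/glut.h>

        static float angle = 0.0f;

        static void onTimer(int value)
        {
            angle += 2.0f;                       // advance the animation by a fixed step
            glutPostRedisplay();                 // ask GLUT to run the display callback
            glutTimerFunc(16, onTimer, 0);       // re-arm the timer: roughly 60 updates per second
        }

        // In main(), after creating the window and registering the display callback:
        //     glutTimerFunc(16, onTimer, 0);
        //     glutMainLoop();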

    Read the article

  • OpenGL ES depth buffer

    - by Istvan
    Hi! I was wondering whether I can deallocate the depth buffer in iPhone OpenGL ES to conserve memory, or does it stay around until the application finishes? I only need depth testing at the beginning of the application.
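    If the depth buffer is a renderbuffer you created yourself (as in Apple's OpenGL ES 1.1 template, where it is attached to the view's framebuffer object), it can be detached and deleted once depth testing is no longer needed. A sketch assuming that template-style setup and its variable names (viewFramebuffer, depthRenderbuffer):

        // Detach the depth renderbuffer from the FBO, then delete it to release its storage.
        glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);
        glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_DEPTH_ATTACHMENT_OES,
                                     GL_RENDERBUFFER_OES, 0);
        glDeleteRenderbuffersOES(1, &depthRenderbuffer);
        depthRenderbuffer = 0;

        glDisable(GL_DEPTH_TEST);   // there is no depth attachment any more, so stop depth testing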

    Read the article

  • iPad OpenGL ES FPS too slow!

    - by pop850
    I'm currently working on an OpenGL ES 1.1 app for the iPad. It's running at the full 768x1024 iPad resolution, with textures, polygons, and the works, but only at about 30 fps (not fast enough for my purposes). I'm pretty sure it's not my code, because when I lowered the resolution the FPS increased, eventually reaching the normal 60 at iPod touch resolution. Is anyone else encountering this FPS slowdown? Should I render at a reduced size and then scale up? Any guidance is much appreciated!

    Read the article

  • Trouble with Native OpenGL Renderer

    - by CaseyB
    I am using native code to render OpenGL in Android and I get periodic errors that look like this:

        ERROR/IMGSRV(1435): frameresource.c:610: WaitUntilResourceIsNotNeeded: PVRSRVEventObjectWait failed
        ERROR/IMGSRV(1018): sgxif.c:124: WaitForRender: PVRSRVEventObjectWait failed
        ERROR/IMGSRV(1435): osfunc_um.c:318: PVRSRVEventObjectWait: Error 13 returned

    Once these errors come up I have to restart the phone or the rendering won't start again correctly. I have done a lot of web searching and I can't find out what could be the cause of these errors. Does anyone else have any suggestions?

    Read the article

  • Display arbitrary size 2d image in opengl

    - by Martin Beckett
    I need to display 2D images in OpenGL using textures. The image dimensions are not necessarily powers of 2. I thought of creating a larger texture and restricting the display to the part I was using, but the image data will be shared with OpenCV, so I don't want to copy data a pixel at a time into a larger texture. EDIT: it turns out that even the simplest Intel on-board graphics under Windows supports non-power-of-2 textures.
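    For the padded-texture route, no per-pixel copying is needed: allocate the power-of-two texture once with a NULL data pointer, then update just the image-sized corner in a single call with glTexSubImage2D straight from the external buffer, and scale the quad's texture coordinates down to the used fraction. A sketch (GL_RGB assumed; OpenCV's default BGR ordering would need GL_BGR_EXT or a channel swap first):

        #include <GL/gl.h>

        // Create a padded power-of-two texture and fill only the imgW x imgH corner from
        // an external pixel buffer. maxU/maxV are the texture coordinates of the image edge.
        GLuint uploadPadded(int imgW, int imgH, const unsigned char* pixels, float* maxU, float* maxV)
        {
            int potW = 1, potH = 1;
            while (potW < imgW) potW <<= 1;          // round the dimensions up to powers of two
            while (potH < imgH) potH <<= 1;

            GLuint tex;
            glGenTextures(1, &tex);
            glBindTexture(GL_TEXTURE_2D, tex);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            glPixelStorei(GL_UNPACK_ALIGNMENT, 1);   // rows of arbitrary width are tightly packed

            // Allocate the padded texture without data, then fill only the used region.
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, potW, potH, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
            glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, imgW, imgH, GL_RGB, GL_UNSIGNED_BYTE, pixels);

            *maxU = (float)imgW / potW;              // draw the quad with u in [0, maxU],
            *maxV = (float)imgH / potH;              // v in [0, maxV] to show only the image
            return tex;
        }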

    Read the article

  • OpenGL Motion blur with the accumulation buffer

    - by Klaus
    Hello, I'm trying to achieve a motion blur effect in my OpenGL application. I read somewhere about this solution using the accumulation buffer:

        glAccum(GL_MULT, 0.90);
        glAccum(GL_ACCUM, 0.10);
        glAccum(GL_RETURN, 1.0);
        glFlush();

    at the end of the render loop. But nothing happens... What am I missing?
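    One thing worth checking: the default framebuffer has no accumulation buffer unless one is requested when the context is created (GLUT_ACCUM in glutInitDisplayMode, non-zero accumulation bits in a Win32 pixel format descriptor, and so on), and with zero accumulation bits the glAccum calls silently do nothing. A minimal sketch using GLUT for brevity (the accumulation calls themselves match the snippet above):

        #include <GL/glut.h>

        static void display(void)
        {
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            // ... draw the scene for this frame here ...
            glAccum(GL_MULT, 0.90f);    // decay what has been accumulated so far
            glAccum(GL_ACCUM, 0.10f);   // add 10% of the frame just rendered
            glAccum(GL_RETURN, 1.0f);   // write the blended result back to the color buffer
            glutSwapBuffers();
        }

        int main(int argc, char** argv)
        {
            glutInit(&argc, argv);
            // GLUT_ACCUM is the part that is easy to forget.
            glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH | GLUT_ACCUM);
            glutInitWindowSize(640, 480);
            glutCreateWindow("motion blur");
            glutDisplayFunc(display);
            glutIdleFunc(glutPostRedisplay);
            glutMainLoop();
            return 0;
        }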

    Read the article

  • Detecting Touches in an OpenGL rendered scene

    - by Icky
    Hey. I was wondering whether there is a way to detect a touch in an OpenGL-rendered scene. What I have is a set of images which are being rendered in my main view. Now if the user touches one of these images (or objects), I would like to know which one was touched, similar to the CGRectContainsPoint(frame, [touch locationInView:self.view]) method. Is there an easy way to find out? If there is none, knowing that would also help.
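    For 2D sprites the usual answer is not to ask OpenGL at all: keep each image's screen-space rectangle on the CPU (you already know it, since you drew the quad there) and hit-test the touch against those rectangles, remembering that UIKit's y axis grows downward while OpenGL's grows upward. A small sketch of that test (my own structure names, not an existing API):

        // Rectangle of a sprite exactly as it was passed to the draw call, in GL coordinates.
        struct SpriteRect { float x, y, w, h; };

        static bool hitTest(const SpriteRect& r, float touchX, float touchY, float viewHeight)
        {
            float glX = touchX;
            float glY = viewHeight - touchY;       // flip UIKit's top-left origin to GL's bottom-left
            return glX >= r.x && glX <= r.x + r.w &&
                   glY >= r.y && glY <= r.y + r.h;
        }

    Loop over the sprites front-to-back and return the first hit, so overlapping images resolve to the one drawn on top. For genuinely 3D scenes, the equivalent techniques are ray casting with gluUnProject or color picking into an offscreen buffer.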

    Read the article

  • OpenGL suppresses exceptions in MFC dialog-based application

    - by Mikhail
    Hello. I have an MFC-driven dialog-based application created with MSVS2005. Here is my problem, step by step. I have a button on my dialog and a corresponding click handler with code like this:

        int* i = 0;
        *i = 3;

    I'm running the debug version of the program, and when I click the button, Visual Studio grabs focus and reports an "Access violation writing location" exception; the program cannot recover from the error and all I can do is stop debugging. And this is the right behavior. Now I add some OpenGL initialization code in the OnInitDialog() method:

        HDC DC = GetDC(GetSafeHwnd());

        static PIXELFORMATDESCRIPTOR pfd =
        {
            sizeof(PIXELFORMATDESCRIPTOR),  // size of this pfd
            1,                              // version number
            PFD_DRAW_TO_WINDOW |            // support window
            PFD_SUPPORT_OPENGL |            // support OpenGL
            PFD_DOUBLEBUFFER,               // double buffered
            PFD_TYPE_RGBA,                  // RGBA type
            24,                             // 24-bit color depth
            0, 0, 0, 0, 0, 0,               // color bits ignored
            0,                              // no alpha buffer
            0,                              // shift bit ignored
            0,                              // no accumulation buffer
            0, 0, 0, 0,                     // accum bits ignored
            32,                             // 32-bit z-buffer
            0,                              // no stencil buffer
            0,                              // no auxiliary buffer
            PFD_MAIN_PLANE,                 // main layer
            0,                              // reserved
            0, 0, 0                         // layer masks ignored
        };

        int pixelformat = ChoosePixelFormat(DC, &pfd);
        SetPixelFormat(DC, pixelformat, &pfd);
        HGLRC hrc = wglCreateContext(DC);
        ASSERT(hrc != NULL);
        wglMakeCurrent(DC, hrc);

    Of course this is not exactly what I do; it is a simplified version of my code. Now the strange things begin to happen: all the initialization is fine and there are no errors in OnInitDialog(), but when I click the button... no exception is thrown. Nothing happens. At all. If I set a breakpoint at the *i = 3; line and press F11 on it, the handler function halts immediately and focus returns to the application, which continues to work fine. I can click the button again and the same thing happens. It seems as if someone handled the access violation and silently returned execution to the main message loop. If I comment out the line wglMakeCurrent(DC, hrc);, everything works as before: the exception is thrown, Visual Studio catches it, shows the error message, and the program must be terminated afterwards. I experience this problem under Windows 7 64-bit on an NVIDIA GeForce 8800 with the latest drivers (of 11.01.2010) installed. My colleague runs Windows Vista 32-bit and has no such problem: the exception is thrown and the application crashes in both cases. Well, hope good guys will help me :) PS The problem was originally posted under this topic.

    Read the article

  • Drawing bitmaps faster on Android canvas or OpenGL

    - by Ben Mc
    I currently have a game written using the Android canvas. It is completely 2D, and I draw bitmaps as sprites on the canvas, and it technically works, but I have a few features that I need to add that will require drawing many more bitmaps on the screen, and there will be a lot more movement. The app needs more power. What is the best way to go from this method of drawing Bitmaps on a canvas to using OpenGL so I can draw them faster?
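    For the OpenGL route (typically a GLSurfaceView with a Renderer), each sprite becomes a textured quad; the GL10 interface passed to the renderer exposes the same OpenGL ES 1.x calls shown below, and android.opengl.GLUtils.texImage2D can upload each Bitmap into a texture once up front. A sketch of the per-sprite draw in plain C/C++ GL ES 1.x (every call has a one-to-one gl.* equivalent on GL10):

        #include <GLES/gl.h>

        // Draw one sprite as a textured quad using vertex arrays. "tex" is a texture that was
        // previously filled from the Bitmap; x/y/w/h are in the units of the current ortho projection.
        void drawSprite(GLuint tex, float x, float y, float w, float h)
        {
            const GLfloat verts[] = { x, y,   x + w, y,   x, y + h,   x + w, y + h };
            const GLfloat uvs[]   = { 0, 1,   1, 1,       0, 0,       1, 0 };

            glEnable(GL_TEXTURE_2D);
            glBindTexture(GL_TEXTURE_2D, tex);

            glEnableClientState(GL_VERTEX_ARRAY);
            glEnableClientState(GL_TEXTURE_COORD_ARRAY);
            glVertexPointer(2, GL_FLOAT, 0, verts);
            glTexCoordPointer(2, GL_FLOAT, 0, uvs);
            glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);      // two triangles, no index buffer needed
            glDisableClientState(GL_TEXTURE_COORD_ARRAY);
            glDisableClientState(GL_VERTEX_ARRAY);
        }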

    Read the article
