Search Results

Search found 2513 results on 101 pages for 'opengl'.


  • Why does OpenGL seem to ignore my glBindTexture call?

    - by Killrazor
    I'm having problems making a simple sprite renderer. I load two different textures, then bind each texture and draw two squares, one with each texture. But only the texture of the first rendered object is drawn on both squares. It's as if I'd only loaded one texture, or as if glBindTexture didn't work properly. I know GL is a state machine, but I thought all you need to do to change the active texture is call glBindTexture. I load a texture with this method:

        bool CTexture::generate( utils::CImageBuff* img )
        {
            assert(img);
            m_image = img;
            CHECKGL(glGenTextures(1, &m_textureID));
            CHECKGL(glBindTexture(GL_TEXTURE_2D, m_textureID));
            CHECKGL(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR));
            CHECKGL(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR));
            //CHECKGL(glTexImage2D(GL_TEXTURE_2D, 0, img->getBpp(), img->getWitdh(), img->getHeight(), 0, img->getFormat(), GL_UNSIGNED_BYTE, img->getImgData()));
            CHECKGL(glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, img->getWitdh(), img->getHeight(), 0, GL_RGBA, GL_UNSIGNED_BYTE, img->getImgData()));
            return true;
        }

    And I bind textures with this function:

        void CTexture::bind()
        {
            CHECKGL(glBindTexture(GL_TEXTURE_2D, m_textureID));
        }

    I draw the sprites with this method:

        void CSprite2D::render()
        {
            CHECKGL(glLoadIdentity());
            CHECKGL(glEnable(GL_TEXTURE_2D));
            CHECKGL(glEnable(GL_BLEND));
            CHECKGL(glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA));
            m_texture->bind();
            CHECKGL(glPushMatrix());
            CHECKGL(glBegin(GL_QUADS));
            CHECKGL(glTexCoord2f(m_textureAreaStart.s, m_textureAreaStart.t)); // 0,0 by default
            CHECKGL(glVertex3i(m_position.x, m_position.y, 0));
            CHECKGL(glTexCoord2f(m_textureAreaEnd.s, m_textureAreaStart.t));   // 1,0 by default
            CHECKGL(glVertex3i(m_position.x + m_dimensions.x, m_position.y, 0));
            CHECKGL(glTexCoord2f(m_textureAreaEnd.s, m_textureAreaEnd.t));     // 1,1 by default
            CHECKGL(glVertex3i(m_position.x + m_dimensions.x, m_position.y + m_dimensions.y, 0));
            CHECKGL(glTexCoord2f(m_textureAreaStart.s, m_textureAreaEnd.t));   // 0,1 by default
            CHECKGL(glVertex3i(m_position.x, m_position.y + m_dimensions.y, 0));
            CHECKGL(glPopMatrix());
            CHECKGL(glDisable(GL_BLEND));
        }

    Edit: here is the error-checking code as well:

        int CheckGLError(const char *GLcall, const char *file, int line)
        {
            GLenum errCode;
            // avoids infinite loop
            int errorCount = 0;
            while ( (errCode = glGetError()) != GL_NO_ERROR && ++errorCount < 3000) {
                utils::globalLogPtr log = utils::CGLogFactory::getLogInstance();
                const GLubyte *errString;
                errString = gluErrorString(errCode);
                std::stringstream ss;
                ss << "In " << __FILE__ << "(" << __LINE__ << ") " << "GL error with code: " << errCode
                   << " at file " << file << ", line " << line << " with message: " << errString << "\n";
                log->addMessage(ss.str(), ZEL_APPENDER_GL, utils::LOGLEVEL_ERROR);
            }
            return 0;
        }
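
    A side note on the render() method above: the glBegin(GL_QUADS) block is never closed with glEnd() before glPopMatrix(), and if CHECKGL calls glGetError, doing so between glBegin and glEnd is itself an error, which can mask the real problem. As a minimal sketch (hypothetical texture IDs and coordinates, immediate mode as in the question), switching textures per quad would look like:

        void drawTexturedQuad(GLuint textureID, int x, int y, int w, int h)
        {
            glBindTexture(GL_TEXTURE_2D, textureID); // switch the active texture first
            glBegin(GL_QUADS);                       // no glGetError() calls until glEnd()
            glTexCoord2f(0.0f, 0.0f); glVertex3i(x,     y,     0);
            glTexCoord2f(1.0f, 0.0f); glVertex3i(x + w, y,     0);
            glTexCoord2f(1.0f, 1.0f); glVertex3i(x + w, y + h, 0);
            glTexCoord2f(0.0f, 1.0f); glVertex3i(x,     y + h, 0);
            glEnd();                                 // close the primitive before touching other state
        }

        // Usage: each quad binds its own texture, so two quads get two textures.
        // drawTexturedQuad(firstTextureID,  0,  0, 64, 64);
        // drawTexturedQuad(secondTextureID, 80, 0, 64, 64);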

    Read the article

  • What techniques can I use to render very large numbers of objects more efficiently in OpenGL?

    - by Luke
    You can think of my application as drawing a very large ball-and-stick diagram (or graph). At times this graph can get so large that the number of elements even outnumbers the pixels on the screen. Currently I am simply passing all of my textures (as GL_POINTS) and lines to the graphics card using VBOs. When the number of elements outnumbers the number of pixels, is this the most efficient approach, or should I do some calculations on the CPU side before handing everything over to the GPU? If it matters, I use GL_DEPTH_TEST and GL_ALPHA_TEST, and I do some alpha blending, though probably not enough to make a huge performance difference. My scene can be static at times, but the user has control over a typical arc-ball camera and can pan, rotate, or zoom. It is during these operations that the performance degradation is noticeable.

    Read the article

  • Triangles in a C++ STL Vector as an Objective-C member sometimes draw incorrectly in OpenGL ES

    - by Rahil627
    The polygons draw correctly 80% of the time. When drawing fails, a vertex is dislocated, and the polygon is consistently drawn with the same wrong vertex. I checked that the vector is correct during initialization, even when the polygon is drawn wrongly. I'm using Cocos2d. The class member:

        @interface Polygon : CCSprite {
            std::vector<float> triangleVertices;
        }

    The draw function called in [Polygon draw]:

        + (void)drawTrianglesWithVertices:(const std::vector<float> &)v {
            //glEnableClientState(GL_VERTEX_ARRAY);
            glDisable(GL_TEXTURE_2D);
            glDisableClientState(GL_TEXTURE_COORD_ARRAY);
            glDisableClientState(GL_COLOR_ARRAY);
            glVertexPointer(2, GL_FLOAT, 0, &v[0]);
            glDrawArrays(GL_TRIANGLES, 0, v.size());
            //glDisableClientState(GL_VERTEX_ARRAY);
            glEnableClientState(GL_COLOR_ARRAY);
            glEnableClientState(GL_TEXTURE_COORD_ARRAY);
            glEnable(GL_TEXTURE_2D);
        }

    Any ideas?
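
    One thing stands out in the snippet above: glDrawArrays takes a count of vertices, not floats. With two floats per vertex, as declared by glVertexPointer(2, GL_FLOAT, ...), passing v.size() asks GL to read twice as many vertices as exist, so it walks off the end of the vector, which can plausibly produce exactly this kind of stray, dislocated vertex. A minimal corrected call:

        // glVertexPointer(2, ...) means 2 floats per vertex,
        // so the vertex count is half the float count.
        glDrawArrays(GL_TRIANGLES, 0, (GLsizei)(v.size() / 2));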

    Read the article

  • Understanding how to go from a scene to what's actually rendered to screen in OpenGL?

    - by Pris
    I want something that explains, step by step, how after setting up a simple scene I can go from that 'world' space to what's finally rendered on my screen (i.e., actually implement something). I need the resource to clearly show how to derive and set up both orthographic and perspective projection matrices... basically, I want to thoroughly understand what's going on behind the scenes rather than plug in random things without knowing what they do. I've found lots of half-explanations, presentation slides, walls of text, etc. that aren't really doing much for me. I have a basic understanding of linear algebra and matrix transforms, and a rough idea of what happens when you go from model space to the screen, but not enough to actually implement it in code.
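
    As one concrete piece of the puzzle, here is a minimal sketch (not from the question) of building an orthographic projection matrix the way glOrtho defines it: column-major, mapping the box [l,r] x [b,t] x [n,f] onto the [-1,1] clip cube:

        #include <cstring>

        // Fills m (column-major, as OpenGL expects) with the glOrtho matrix.
        void ortho(float m[16], float l, float r, float b, float t, float n, float f)
        {
            std::memset(m, 0, 16 * sizeof(float));
            m[0]  =  2.0f / (r - l);       // scale x into [-1, 1]
            m[5]  =  2.0f / (t - b);       // scale y into [-1, 1]
            m[10] = -2.0f / (f - n);       // scale z (negated: camera looks down -z)
            m[12] = -(r + l) / (r - l);    // translate x
            m[13] = -(t + b) / (t - b);    // translate y
            m[14] = -(f + n) / (f - n);    // translate z
            m[15] =  1.0f;
        }

    The perspective (glFrustum) matrix follows the same pattern but divides by eye-space -z via the bottom row; deriving both by hand from the clip-cube mapping is exactly the exercise the question asks about.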

    Read the article

  • How can I get the camera to follow a moving object from behind in C++ and OpenGL [closed]

    - by user1324894
    I am trying to get the camera to follow an object that moves around my environment, using the gluLookAt function. This is my code for the object moving in the direction it faces:

        Xtri += -Vtri*cos((90+heading)*(PI/180.0f));
        Ztri += Vtri*sin((90+heading)*(PI/180.0f));

    I then render the object:

        glPushMatrix();
        glTranslatef(Xtri,0,Ztri);
        glRotatef(heading,0,1,0);
        drawTriangle();
        glPopMatrix();

    heading is just a spin variable: pressing left or right spins the object in that direction, up moves it forward, and down moves it backwards, always in the direction it is facing. To make the camera follow, I am using gluLookAt like this:

        gluLookAt(Xtri,0,(Ztri+20),
                  Xtri,0,Ztri,
                  0,1,0);

    so that it follows the object from a distance. However, the object doesn't appear to move at all now: it can still rotate, but not move forwards or backwards, and when it spins the camera doesn't follow the spin; it just watches the object turn from a fixed position. Where am I going wrong?

    UPDATE: I have updated the gluLookAt call so it is now:

        gluLookAt((Xtri+Vtri),0,((Ztri+20)),
                  (Xtri+Vtri),0,(Ztri),
                  0,1,0);

    This seems to move the object around. I have a stationary terrain, so I can see that the object is now moving, and in the direction it is facing. However, I want the camera to follow the object when it spins as well, so it is always viewing the object from behind.
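
    A common chase-camera pattern (a sketch reusing the question's Xtri, Ztri, heading, and PI as given; followDist and the eye height are hypothetical) places the eye behind the object along its own facing direction, rather than at a fixed world-space offset, so the camera swings around as the object spins:

        // Forward vector matching the movement code above.
        float fx = -cos((90.0f + heading) * (PI / 180.0f));
        float fz =  sin((90.0f + heading) * (PI / 180.0f));

        float followDist = 20.0f;              // hypothetical distance behind the object
        float eyeX = Xtri - fx * followDist;   // step back along the facing direction
        float eyeZ = Ztri - fz * followDist;

        gluLookAt(eyeX, 5.0f, eyeZ,    // eye: behind (and a little above) the object
                  Xtri, 0.0f, Ztri,    // target: the object itself
                  0.0f, 1.0f, 0.0f);   // up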

    Read the article

  • Draw to offscreen renderbuffer in OpenGL ES (iPhone)

    - by David Ensminger
    I'm trying to create an offscreen render buffer in OpenGL ES on the iPhone. I've created the buffer like this:

        glGenFramebuffersOES(1, &offscreenFramebuffer);
        glBindFramebufferOES(GL_FRAMEBUFFER_OES, offscreenFramebuffer);
        glGenRenderbuffersOES(1, &offscreenRenderbuffer);
        glBindRenderbufferOES(GL_RENDERBUFFER_OES, offscreenRenderbuffer);
        glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES,
                                     GL_RENDERBUFFER_OES, offscreenRenderbuffer);

    But I'm confused about how to allocate the storage. Apple's documentation says to use the EAGLContext renderbufferStorage:fromDrawable: method, but that seems to work only for one render buffer (the main one being displayed). If I use the normal OpenGL function glRenderbufferStorageOES, then I can't seem to get it to display. Here's the code:

        // this is in the initialization section:
        glRenderbufferStorageOES(GL_RENDERBUFFER_OES, GL_RGB8_OES, backingWidth, backingHeight);

        // and this is when I'm trying to draw to it and display it:
        glBindFramebufferOES(GL_FRAMEBUFFER_OES, offscreenFramebuffer);
        GLfloat vc[] = {
             0.0f,   0.0f,   0.0f,
            10.0f,  10.0f,  10.0f,
             0.0f,   0.0f,   0.0f,
           -10.0f, -10.0f, -10.0f,
        };
        glLoadIdentity();
        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(3, GL_FLOAT, 0, vc);
        glDrawArrays(GL_LINES, 0, 4);
        glDisableClientState(GL_VERTEX_ARRAY);
        glBindRenderbufferOES(GL_RENDERBUFFER_OES, offscreenRenderbuffer);
        [context presentRenderbuffer:GL_RENDERBUFFER_OES];

    Done this way, nothing is displayed on the screen. However, if I switch the references to the offscreen buffers over to the buffers created with the renderbufferStorage method, it works fine. Any suggestions?
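
    Part of the answer may be that presentRenderbuffer: can only present a renderbuffer whose storage was allocated from a CAEAGLLayer via renderbufferStorage:fromDrawable:, so a purely offscreen renderbuffer cannot be shown directly. A common alternative (a sketch under that assumption, with hypothetical width/height, which must be powers of two on ES 1.x) is to attach a texture to the offscreen framebuffer, then draw that texture into the presentable framebuffer as ordinary geometry:

        // Attach a texture instead of a renderbuffer, so the offscreen result
        // can later be drawn into the presentable framebuffer.
        GLuint offscreenTexture;
        glGenTextures(1, &offscreenTexture);
        glBindTexture(GL_TEXTURE_2D, offscreenTexture);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL);   // allocate, no data
        glFramebufferTexture2DOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES,
                                  GL_TEXTURE_2D, offscreenTexture, 0);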

    Read the article

  • OpenGL behaving strangely

    - by Mk12
    OpenGL is acting very strangely for some reason. In my subclass of NSOpenGLView, I have the following code in -prepareOpenGL:

        - (void)prepareOpenGL {
            GLfloat lightAmbient[] = { 0.5f, 0.5f, 0.5f, 1.0f };
            GLfloat lightDiffuse[] = { 1.0f, 1.0f, 1.0f, 1.0f };
            GLfloat lightPosition[] = { 0.0f, 0.0f, 2.0f };
            quality = 0;
            zCoord = -6;
            [self loadTextures];
            glEnable(GL_LIGHTING);
            glEnable(GL_TEXTURE_2D);
            glShadeModel(GL_SMOOTH);
            glClearColor(0.2f, 0.2f, 0.2f, 0.0f);
            glClearDepth(1.0f);
            glEnable(GL_DEPTH_TEST);
            glDepthFunc(GL_LEQUAL);
            glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
            glLightfv(GL_LIGHT1, GL_AMBIENT, lightAmbient);
            glLightfv(GL_LIGHT1, GL_DIFFUSE, lightDiffuse);
            glLightfv(GL_LIGHT1, GL_POSITION, lightPosition);
            glEnable(GL_LIGHT1);
            gameState = kGameStateRunning;
            int i = 0; // HERE ********
            [NSTimer scheduledTimerWithTimeInterval:0.03f target:self selector:@selector(processKeys) userInfo:nil repeats:YES];
            // Synchronize buffer swaps with vertical refresh rate
            GLint swapInt = 1;
            [[self openGLContext] setValues:&swapInt forParameter:NSOpenGLCPSwapInterval];
            // Setup and start displayLink
            [self setupDisplayLink];
        }

    I wanted to assign the timer that processes key input to an ivar so that I could invalidate it when the game is paused (and re-create it on resume). However, when I did that (as opposed to leaving it at [NSTimer scheduledTimer…]), OpenGL doesn't display the cube I draw. When I take it away, it's fine. So I tried just adding a harmless statement, int i = 0; (marked // HERE ******** above), and that makes the lighting not work! When I take it away, everything is fine, but when I put it back, everything is dark. Can someone come up with a rational explanation for this? Thanks.
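
    A plausible explanation for this kind of "harmless statement changes the lighting" behavior: GL_POSITION expects four floats (x, y, z, w), but lightPosition above has only three, so glLightfv reads one float past the end of the array, i.e. whatever happens to sit next on the stack. Adding int i = 0; rearranges the stack frame and changes that garbage w value, flipping the light between effectively directional (w = 0) and positional (w != 0). A sketch of the fix:

        // GL_POSITION reads 4 floats; w = 1.0 makes a positional light,
        // w = 0.0 a directional one. Three floats leave w as stack garbage.
        GLfloat lightPosition[] = { 0.0f, 0.0f, 2.0f, 1.0f };
        glLightfv(GL_LIGHT1, GL_POSITION, lightPosition);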

    Read the article

  • Rotating an object in OpenGL ES for iPhone [translate to origin --> rotate --> translate back is not

    - by ayanonagon
    I recently started working with OpenGL ES for the iPhone, and I am having a bit of trouble with it. I want to be able to rotate an object with your fingers. My problem is that my object is placed at (0, 0, -3), and I would like to rotate it about its center. I know that I need to translate back to the origin, rotate, and then bring it back to the original place. I think I am facing a problem because I use a matrix to keep track of all of my rotations/translations/scaling etc., and I think it may be combining the operations in a way that order is not considered (so the two translations would cancel each other). I just started learning OpenGL a day ago and am a complete newbie, so my assumption may be wrong. Here is the part of drawView that I am having trouble with:

        GLfloat matrix[16];
        glGetFloatv(GL_MODELVIEW_MATRIX, matrix);
        glLoadIdentity();
        glTranslatef(0, 0, 3);                      // bring to origin
        glRotatef(self.angle, self.dy, self.dx, 0); // rotate
        glTranslatef(0, 0, -3);                     // put it back in place
        glMultMatrixf(matrix);                      // save the transformations performed

    Help would be much appreciated, thank you!

    Read the article

  • Drawing only part of a texture OpenGL ES iPhone

    - by Ben Reeves
    (Continued from my previous question.) I have a 320*480 RGB565 framebuffer which I wish to draw using OpenGL ES 1.0 on the iPhone.

        - (void)setupView {
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_CROP_RECT_OES, (int[4]){0, 0, 480, 320});
            glEnable(GL_TEXTURE_2D);
        }

        // Updates the OpenGL view when the timer fires
        - (void)drawView {
            // Make sure that you are drawing to the current context
            [EAGLContext setCurrentContext:context];

            // Get the 320*480 buffer
            const int8_t * frameBuf = [source getNextBuffer];

            // Create enough storage for a 512x512 power-of-2 texture
            int8_t lBuf[2*512*512];
            memcpy(lBuf, frameBuf, 320*480*2);

            // Upload the texture
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 512, 512, 0, GL_RGB, GL_UNSIGNED_SHORT_5_6_5, lBuf);

            // Draw it
            glDrawTexiOES(0, 0, 1, 480, 320);
            [context presentRenderbuffer:GL_RENDERBUFFER_OES];
        }

    If I produce the original texture at 512*512, the output is cropped incorrectly, but other than that it looks fine. However, using the required output size of 320*480, everything is distorted and messed up. I'm pretty sure it's the way I'm copying the framebuffer into the new 512*512 buffer. I have tried this routine:

        int8_t lBuf[512][512][2];
        const char * frameDataP = frameData;
        for (int ii = 0; ii < 480; ++ii) {
            memcpy(lBuf[ii], frameDataP, 320);
            frameDataP += 320;
        }

    which is better, but the width appears stretched and the height is messed up. Any help appreciated.
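
    RGB565 is two bytes per pixel, so each 320-pixel source row is 640 bytes and each 512-texel destination row is 1024 bytes; the loop above copies only 320 bytes (half a row) and advances the source pointer by 320 bytes as well. A sketch of the row-by-row copy with the byte counts written out (assuming, as in the question, a tightly packed 320*480 RGB565 source in frameData):

        const int srcW = 320, srcH = 480;
        const int bytesPerPixel = 2;           // RGB565
        int8_t lBuf[512][512][2];              // 512*512 power-of-two texture

        const char *srcRow = frameData;
        for (int y = 0; y < srcH; ++y) {
            // Copy one full 320-pixel row (640 bytes) into a 1024-byte texture row;
            // the remaining 192 texels of each row are unused padding.
            memcpy(lBuf[y], srcRow, srcW * bytesPerPixel);
            srcRow += srcW * bytesPerPixel;    // advance by a whole source row
        }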

    Read the article

  • Creating spotlight in OpenGL scene

    - by Victor Oliveira
    I'm studying OpenGL and trying to create a spotlight in my application. The code that I'm using for my vertex shader is below:

        #:vertex-shader #{
            #version 150 core

            in vec3 in_pos;
            in vec2 in_tc;
            out vec2 tc;

            glLightf(GL_LIGHT0, GL_SPOT_CUTOFF, 20.0f);
            GLfloat spot_direction[] = { -1.0, -1.0, 0.0 };
            glLightfv(GL_LIGHT0, GL_SPOT_DIRECTION, spot_direction);
            glEnable(GL_LIGHT0);

            void main() {
                vec4 pos = vec4(vec3(1.0)*in_pos - vec3(1.0), 1.0);
                pos.z = 0.0;
                gl_Position = pos;
                tc = in_tc;
            }
        }

    The thing is, every time I try to run the code I get an error that says:

        Type: other, Source: api, ID: 131169, Severity: low
        Message: Framebuffer detailed info: The driver allocated storage for renderbuffer 1.
        len = 157, written = 0
        failed to compile vertex shader of deferred: directional
        info log for shader deferred: directional vertex info log for shader deferred: directional:
        ERROR: Unbound variable: when

    Specifications:

        Renderer: GeForce GTX 580/PCIe/SSE2
        Version: 3.3.0 NVIDIA 319.17
        GLSL: 3.30 NVIDIA via Cg compiler
        Status: Using GLEW 1.9.0
        1024 x 768
        OS: Linux debian

    I guess creating this spotlight should be pretty simple, but since I'm really new to OpenGL I don't have a clue how to do it, even after reading sources like http://www.glprogramming.com/red/chapter05.html#name3. I've also read that light spots can get really hard to understand, but I can't avoid this step right now since I'm following my lecture schedule. Could anybody help me?
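
    One thing worth separating out: glLightf, glLightfv, and glEnable(GL_LIGHT0) are fixed-function C API calls and cannot appear inside GLSL shader source; the compiler rejects them, and in a #version 150 core context the GL_LIGHT* state does not exist at all. The red book spotlight setup the question links to belongs in the application code under the fixed-function (compatibility) pipeline, roughly like this sketch (position and exponent values are hypothetical):

        // Fixed-function spotlight setup: runs in C/C++, not in a shader.
        GLfloat lightPos[]      = { 0.0f, 2.0f, 0.0f, 1.0f };  // w = 1: positional
        GLfloat spotDirection[] = { -1.0f, -1.0f, 0.0f };

        glEnable(GL_LIGHTING);
        glEnable(GL_LIGHT0);
        glLightfv(GL_LIGHT0, GL_POSITION, lightPos);
        glLightf (GL_LIGHT0, GL_SPOT_CUTOFF, 20.0f);       // half-angle of the cone
        glLightfv(GL_LIGHT0, GL_SPOT_DIRECTION, spotDirection);
        glLightf (GL_LIGHT0, GL_SPOT_EXPONENT, 2.0f);      // falloff inside the cone

    Under a core profile, the equivalent effect is computed in the fragment shader from uniforms (cone cutoff, direction, position) instead.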

    Read the article

  • Custom view transition in OpenGL ES

    - by melfar
    I'm trying to create a custom transition, to serve as a replacement for a default transition you would get here, for example:

        [self.navigationController pushViewController:someController animated:YES];

    I have prepared an OpenGL-based view that performs an effect on some static texture mapped to a plane (let's say it's a copy of the flip effect in Core Animation). What I don't know how to do is:

        1. grab the current view's content and make a texture out of it (I remember seeing a function that does just that, but can't find it)
        2. do the same for the view that is currently offscreen and is going to replace the current view
        3. find APIs I can hook into to make my transition class as native as possible (make it a kind of Core Animation effect)

    Any thoughts or links are greatly appreciated!

    UPDATE: Jeffrey Forbes's answer works great as a solution to capture the content of a view. What I haven't figured out yet is how to capture the content of the view I want to transition to, which should be invisible until the transition is done. Also, which method should I use to present the OpenGL view? For demonstration purposes I used pushViewController. That affects the navbar, though, which I actually want to go one item back, with animation; check this vid for an explanation: http://vimeo.com/4649397. Another option would be to go with presentViewController, but that shows fullscreen. Do you think maybe creating another window (or view?) could be useful?

    Read the article

  • My 3D object (opengl es) is disappearing behind the iPhone camera view

    - by KLC
    I have an augmented reality iPhone app that I am converting from Core Animation to OpenGL ES 1.1. I have added code modified from the Apple OpenGL template. My problem is that my 3D object, when translating along the negative Z-axis (away from the user), appears to disappear into the camera view until it's completely gone. I have experimented with several solutions, but to no avail. What I have determined: using the 3D icosahedron from Jeff Lamarche's blog here, the object starts at 0,0,0 and then translates with decreasing z coordinates. By the time the z value reaches -2.0f, the object is gone. It appears as if it is disappearing behind the camera view. This is how I set my frustum & viewport (unchanged from Apple's code):

        glMatrixMode(GL_PROJECTION);
        size = zNear * tanf(DEGREES_TO_RADIANS(fieldOfView) / 2.0);
        // Grab the size of the screen
        CGRect rect = self.bounds;
        glFrustumf(-size, size,
                   -size / (rect.size.width / rect.size.height),
                    size / (rect.size.width / rect.size.height),
                   zNear, zFar);
        glViewport(0, 0, rect.size.width, rect.size.height);
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();

    What I have tried: the camera view is the main view, and several other views are added to it as subviews, including the openGLView. I have commented those views out for test purposes. I have applied CATransforms to move the openGLView in the z direction -500 and +500, and done the same to the camera view. I have also changed zFar in the above code to 1.0f, and it still disappears at a z position of -2.0, which doesn't make sense (shouldn't it disappear at z = 1.0?). My experimentation has got me more confused than when I started (which usually means I am missing a key piece, but I can't figure out what). Thanks for your help.

    Read the article

  • Porting WebGL game to iPhone's native OpenGL?

    - by ArtPulse
    We are developing a web game that uses WebGL for the two biggest parts of it. Working with HTML/CSS was too slow and too limited, so it's off the table. Thing is, iOS does not support WebGL publicly just yet, only in iAd. My guess is that Apple will eventually support it once the security issues it and Microsoft claim it has are fixed and it looks stable enough. Problem is, if Apple does not do this by the release of the next major iOS version, then we will have on our hands a mobile WebGL game that does not run: 6 months of development and testing gone to waste. So, questions: If that were the case, how viable (in terms of time) would it be to port the WebGL part of the game to native iPhone OpenGL? I'm afraid that the port would take longer than the development of the game itself. I also saw posts on Stack Overflow (like this one) suggesting, on Android, adding the OpenGL interface manually to a WebKit element. It'd be slower than native, but either way: is this something that could be accepted in the App Store? Apple is very restrictive with this kind of stuff... Thank you all for your time!

    Read the article

  • OpenGL Tearing Problem

    - by kaykun
    Hi, I'm using Win32 and OpenGL, and I have a window set up with the projection set via glOrtho to the window's coordinates. I have double buffering enabled (tested it with glGet as well). My program always seems to tear any primitives I draw if they are being constantly translated. Here is my OpenGL initialization function:

        glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
        glViewport(0, 0, 640, 480);
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrtho(0, 640, 0, 480, 0, 100);
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        glDrawBuffer(GL_BACK);
        glLoadIdentity();

    And this is my rendering function (gMouseX and gMouseY are the coordinates of the mouse):

        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glTranslatef(gMouseX, gMouseY, 0.0f);
        glColor3f(0.5f, 0.5f, 0.5f);
        glBegin(GL_TRIANGLES);
            glVertex2f(0.0f, 128.0f);
            glVertex2f(128.0f, 0.0f);
            glVertex2f(0.0f, 0.0f);
        glEnd();
        SwapBuffers(hDC);

    The same tearing problem occurs regardless of how often the rendering function runs. Is there something I'm doing wrong or missing here? Thanks for any help.
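
    Two things worth checking, as a sketch rather than a definitive diagnosis. First, tearing with a double-buffered window usually means swaps aren't synchronized to the vertical refresh; on Windows that is controlled by the WGL_EXT_swap_control extension. Second, the render function above calls glTranslatef without resetting the modelview matrix, so translations accumulate frame over frame:

        // Enable vsync via WGL_EXT_swap_control (a real program should check
        // the extension string first; this sketch assumes it is available).
        typedef BOOL (WINAPI *PFNWGLSWAPINTERVALEXTPROC)(int);
        PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT =
            (PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");
        if (wglSwapIntervalEXT)
            wglSwapIntervalEXT(1);   // swap at most once per vertical refresh

        // And in the render function, make the translation absolute:
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glLoadIdentity();
        glTranslatef(gMouseX, gMouseY, 0.0f);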

    Read the article

  • Using GCC (MinGW) to compile OpenGL on Windows

    - by Casey
    I've searched on Google and haven't been able to come up with a solution. I would like to compile some OpenGL programs using GCC. In the GL folder in GCC I have the following headers: gl.h, glext.h, glu.h. Then in my system32 folder I have the following DLLs: opengl32.dll, glu32.dll, glut32.dll. If I wanted to write a simple OpenGL "Hello World" and link and compile with GCC, what is the correct process? I'm attempting to use this code:

        #include <GL/gl.h>
        #include <GL/glut.h>

        void display() {
            glClear(GL_COLOR_BUFFER_BIT);
            glFlush();
        }

        int main(int argc, char **argv) {
            glutInit(&argc, argv);
            glutInitWindowSize(512, 512);
            glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
            glutCreateWindow("The glut hello world program");
            glutDisplayFunc(display);
            glClearColor(0.0, 0.0, 0.0, 1.0);
            glutMainLoop(); // Infinite event loop
            return 0;
        }

    Thank you in advance for the help.
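
    For reference, the classic MinGW invocation is sketched below. The import libraries (libopengl32.a, libglu32.a) ship with MinGW, while glut32 needs the GLUT development package installed separately; with GCC the link order matters, so the libraries come after the source file, dependents first:

        gcc hello.c -o hello.exe -lglut32 -lglu32 -lopengl32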

    Read the article

  • OpenGL Colour Interpolation

    - by Will-of-fortune
    I'm currently working on a little project in C++ and OpenGL, and am trying to implement a colour selection tool similar to the one in Photoshop, as below. However, I am having trouble with the interpolation of the large square. Working on my desktop computer with an 8800 GTS, the result was similar but the blending wasn't as smooth. This is the code I am using:

        GLfloat swatch[] = { 0,0,0,  1,1,1,  mR,mG,mB,  0,0,0 };
        GLint swatchVert[] = { 400,700, 400,500, 600,500, 600,700 };
        glVertexPointer(2, GL_INT, 0, swatchVert);
        glColorPointer(3, GL_FLOAT, 0, swatch);
        glDrawArrays(GL_QUADS, 0, 4);

    Moving onto my laptop with Intel Graphics HD 3000, the result was even worse, with no change in code. I thought it was OpenGL splitting the quad into two triangles, so I tried rendering with triangles and interpolating the colour in the middle of the square myself, but it still doesn't quite match the result I was hoping for. Any help would be appreciated. Thanks.
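
    The quad-to-triangles suspicion is well founded: hardware interpolates colours per triangle, so a four-colour quad develops a visible seam along its diagonal. A common workaround is to subdivide the swatch into a grid and evaluate the bilinear blend of the four corner colours on the CPU at each grid vertex, so every triangle covers a region where the gradient is nearly linear. A sketch of the blend (a hypothetical helper, not from the question):

        // Bilinear blend of four corner colours (float[3] each) at (u, v) in [0,1]^2.
        // c00 = bottom-left, c10 = bottom-right, c01 = top-left, c11 = top-right.
        void bilerp(const float* c00, const float* c10,
                    const float* c01, const float* c11,
                    float u, float v, float* out)
        {
            for (int i = 0; i < 3; ++i)
                out[i] = (1 - u) * (1 - v) * c00[i] + u * (1 - v) * c10[i]
                       + (1 - u) * v * c01[i]       + u * v * c11[i];
        }

    Drawing, say, a 16x16 grid of quads whose vertex colours come from bilerp gives a smooth field with no diagonal seam, largely independent of how the driver splits quads.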

    Read the article

  • Enumerating pixel formats for adaptors and modes with OpenGL

    - by Robinson
    I'm trying to code an OpenGL path for my 3D engine. The D3D path enumerates all device adaptors, all modes (by mode I mean bit depth, dimensions, windowed availability, and refresh rate) for each adaptor, and then all pixel formats available for the given mode and adaptor, alongside certain useful caps (shader version, filter types, etc.). So, broadly, I have the following protected functions in the class:

        // Enumerate all back/front buffer combinations.
        virtual void EnumerateBackFrontBufferCombinations(CComPtr<IDirect3D9>& d3d9);

        // Enumerate all depth/stencil formats.
        virtual void EnumerateDepthStencilFormats(CComPtr<IDirect3D9>& d3d9);

        // Enumerate all multi-sample formats.
        virtual void EnumerateMultiSampleTypes(CComPtr<IDirect3D9>& d3d9);

        // Enumerate all device formats, i.e. dynamic, static, render target, etc.
        virtual void EnumerateMapFormats(CComPtr<IDirect3D9>& d3d9);

        // Enumerate all capabilities.
        virtual void EnumerateCapabilities(CComPtr<IDirect3D9>& d3d9);

    The adaptors are enumerated with EnumDisplayDevices, and the modes (resolutions and refresh rates) with EnumDisplaySettings, so that part can be done for either GL or D3D. The other functions I'm not so sure about with OpenGL. What are the equivalents of IDirect3D9's CheckDeviceType, CheckDeviceFormat, CheckDeviceMultiSampleType, and CheckDepthStencilMatch? I know I can use DescribePixelFormat given a DC, but you kind of need to have created the window before you can use a DC with it, yet you can't create the window correctly until you know what formats you're going to use. Any tips you can give me? Thanks.
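
    One way out of the chicken-and-egg problem: the DC used for enumeration doesn't have to come from the final window; the screen DC (or a throwaway dummy window) is enough to walk the formats. A sketch using DescribePixelFormat, whose return value is the number of pixel formats the driver exposes:

        #include <windows.h>

        HDC hdc = GetDC(NULL);                        // screen DC; a dummy window works too
        PIXELFORMATDESCRIPTOR pfd = { sizeof(pfd) };
        int count = DescribePixelFormat(hdc, 1, sizeof(pfd), &pfd);
        for (int i = 1; i <= count; ++i) {
            DescribePixelFormat(hdc, i, sizeof(pfd), &pfd);
            // Inspect pfd.dwFlags (PFD_DOUBLEBUFFER, PFD_DRAW_TO_WINDOW, ...),
            // pfd.cColorBits, pfd.cDepthBits, pfd.cStencilBits, etc.
        }
        ReleaseDC(NULL, hdc);

    Multisample counts and other modern attributes have no PIXELFORMATDESCRIPTOR field; those are queried through WGL_ARB_pixel_format's wglGetPixelFormatAttribivARB, which itself needs a temporary context first, hence the common dummy-window-then-real-window pattern.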

    Read the article

  • Rapid taps on an OpenGL ES app introducing input delay

    - by Tim R.
    I am starting out writing a 2D game in OpenGL ES, and I have encountered an odd problem: if I rapidly tap the touchscreen, the input starts lagging behind the display. The more times I tap, the more delay there is between the input and any indication of that input onscreen. It only happens if I intentionally tap very rapidly, not from tapping and dragging with any number of fingers. What could be causing this? Excessive details follow: both accelerometer input and taps are delayed by just tapping. The only events I am responding to are touchesBegan (below) in my EAGLView and accelerometer:(UIAccelerometer *)accelerometer didAccelerate:(UIAcceleration *)acceleration in my Game object. There doesn't seem to be any upper limit to the amount of delay: I've gotten up to 12 seconds of delay by tapping rapidly with five fingers. I have not seen any drops in framerate (it stays constant at 60 fps), either in the OpenGL ES tool in Instruments or by taking 1/(time between updates). Possibly relevant code:

        - (void) drawView:(id) sender {
            [game update:allTouches];
            [renderer render:game];
        }

        - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
            allTouches = [event allTouches];
        }

    allTouches is a pointer that gets passed to my Game every update, which passes it to each GameObject in their update methods.

    Read the article

  • How do you use glFrustum in OpenGL ES1 on iPhone

    - by Paul
    So I am using Xcode 3.2.1 and am trying to make an iPhone OpenGL ES1 project. The default template for an OpenGL project is OK, but I have been trying to split the code up so not everything is done per frame in the drawView() call. I have a separate setupRC method that sets the lighting, turns on depth test, turns on culling, and sets the clear color. This is called on the init of the EAGLView, and it works just fine. I have taken the glViewport() and glFrustumf() calls and put them at the end of the resizeFromLayer() method in the ES1Renderer.m file. This gets hit when the app starts and when the app gets resized, as it should. Now the problem is the frustum's far plane seems to be messed up, in that all my objects get cut/clipped off. I tried adjusting the camera position and angle, and still all objects are cut/clipped. I increased the far value from 1000.0f to 30000.0f and still get the same result. What is crazy is that if I make both the glViewport() and glFrustumf() calls in drawView() every frame, everything looks right: nothing is clipped and it looks the way I want. From everything I've been reading, the frustum and viewport calls only need to be made when the window gets created and when it resizes, but if I don't call them every frame in my project it doesn't work. Any ideas? Thanks in advance.
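
    A detail that often explains exactly this symptom (sketched below as the usual ES 1.x resize handler, not a confirmed fix for this project): glFrustumf multiplies the current matrix on whichever stack is active, so if the matrix mode isn't explicitly set to GL_PROJECTION and reset first, the frustum either lands on the modelview stack or accumulates across resizes, and re-issuing the calls every frame just papers over that:

        // Typical on-resize projection setup for OpenGL ES 1.x
        // (backingWidth/backingHeight as in Apple's template).
        glViewport(0, 0, backingWidth, backingHeight);

        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();                      // don't accumulate onto an old frustum
        glFrustumf(-1.0f, 1.0f,
                   -(GLfloat)backingHeight / backingWidth,
                    (GLfloat)backingHeight / backingWidth,
                    1.0f, 1000.0f);            // zNear must be > 0
        glMatrixMode(GL_MODELVIEW);            // leave modelview active for drawing
        glLoadIdentity();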

    Read the article

  • setting the openCV configuration in an openGL project produces several errors

    - by GolSa
    I have a Win32 solution which is set up for OpenGL, and it works well; but I want to write a function which uses openCV functions. I set up the openCV configuration for both x86 and x64. I commented out the openCV function calls and, just to test the correctness of the configuration, ran it. On x64 I am faced with the errors below:

        Error 1: error C2065: 'GWL_HINSTANCE' : undeclared identifier
            D:\matrix\matrixProjection\src\ControllerMain.cpp, line 35
        Error 2: error C2664: 'CreateDialogParamW' : cannot convert parameter 4 from
            'BOOL (__cdecl *)(HWND,UINT,WPARAM,LPARAM)' to 'DLGPROC'
            D:\matrix\matrixProjection\src\DialogWindow.cpp, line 47

    Error 2 points to this line of code:

        HWND DialogWindow::create()
        {
            /*-->this line*/ handle = ::CreateDialogParam(instance, MAKEINTRESOURCE(id),
                                 parentHandle, Win::dialogProcedure, (LPARAM)controller);
            return handle;
        }

    In the Debug Win32 configuration, however, it runs. I use opengl32 in my project; could that be the cause? Is there an x64 version of OpenGL? I know there is something needed in x64 mode that my solution cannot handle, but I googled a lot about it and did not find any solution. How can I solve this?
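
    Neither error is OpenGL-related; both are 64-bit Win32 API issues. GWL_HINSTANCE only exists for the legacy 32-bit GetWindowLong/SetWindowLong API and is not defined when building for x64, where the pointer-sized GetWindowLongPtr with GWLP_HINSTANCE must be used; and a dialog procedure must match the DLGPROC signature exactly (INT_PTR CALLBACK, not BOOL __cdecl). A sketch of both fixes (hwnd here is a hypothetical window handle):

        #include <windows.h>

        // 64-bit safe (and still correct on 32-bit):
        HINSTANCE instance = (HINSTANCE)GetWindowLongPtr(hwnd, GWLP_HINSTANCE);

        // A dialog procedure must match DLGPROC: INT_PTR CALLBACK.
        INT_PTR CALLBACK dialogProcedure(HWND hDlg, UINT msg, WPARAM wParam, LPARAM lParam)
        {
            return FALSE;   // FALSE = message not handled (minimal stub)
        }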

    Read the article

  • OpenGL Mipmapping: how does OpenGL decide on map level?

    - by Droozle
    Hi, I am having trouble implementing mipmapping in OpenGL. I am using OpenFrameworks and have modified the ofTexture class to support the creation and rendering of mipmaps. The following code is the original texture creation code from the class (slightly modified for clarity):

        glEnable(texData.textureTarget);
        glBindTexture(texData.textureTarget, (GLuint)texData.textureID);
        glTexSubImage2D(texData.textureTarget, 0, 0, 0, w, h, texData.glType, texData.pixelType, data);
        glTexParameteri(texData.textureTarget, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(texData.textureTarget, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glDisable(texData.textureTarget);

    This is my version with mipmap support:

        glEnable(texData.textureTarget);
        glBindTexture(texData.textureTarget, (GLuint)texData.textureID);
        gluBuild2DMipmaps(texData.textureTarget, texData.glTypeInternal, w, h, texData.glType, texData.pixelType, data);
        glTexParameteri(texData.textureTarget, GL_TEXTURE_MAG_FILTER, GL_LINEAR_MIPMAP_LINEAR);
        glTexParameteri(texData.textureTarget, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
        glDisable(texData.textureTarget);

    The code does not generate errors (gluBuild2DMipmaps returns 0) and the textures are rendered without problems. However, I do not see any difference. The scene I render consists of "flat, square tiles" at z=0; it's basically a 2D scene. I zoom in and out by using glScale() before drawing the tiles. When I zoom out, the pixels of the tile textures start to "dance", indicating (as far as I can tell) unfiltered texture look-up. See: http://www.youtube.com/watch?v=b_As2Np3m8A at 25s. My question is: since I do not move the camera position, but only scale the whole scene, does this mean OpenGL cannot decide on the appropriate mipmap level and uses the full texture size (level 0)? Paul
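
    Two facts may help. First, the mipmap level is chosen from the screen-space derivatives of the texture coordinates, so shrinking the tiles with glScale does drive minification: no camera movement is needed. Second, GL_LINEAR_MIPMAP_LINEAR is not a legal value for GL_TEXTURE_MAG_FILTER (magnification never uses mipmaps); that call raises GL_INVALID_ENUM and the parameter stays unset. A sketch of the intended parameters:

        // MAG filter accepts only GL_NEAREST or GL_LINEAR.
        glTexParameteri(texData.textureTarget, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        // Mipmapped (trilinear) filtering belongs on the MIN filter.
        glTexParameteri(texData.textureTarget, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);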

    Read the article

  • Mysterious Flickering Visual Artifact

    - by Axis
    A flashing bar of red appears at the top of the EAGLView that I have added as a subview in my iPhone app. It flickers on and off (i.e., one frame it's there, the next frame it's not, the next frame it's there again). I have removed a lot of code from my app until I'm essentially left with the stock OpenGL-ES project and a few changes: The glview is not fullscreen; it's a subview. I enabled the depth buffer. I'm not even trying to draw anything. If the glview is fullscreen, or if I disable the depth buffer, then there is no flicker and it works fine. But needless to say, this is a 3D view and I'd like to be able to display it within a larger UIKit view. I'm not sure what code would be useful to post, but here's how I add the glview to my main view: appDelegate.glView.frame = CGRectMake(245, 65, 215, 215); [self.view addSubview:appDelegate.glView]; [appDelegate.glView startAnimation]; Here's my render function: - (void) render { [EAGLContext setCurrentContext:context]; glBindFramebufferOES(GL_FRAMEBUFFER_OES, defaultFramebuffer); glViewport(0, 0, backingWidth, backingHeight); glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glBindRenderbufferOES(GL_RENDERBUFFER_OES, colorRenderbuffer); [context presentRenderbuffer:GL_RENDERBUFFER_OES]; } It seems pretty obvious to me that the problem lies with the depth buffer somehow, but I'm not sure why. Also, it works fine in the simulator, but not on my iphone. I'm using iPhone OS 3.1. Any ideas on where to look for a problem?

    Read the article

  • Drawing a texture with an alpha channel doesn't work -- draws black

    - by DevDevDev
    I am modifying GLPaint to use a different background; in this case it is white. The existing stamp they use assumes the background is black, so I made a new brush image with an alpha channel. When I draw on the canvas it is still black. What gives? When I actually draw, I just bind the texture and it works, so something is wrong in this initialization. Here is the code:

        - (id)initWithCoder:(NSCoder*)coder {
            CGImageRef brushImage;
            CGContextRef brushContext;
            GLubyte *brushData;
            size_t width, height;

            if (self = [super initWithCoder:coder]) {
                CAEAGLLayer *eaglLayer = (CAEAGLLayer *)self.layer;
                eaglLayer.opaque = YES;
                // In this application, we want to retain the EAGLDrawable contents after a call to presentRenderbuffer.
                eaglLayer.drawableProperties = [NSDictionary dictionaryWithObjectsAndKeys:
                    [NSNumber numberWithBool:YES], kEAGLDrawablePropertyRetainedBacking,
                    kEAGLColorFormatRGBA8, kEAGLDrawablePropertyColorFormat, nil];
                context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES1];
                if (!context || ![EAGLContext setCurrentContext:context]) {
                    [self release];
                    return nil;
                }

                // Create a texture from an image.
                // First create a UIImage object from the data in an image file, then extract the Core Graphics image.
                brushImage = [UIImage imageNamed:@"test.png"].CGImage;
                // Get the width and height of the image
                width = CGImageGetWidth(brushImage);
                height = CGImageGetHeight(brushImage);
                // Texture dimensions must be a power of 2. If you write an application that allows users to supply an image,
                // you'll want to add code that checks the dimensions and takes appropriate action if they are not a power of 2.
                // Make sure the image exists
                if (brushImage) {
                    brushData = (GLubyte *) calloc(width * height * 4, sizeof(GLubyte));
                    brushContext = CGBitmapContextCreate(brushData, width, width, 8, width * 4,
                        CGImageGetColorSpace(brushImage), kCGImageAlphaPremultipliedLast);
                    CGContextDrawImage(brushContext, CGRectMake(0.0, 0.0, (CGFloat)width, (CGFloat)height), brushImage);
                    CGContextRelease(brushContext);
                    glGenTextures(1, &brushTexture);
                    glBindTexture(GL_TEXTURE_2D, brushTexture);
                    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
                    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, brushData);
                    free(brushData);
                }

                // Set up OpenGL states
                glMatrixMode(GL_PROJECTION);
                CGRect frame = self.bounds;
                glOrthof(0, frame.size.width, 0, frame.size.height, -1, 1);
                glViewport(0, 0, frame.size.width, frame.size.height);
                glMatrixMode(GL_MODELVIEW);
                glDisable(GL_DITHER);
                glEnable(GL_TEXTURE_2D);
                glEnable(GL_BLEND);
                glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_DST_ALPHA);
                glEnable(GL_POINT_SPRITE_OES);
                glTexEnvf(GL_POINT_SPRITE_OES, GL_COORD_REPLACE_OES, GL_TRUE);
                glPointSize(width / kBrushScale);
            }
            return self;
        }
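
    One state worth a second look (a suggestion, not a confirmed fix): the bitmap context is created with kCGImageAlphaPremultipliedLast, meaning the RGB channels are already multiplied by alpha, yet the blend function above re-multiplies by source alpha and blends against destination alpha. For premultiplied sources the conventional "over" operator is:

        // Premultiplied alpha: the colour is already scaled by alpha,
        // so weight the source by 1 and the destination by (1 - src alpha).
        glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);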

    Read the article

  • drawing thick, textured lines in OpenGL

    - by NateS
    I need to draw thick textured line segments in OpenGL; actually, I need curves made out of short line segments. Here is what I have: in the upper left is an example of two connected line segments. The second image shows that once the lines are given width, they overlap. If I apply a texture that uses translucency, the overlap looks terrible. The third image shows both lines shortened by half the amount necessary to make the thick line corners just touch. This way I can fill the space between the lines with a triangle. On the right you can see this works well (ignore the horizontal line where the crappy texture repeats). But it doesn't always work well. In the bottom left the curve is made of many short line segments. Note the incorrect texture application. My program is written in Java, making use of the LWJGL OpenGL binding (and minor use of Slick, a 2D helper framework). I've made a zip file that contains an executable JAR so you can easily see the problem. It also has the Java code (there is only one source file) and an Eclipse project, so you can instantly run it through Eclipse and hack at it if you like. Here she is: http://n4te.com/temp/lines.zip To run, execute "java -jar lines.jar". You may need "-Djava.library.path=." before -jar if you are not on Windows. Press space to toggle texture/wireframe. The wireframe only shows the line segments; the triangle between them isn't drawn. I don't need to draw arbitrary lines, just bezier curves similar to what you see in the program. Sorry the code is a bit messy; once I have a solution I will refactor. I have investigated using GLUtessellator. It greatly simplified construction of the line, but I found that applying the texture wasn't perfect. It worked most of the time (top image below), but long vertical curves would have severe texture distortion (bottom image below). That approach turned out to be much easier to code, but in the end worse than my own. I believe what I'm trying to do is called "line tessellation" or "stroke tessellation". I assume this has been solved already? Is there standard code I can leverage? Otherwise, how can I fix my code so that the texture does not freak out on short, vertical curves?
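
    For what it's worth, the shorten-and-fill approach approximates a proper miter join. The standard construction (a sketch in C++ for brevity, 2D, assuming unit segment normals n1 and n2) computes one shared vertex per side of the joint, so adjacent segments share edges and the texture's length coordinate can accumulate with arc length instead of restarting per segment:

        #include <cmath>

        struct Vec2 { float x, y; };

        // Offset from the joint point to the stroke edge, shared by both segments.
        // n1, n2: unit normals of the incoming/outgoing segments; halfWidth: half thickness.
        Vec2 miterOffset(Vec2 n1, Vec2 n2, float halfWidth)
        {
            Vec2 m = { n1.x + n2.x, n1.y + n2.y };
            float len = std::sqrt(m.x * m.x + m.y * m.y);
            m.x /= len; m.y /= len;                              // unit miter direction
            float scale = halfWidth / (m.x * n1.x + m.y * n1.y); // grows as the turn sharpens
            return Vec2{ m.x * scale, m.y * scale };             // clamp in real code for near-reversing turns
        }

    Each joint then contributes the two vertices (point + offset) and (point - offset), so neighbouring quads meet exactly and the texture no longer distorts where many short segments pile up.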

    Read the article

  • Pyglet OpenGL drawing anti-aliasing

    - by Jared Forsyth
    I've been looking around for a way to anti-alias lines in OpenGL, but none of them seem to work... here's some example code:

        import pyglet
        from pyglet.gl import *

        window = pyglet.window.Window(resizable=True)

        @window.event
        def on_draw():
            window.clear()
            pyglet.gl.glColor4f(1.0, 0, 0, 1.0)
            glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)
            glEnable(GL_BLEND)
            glEnable(GL_LINE_SMOOTH)
            glHint(GL_LINE_SMOOTH_HINT, GL_DONT_CARE)
            glLineWidth(3)
            pyglet.graphics.draw(2, pyglet.gl.GL_LINES,
                ('v2i', (10, 15, 300, 305))
            )

        pyglet.app.run()

    Can anyone see what I am doing wrong?

    Read the article
