Search Results

Search found 3627 results on 146 pages for 'opengl es 2 0'.

Page 25/146

  • How to merge two FBOs?

    - by DevDevDev
    OK, so I have four buffers: three FBOs and a render buffer. Let me explain. I have a view FBO, which will store the scene before I render it to the render buffer. I have a background buffer, which contains the background of the scene. I have a user buffer, which the user manipulates. When the user makes some action I draw to the user buffer, using some blending. Then, to redraw the whole scene, what I want to do is clear the view buffer, draw the background buffer to the view buffer, change the blending, draw the user buffer to the view buffer, and finally render the view buffer to the render buffer. However, I can't figure out how to draw one FBO into another. What I want to do is essentially merge and blend two FBOs, but I can't figure out how! I'm very new to OpenGL ES, so thanks for all the help.
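
    The usual answer: an FBO is not itself drawable; you draw the colour texture attached to one FBO as a textured quad while the other FBO is bound. A minimal sketch of that compose pass, assuming each FBO was created with a colour texture attachment and that quadProgram and drawFullScreenQuad() are hypothetical helpers (a trivial textured-quad shader and a two-triangle clip-space quad):

        // Compose the background and user layers into the view FBO.
        // Assumes viewFbo, backgroundTex, userTex (GLuint) already exist.
        glBindFramebuffer(GL_FRAMEBUFFER, viewFbo);
        glViewport(0, 0, fboWidth, fboHeight);
        glClear(GL_COLOR_BUFFER_BIT);

        glUseProgram(quadProgram);            // samples one texture, no transforms

        glDisable(GL_BLEND);                  // background replaces the cleared colour
        glBindTexture(GL_TEXTURE_2D, backgroundTex);
        drawFullScreenQuad();

        glEnable(GL_BLEND);                   // user layer blends over the background
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        glBindTexture(GL_TEXTURE_2D, userTex);
        drawFullScreenQuad();

        glBindFramebuffer(GL_FRAMEBUFFER, 0); // back to the default framebuffer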

    Read the article

  • The Khronos Group publishes the OpenGL 3.3 and 4.0 specifications

    The Khronos Group has published the OpenGL 3.3 and 4.0 specifications. Just two years after the release of OpenGL 3.x, the Khronos Group has given us, on the same day, the specifications of two new OpenGL versions: 3.3 and 4.0. In these new versions the Core/Compatibility split remains, and, as a novelty for GLSL, shading-language versions are now numbered after the API version they ship with. A more optimised version 4.0 is also promised, less dependent on the CPU, notably regarding tessellation... Not being familiar with OpenGL I won't say more; for the more curious, here is the link:

    Read the article

  • Modern OpenGL tutorial: basic shaders, learn to use transparent objects in your scenes in OpenGL 3 and later

    Hello everyone, the 2D/3D/Games section is pleased to present the next instalment in its series of tutorials on modern OpenGL (versions from OpenGL 3.3 onwards). These tutorials will let you easily pick up the new OpenGL concepts and take full advantage of the latest features of your graphics card. This tenth tutorial teaches you how to implement transparency in your OpenGL applications. Happy reading.

    Read the article

  • Modern OpenGL tutorial: 2D text, display text in 2D in your OpenGL 3 and later application

    Hello everyone, the 2D/3D/Games section is pleased to present the next instalment in its series of tutorials on modern OpenGL (versions from OpenGL 3.3 onwards). These tutorials will let you easily pick up the new OpenGL concepts and take full advantage of the latest features of your graphics card. This eleventh tutorial teaches you how to display 2D text in your OpenGL applications. Happy reading.

    Read the article

  • Modern OpenGL tutorial: billboards, embed 2D elements in your 3D world in OpenGL 3 and later

    Hello everyone, the 2D/3D/Games section is pleased to present the next instalment in its series of tutorials on modern OpenGL (versions from OpenGL 3.3 onwards). These tutorials will let you easily pick up the new OpenGL concepts and take full advantage of the latest features of your graphics card. This eighteenth tutorial explains how to display billboards (2D elements embedded in a 3D world) in OpenGL. Happy reading.

    Read the article

  • Why do my 512x512 bitmaps look jaggy on Android OpenGL?

    - by Milo Mordaunt
    This is sort of driving me nuts. I've googled and googled and tried everything I can think of, but my sprites still look super blurry and super jaggy. Example here: https://docs.google.com/open?id=0Bx9Gbwnv9Hd2TmpiZkFycUNmRTA

    If you click through to the actual full-size image you should see what I mean; it's like it's taking an average of every 5x5 pixels or something. The background looks really blurry and blocky, but the ball is the worst. The clouds look all right for some reason, probably because they're mostly transparent. I know the PNGs aren't top notch themselves, but hey, I'm no artist! I would imagine it's a problem with either:

    a. How the PNGs are made. Example sprite (512x512): https://docs.google.com/open?id=0Bx9Gbwnv9Hd2a2RRQlJiQTFJUEE

    b. How my matrices work. These are the relevant parts of the renderer:

        public void onDrawFrame(GL10 unused) {
            if (world != null) {
                dt = System.currentTimeMillis() - endTime;
                world.update((float) dt);
                // Redraw background color
                GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
                Matrix.setIdentityM(mvMatrix, 0);
                Matrix.translateM(mvMatrix, 0, 0f, 0f, 0f);
                world.draw(mvMatrix, mProjMatrix);
                endTime = System.currentTimeMillis();
            } else {
                Log.d(TAG, "There is no world....");
            }
        }

        public void onSurfaceChanged(GL10 unused, int width, int height) {
            GLES20.glViewport(0, 0, width, height);
            Matrix.orthoM(mProjMatrix, 0, 0, width / 2, 0, height / 2, -1.f, 1.f);
        }

    And this is what each Quad does when draw is called:

        public void draw(float[] mvMatrix, float[] pMatrix) {
            Matrix.setIdentityM(mMatrix, 0);
            Matrix.setIdentityM(mvMatrix, 0);
            Matrix.translateM(mMatrix, 0, xPos, yPos, 0.f);
            Matrix.multiplyMM(mvMatrix, 0, mvMatrix, 0, mMatrix, 0);
            Matrix.scaleM(mvMatrix, 0, scale, scale, 0f);
            Matrix.rotateM(mvMatrix, 0, angle, 0f, 0f, -1f);
            GLES20.glUseProgram(mProgram);
            posAttr = GLES20.glGetAttribLocation(mProgram, "vPosition");
            texAttr = GLES20.glGetAttribLocation(mProgram, "aTexCo");
            uSampler = GLES20.glGetUniformLocation(mProgram, "uSampler");
            int alphaHandle = GLES20.glGetUniformLocation(mProgram, "alpha");
            GLES20.glVertexAttribPointer(posAttr, COORDS_PER_VERTEX, GLES20.GL_FLOAT, false, 0, vertexBuffer);
            GLES20.glVertexAttribPointer(texAttr, 2, GLES20.GL_FLOAT, false, 0, texCoBuffer);
            GLES20.glEnableVertexAttribArray(posAttr);
            GLES20.glEnableVertexAttribArray(texAttr);
            GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
            GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, texture);
            GLES20.glUniform1i(uSampler, 0);
            GLES20.glUniform1f(alphaHandle, alpha);
            mMVMatrixHandle = GLES20.glGetUniformLocation(mProgram, "uMVMatrix");
            mPMatrixHandle = GLES20.glGetUniformLocation(mProgram, "uPMatrix");
            GLES20.glUniformMatrix4fv(mMVMatrixHandle, 1, false, mvMatrix, 0);
            GLES20.glUniformMatrix4fv(mPMatrixHandle, 1, false, pMatrix, 0);
            GLES20.glDrawElements(GLES20.GL_TRIANGLE_STRIP, 4, GLES20.GL_UNSIGNED_SHORT, indicesBuffer);
            GLES20.glDisableVertexAttribArray(posAttr);
            GLES20.glDisableVertexAttribArray(texAttr);
            GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, 0);
        }

    c. How my texture loading/blending/shader setup works. Here is the renderer setup:

        public void onSurfaceCreated(GL10 unused, EGLConfig config) {
            // Set the background frame color
            GLES20.glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
            GLES20.glDisable(GLES20.GL_DEPTH_TEST);
            GLES20.glDepthMask(false);
            GLES20.glBlendFunc(GLES20.GL_ONE, GLES20.GL_ONE_MINUS_SRC_ALPHA);
            GLES20.glEnable(GLES20.GL_BLEND);
            GLES20.glEnable(GLES20.GL_DITHER);
        }

    Here is the vertex shader:

        attribute vec4 vPosition;
        attribute vec2 aTexCo;
        varying vec2 vTexCo;
        uniform mat4 uMVMatrix;
        uniform mat4 uPMatrix;
        void main() {
            gl_Position = uPMatrix * uMVMatrix * vPosition;
            vTexCo = aTexCo;
        }

    And here's the fragment shader:

        precision mediump float;
        uniform sampler2D uSampler;
        uniform vec4 vColor;
        varying vec2 vTexCo;
        varying float alpha;
        void main() {
            vec4 color = texture2D(uSampler, vec2(vTexCo));
            gl_FragColor = color;
            if (gl_FragColor.a == 0.0) {
                discard;
            }
        }

    This is how textures are loaded:

        private int loadTexture(int rescource) {
            int[] texture = new int[1];
            BitmapFactory.Options opts = new BitmapFactory.Options();
            opts.inScaled = false;
            Bitmap temp = BitmapFactory.decodeResource(context.getResources(), rescource, opts);
            GLES20.glGenTextures(1, texture, 0);
            GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
            GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, texture[0]);
            GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
            GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
            GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, temp, 0);
            GLES20.glGenerateMipmap(GLES20.GL_TEXTURE_2D);
            GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, 0);
            temp.recycle();
            return texture[0];
        }

    I'm sure I'm doing about 20,000 things wrong, so I'm really sorry if the problem is blindingly obvious... The test device is a Galaxy Note running a Jelly Bean custom ROM, if that matters at all. So the screen resolution is 1280x800, which means... The background is 1024x1024, so yeah, it might be a little blurry, but it shouldn't be made of Lego. Thank you so much; any answer at all would be appreciated.
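
    Two things stand out in the listing above (guesses, not a confirmed diagnosis): Matrix.orthoM maps only width/2 x height/2 world units onto the full screen, so everything is drawn magnified by a factor of two, which blurs; and loadTexture generates mipmaps but leaves GL_TEXTURE_MIN_FILTER at GL_LINEAR, which samples only the base level, so minified sprites alias and "dance". In GL terms the filter fix would look like this:

        // After uploading the image and generating the mip chain, select a
        // mipmapped minification filter so the generated levels are used.
        glGenerateMipmap(GL_TEXTURE_2D);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); // mag filter has no mip modes

    Note that core OpenGL ES 2.0 only guarantees mipmapping for power-of-two textures; 512x512 and 1024x1024 are fine.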

    Read the article

  • Cannot get 3D OpenGL support in VMware guests, how can I fix this?

    - by jjapol
    I have been working at this problem for two days now. I cannot for the life of me enable 3D support in VMware 9 guests. My specifications are: Hardware: Dell Latitude E5520 laptop. Processor: Intel i7-2620M CPU @ 2.70GHz x 4. Memory: 8GB. Video: Intel Sandybridge Mobile x86/MMX/SSE2. OS: Ubuntu 12.04.1 LTS, 32 bit. VMware Workstation: 9.0.1 build-894247. Glxgears runs fine; the frame rate is ~60fps. VMware guest: Windows 7. Starting the Windows 7 guest in VMware throws the following errors: "No 3D support is available from the host" and "Hardware graphics acceleration is not available". I've read through this VMware forum thread, but the hardware in that post is different (nVidia). I've followed the instructions at this Ask Ubuntu post as closely as possible, as the question is nearly the same as mine although my hardware is different. Answer 1, regarding setting mks.gl.allowBlacklistedDrivers = TRUE; in my vmx configuration file, causes the VM to crash when it starts. The second answer I followed as closely as possible: I uninstalled VMware, did sudo apt-get install build-essential linux-headers-$(uname -r) at a terminal, added the PPA https://launchpad.net/~glasen/+archive/intel-driver, then at a terminal did sudo apt-get update && sudo apt-get upgrade -y. I reinstalled VMware and have the same results: no 3D in guests. I'm getting the feeling that something is awry with the Sandy Bridge driver, but I can't seem to come up with any solutions. Has anyone out there run across this problem too? By the way, the operation of the likes of SolidWorks and AutoCAD within a Windows 7 guest does appear to be improved in VMware 9 vs VMware 8, in spite of the fact that 3D support is lacking in the Windows 7 guest. I'd also add that my glxinfo file was nearly identical to the glxinfo file posted at askubuntu.com/questions/181829/…. I had a total of seven minor differences per a comparison using Meld.

    Read the article

  • What is the practical use of IBOs / degenerate vertices in OpenGL?

    - by 0xFAIL
    Vertices in 3D models CAN get cut in the process of optimizing the geometry (degenerate vertices) by 3D graphics software (Blender, ...) when exporting, because they aren't needed when a vertex is reused for multiple triangles. (In the current case the 3D data is exported from Blender as .ply and read by a simple application that displays the model.) Every vertex has a few attributes like position, color, normal, tangent, ... But the data for each vertex that is cut through vertex sharing is lost and is missing in the vertex shader. Modern shader techniques like bump or normal mapping require normals/tangents per vertex, and these get cut too. Does this mean IBOs must not be used with complex shader techniques? Or is there a way to use IBOs and retain the per-vertex data that was originally lost?
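
    The usual answer is that indexing does not force attribute loss: a "vertex" is the whole tuple of attributes, and a position that needs two different normals (a hard edge, say) is simply emitted as two vertices with two index entries. A sketch of that dedup-by-full-tuple step at load time, assuming a hypothetical Vertex struct holding the attributes mentioned above:

        #include <cstring>
        #include <map>
        #include <vector>

        struct Vertex {                 // one complete attribute tuple
            float pos[3], normal[3], uv[2];
        };

        struct VertexLess {             // bitwise ordering, good enough for dedup
            bool operator()(const Vertex& a, const Vertex& b) const {
                return std::memcmp(&a, &b, sizeof(Vertex)) < 0;
            }
        };

        // Build an index buffer: identical tuples share one slot, but the same
        // position with a different normal becomes a new vertex, so no
        // attribute data is lost by indexing.
        void buildIndexed(const std::vector<Vertex>& raw,
                          std::vector<Vertex>& vertices,
                          std::vector<unsigned short>& indices) {
            std::map<Vertex, unsigned short, VertexLess> seen;
            for (const Vertex& v : raw) {
                auto it = seen.find(v);
                if (it == seen.end()) {
                    it = seen.emplace(v, (unsigned short)vertices.size()).first;
                    vertices.push_back(v);
                }
                indices.push_back(it->second);
            }
        }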

    Read the article

  • How to use OpenGL blend modes/functions to brighten/darken a texture

    - by Jigar
    Tried this code, but the texture did not get any lighter:

        try {
            texture = TextureLoader.getTexture("png", Game.class.getResourceAsStream("/brick.png"), true, GL_NEAREST);
        } catch (IOException e) {
            e.printStackTrace();
        }
        GL11.glBindTexture(GL11.GL_TEXTURE_2D, texture.getTextureID());
        glEnable(GL_BLEND);
        glBlendFunc(GL_CONSTANT_ALPHA, GL_CONSTANT_ALPHA);
        GL14.glBlendColor(1.0f, 1.0f, 1.0f, 0.5f);
        glColor4f(1, 1, 1, 0.5f);
        GL11.glBegin(GL11.GL_QUADS);            // Start Drawing Quads
        // Front Face
        GL11.glNormal3f(0.0f, 0.0f, 1.0f);      // Normal Pointing Towards Viewer
        GL11.glTexCoord2f(0.0f, 0.0f); GL11.glVertex3f(-1.0f, -1.0f, 1.0f); // Point 1 (Front)
        GL11.glTexCoord2f(1.0f, 0.0f); GL11.glVertex3f(1.0f, -1.0f, 1.0f);  // Point 2 (Front)
        GL11.glTexCoord2f(1.0f, 1.0f); GL11.glVertex3f(1.0f, 1.0f, 1.0f);   // Point 3 (Front)
        GL11.glTexCoord2f(0.0f, 1.0f); GL11.glVertex3f(-1.0f, 1.0f, 1.0f);  // Point 4 (Front)
        glEnd();
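
    For reference, the two classic fixed-function approaches, sketched in GL calls under the assumption that the texture environment is the default GL_MODULATE (drawTexturedQuad() stands in for the quad submission above): darkening needs no blending at all, and brightening can be done by drawing the quad a second time additively.

        /* Darken: with the default GL_MODULATE texture environment the texel
           is multiplied by the current colour, so grey darkens the texture. */
        glDisable(GL_BLEND);
        glColor3f(0.5f, 0.5f, 0.5f);          /* 50% brightness */
        drawTexturedQuad();                   /* hypothetical helper */

        /* Brighten: draw once normally, then add a fraction of the texture. */
        glColor3f(1.0f, 1.0f, 1.0f);
        drawTexturedQuad();
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE);    /* additive */
        glColor4f(1.0f, 1.0f, 1.0f, 0.5f);    /* add 50% more of each texel */
        drawTexturedQuad();
        glDisable(GL_BLEND);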

    Read the article

  • Simplest way to render one image on top of another, with a third image used as a mask, in OpenGL?

    - by Adam Naylor
    The effect I'm looking for is to have a single large background image that is always visible (at full alpha), and then show a second image (what I call a light map or specular map) that is partially shown over the top, based on a third image (which is effectively a mask). The effect is similar to this effect, except that instead of simply darkening or lightening the background image using the third image, it needs to mask the second without affecting the first at all. The third image is the only one that moves, so hard-baking the third image's alpha into the second image isn't an option. If my explanation isn't clear I'll provide visual examples when I have more time. I'd prefer not to go down a shader route, as I haven't taught myself this area yet, so unless I have to I'd rather try to achieve this with simple alpha blending. (Edit: happy to use a shader approach.) Cheers. Additional: these third images are light sources being cast onto the first image, showing the specular information from the second image to simulate light "shining" off the objects in the first image. The solution I implement will need to allow two light sources to potentially overlap, so my current thought is that the alpha values of the two masks will need to be combined (added?) to produce a final image which masks the second image. Don't worry about things like coloured lights; for this technique the lights are all considered white.
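
    One non-shader way to express this (a sketch, assuming a framebuffer pixel format that actually has destination alpha bits, and with drawBackground, drawLightMask, and drawSpecularLayer as hypothetical helpers): write the moving mask into the destination alpha channel, then let that alpha gate the specular layer.

        /* Pass 1: draw the background normally. */
        glDisable(GL_BLEND);
        drawBackground();

        /* Pass 2: clear destination alpha to 0, then write the mask's alpha.
           Drawing several masks additively lets two lights overlap, which
           matches the "added alpha" idea in the question. */
        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_TRUE);
        glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
        glClear(GL_COLOR_BUFFER_BIT);          /* colour is masked off, only alpha clears */
        glEnable(GL_BLEND);
        glBlendFunc(GL_ONE, GL_ONE);           /* add overlapping mask alphas */
        drawLightMask();                       /* the moving "third image" */
        glDisable(GL_BLEND);
        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);

        /* Pass 3: add the specular layer wherever destination alpha allows it;
           the background is untouched where the mask is empty. */
        glEnable(GL_BLEND);
        glBlendFunc(GL_DST_ALPHA, GL_ONE);
        drawSpecularLayer();
        glDisable(GL_BLEND);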

    Read the article

  • How do I position a 2D camera in OpenGL?

    - by Elfayer
    I can't understand how the camera is working. It's a 2D game, so I'm displaying a game map from (0, 0, 0) to (mapSizeX, 0, mapSizeY). I'm initializing the camera as follows:

        Camera::Camera(void)
            : position_(0.0f, 0.0f, 0.0f), rotation_(0.0f, 0.0f, -1.0f) {}

        void Camera::initialize(void)
        {
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            glTranslatef(position_.x, position_.y, position_.z);
            gluPerspective(70.0f, 800.0f/600.0f, 1.0f, 10000.0f);
            gluLookAt(0.0f, 6000.0f, 0.0f, 0.0f, 0.0f, -1.0f, 0.0f, 1.0f, 0.0f);
            glMatrixMode(GL_MODELVIEW);
            glLoadIdentity();
            glEnable(GL_DEPTH_TEST);
            glDepthFunc(GL_LEQUAL);
        }

    So the camera is looking down. I currently see the upper-right border of the map in the center of my window, and the map extends to the lower-left border of my window. I would like to center the map. The logical thing to do should be to move the camera to eyeX = mapSizeX / 2 and the same for z. My map is 10 x 10 cases with CASE = 400, so I should have:

        gluLookAt((10 / 2) * CASE /* = 2000 */, 6000.0f, (10 / 2) * CASE /* = 2000 */,
                  0.0f, 0.0f, -1.0f,
                  0.0f, 1.0f, 0.0f);

    But that doesn't move the camera; it seems to rotate it instead. Am I doing something wrong?

    EDIT: I tried this:

        gluLookAt(2000.0f, 6000.0f, 0.0f, 2000.0f, 0.0f, -1.0f, 0.0f, 1.0f, 0.0f);

    which correctly centers the map horizontally in the window, but I can't center it correctly vertically; movement always comes back along the Z axis. When I go up it goes down, and the same for right and left. And I don't see the map at all any more when I do:

        gluLookAt(2000.0f, 6000.0f, 2000.0f, 2000.0f, 0.0f, 2000.0f, 0.0f, 1.0f, 0.0f);
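
    A likely explanation for that last call (a hedge, not a verified fix): when the view direction is straight down the Y axis, an up vector of (0, 1, 0) is parallel to it and gluLookAt degenerates; the up vector must not be parallel to the view direction. A sketch:

        // Looking straight down from above the map centre: the view direction
        // is (0, -1, 0), so "up" must lie in the XZ plane, e.g. -Z.
        gluLookAt(2000.0f, 6000.0f, 2000.0f,   // eye above the map centre
                  2000.0f,    0.0f, 2000.0f,   // look at the map centre
                     0.0f,    0.0f,   -1.0f);  // up: any XZ direction, not +Y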

    Read the article

  • Why does OpenGL seem to ignore my glBindTexture call?

    - by Killrazor
    I'm having problems making a simple sprite renderer. I load two different textures. Then I bind these textures and draw two squares, one with each texture. But only the texture of the first rendered object is drawn on both squares. It's as if I were only using one texture, or as if glBindTexture didn't work properly. I know that GL is a state machine, but I think you only need to change the active texture with glBindTexture. I load a texture with this method:

        bool CTexture::generate(utils::CImageBuff* img)
        {
            assert(img);
            m_image = img;
            CHECKGL(glGenTextures(1, &m_textureID));
            CHECKGL(glBindTexture(GL_TEXTURE_2D, m_textureID));
            CHECKGL(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR));
            CHECKGL(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR));
            //CHECKGL(glTexImage2D(GL_TEXTURE_2D, 0, img->getBpp(), img->getWitdh(), img->getHeight(), 0, img->getFormat(), GL_UNSIGNED_BYTE, img->getImgData()));
            CHECKGL(glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, img->getWitdh(), img->getHeight(), 0, GL_RGBA, GL_UNSIGNED_BYTE, img->getImgData()));
            return true;
        }

    And I bind textures with this function:

        void CTexture::bind()
        {
            CHECKGL(glBindTexture(GL_TEXTURE_2D, m_textureID));
        }

    Also, I draw sprites with this method:

        void CSprite2D::render()
        {
            CHECKGL(glLoadIdentity());
            CHECKGL(glEnable(GL_TEXTURE_2D));
            CHECKGL(glEnable(GL_BLEND));
            CHECKGL(glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA));
            m_texture->bind();
            CHECKGL(glPushMatrix());
            CHECKGL(glBegin(GL_QUADS));
            CHECKGL(glTexCoord2f(m_textureAreaStart.s, m_textureAreaStart.t));  // 0,0 by default
            CHECKGL(glVertex3i(m_position.x, m_position.y, 0));
            CHECKGL(glTexCoord2f(m_textureAreaEnd.s, m_textureAreaStart.t));    // 1,0 by default
            CHECKGL(glVertex3i(m_position.x + m_dimensions.x, m_position.y, 0));
            CHECKGL(glTexCoord2f(m_textureAreaEnd.s, m_textureAreaEnd.t));      // 1,1 by default
            CHECKGL(glVertex3i(m_position.x + m_dimensions.x, m_position.y + m_dimensions.y, 0));
            CHECKGL(glTexCoord2f(m_textureAreaStart.s, m_textureAreaEnd.t));    // 0,1 by default
            CHECKGL(glVertex3i(m_position.x, m_position.y + m_dimensions.y, 0));
            CHECKGL(glPopMatrix());
            CHECKGL(glDisable(GL_BLEND));
        }

    Edit: here is also the error-checking code:

        int CheckGLError(const char* GLcall, const char* file, int line)
        {
            GLenum errCode;
            // avoids infinite loop
            int errorCount = 0;
            while ((errCode = glGetError()) != GL_NO_ERROR && ++errorCount < 3000)
            {
                utils::globalLogPtr log = utils::CGLogFactory::getLogInstance();
                const GLubyte* errString = gluErrorString(errCode);
                std::stringstream ss;
                ss << "In " << __FILE__ << "(" << __LINE__ << ") GL error with code: " << errCode
                   << " at file " << file << ", line " << line
                   << " with message: " << errString << "\n";
                log->addMessage(ss.str(), ZEL_APPENDER_GL, utils::LOGLEVEL_ERROR);
            }
            return 0;
        }
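
    Worth noting (a reading of the listing, not a confirmed diagnosis): render() never calls glEnd(), and wrapping glBegin/glTexCoord/glVertex in a glGetError-checking macro is itself invalid, because glGetError between glBegin and glEnd is an error. A sketch of the quad submission with those two issues removed:

        // Inside CSprite2D::render(): no error checks between glBegin/glEnd,
        // and the primitive is closed before glPopMatrix().
        glBegin(GL_QUADS);
        glTexCoord2f(m_textureAreaStart.s, m_textureAreaStart.t);
        glVertex3i(m_position.x,                  m_position.y,                  0);
        glTexCoord2f(m_textureAreaEnd.s,   m_textureAreaStart.t);
        glVertex3i(m_position.x + m_dimensions.x, m_position.y,                  0);
        glTexCoord2f(m_textureAreaEnd.s,   m_textureAreaEnd.t);
        glVertex3i(m_position.x + m_dimensions.x, m_position.y + m_dimensions.y, 0);
        glTexCoord2f(m_textureAreaStart.s, m_textureAreaEnd.t);
        glVertex3i(m_position.x,                  m_position.y + m_dimensions.y, 0);
        glEnd();                    // without this the quad is never finished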

    Read the article

  • What techniques can I use to render very large numbers of objects more efficiently in OpenGL?

    - by Luke
    You can think of my application as drawing a very large ball-and-stick diagram (or graph). At times this graph can get very large, where the number of elements even outnumbers the pixels on the screen. Currently I am simply passing all of my textures (as GL_POINTS) and lines to the graphics card using VBOs. When the number of elements outnumbers the number of pixels, is this the most efficient way to do this? Or should I do some calculations on the CPU side before handing everything over to the GPU? If it matters, I do use GL_DEPTH_TEST and GL_ALPHA_TEST. I do some alpha blending, but probably not enough to make a huge performance difference. My scene can be static at times, but the user has control over a typical arc-ball camera and can pan, rotate, or zoom. It is during these operations that the performance degradation is noticeable.

    Read the article

  • Understanding how to go from a scene to what's actually rendered to screen in OpenGL?

    - by Pris
    I want something that explains, step by step, how, after setting up a simple scene, I can go from that 'world' space to what's finally rendered on my screen (i.e., actually implement something). I need the resource to clearly show how to derive and set up both orthographic and perspective projection matrices; basically I want to thoroughly understand what's going on behind the scenes and not plug in random things without knowing what they do. I've found lots of half-explanations, presentation slides, walls of text, etc. that aren't really doing much for me. I have a basic understanding of linear algebra and matrix transforms, and a rough idea of what's going on when you go from model space to the screen, but not enough to actually implement it in code.
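
    As a concrete starting point, here is a sketch of the standard column-major orthographic projection matrix (the one glOrtho builds), mapping the box [l,r] x [b,t] x [n,f] onto the [-1,1] clip cube:

        #include <cstring>

        // Column-major 4x4, as OpenGL expects; maps x in [l,r] to [-1,1], etc.
        void ortho(float m[16], float l, float r, float b, float t, float n, float f)
        {
            std::memset(m, 0, 16 * sizeof(float));
            m[0]  =  2.0f / (r - l);
            m[5]  =  2.0f / (t - b);
            m[10] = -2.0f / (f - n);        // sign flip: OpenGL looks down -z
            m[12] = -(r + l) / (r - l);     // translation terms live in the
            m[13] = -(t + b) / (t - b);     // last column of a column-major matrix
            m[14] = -(f + n) / (f - n);
            m[15] =  1.0f;
        }

    The perspective matrix is derived the same way, with the extra step that x and y are divided by -z (the w divide), which is where the 4th row (0, 0, -1, 0) comes from.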

    Read the article

  • How can I get the camera to follow a moving object from behind in C++ and OpenGL [closed]

    - by user1324894
    I am trying to get the camera to follow an object that moves around my environment, using the gluLookAt function. This is my code for the object moving in the direction that it faces:

        Xtri += -Vtri * cos((90 + heading) * (PI / 180.0f));
        Ztri += Vtri * sin((90 + heading) * (PI / 180.0f));

    I then render the object:

        glPushMatrix();
        glTranslatef(Xtri, 0, Ztri);
        glRotatef(heading, 0, 1, 0);
        drawTriangle();
        glPopMatrix();

    heading is just a spin variable, so that if I press left or right the object spins in that direction. When you press up on the arrows it moves forward, and if you press down it moves backwards, in the direction that it is facing. To try to get the camera to follow, I am using the gluLookAt function like this:

        gluLookAt(Xtri, 0, (Ztri + 20),
                  Xtri, 0, Ztri,
                  0, 1, 0);

    so that it follows the car from a distance and should follow it around. However, the object doesn't even move at all now; all it can do is rotate, but not move forwards or backwards, and when it spins the camera doesn't follow the spin; it just watches it turn, fixed in the same position. Where am I going wrong?

    UPDATE: I have updated the gluLookAt function so now it is:

        gluLookAt((Xtri + Vtri), 0, (Ztri + 20),
                  (Xtri + Vtri), 0, Ztri,
                  0, 1, 0);

    This seems to move the object around. I have a stationary terrain, so I can see that the object is now moving, and in the direction that it is facing. However, I want the camera to follow the object when it spins as well, so that it is always viewing the object from behind.
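
    A common chase-camera formulation (a sketch reusing the question's Xtri/Ztri/heading variables, assuming <cmath> and GLU are available): place the eye a fixed distance behind the object along its own facing direction, so the camera offset rotates with the heading instead of being the constant (0, 0, +20):

        const float PI = 3.14159265f;
        float rad  = (90.0f + heading) * (PI / 180.0f);  // same angle the movement code uses
        float dirX = -cosf(rad);                         // object's unit forward vector
        float dirZ =  sinf(rad);
        float dist = 20.0f, height = 5.0f;               // chase distance/height, tune to taste

        gluLookAt(Xtri - dirX * dist, height, Ztri - dirZ * dist,  // eye: behind the object
                  Xtri,               0.0f,   Ztri,                // look at the object
                  0.0f,               1.0f,   0.0f);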

    Read the article

  • OpenGL behaving strangely

    - by Mk12
    OpenGL is acting very strangely for some reason. In my subclass of NSOpenGLView, I have the following code in -prepareOpenGL:

        - (void)prepareOpenGL
        {
            GLfloat lightAmbient[] = { 0.5f, 0.5f, 0.5f, 1.0f };
            GLfloat lightDiffuse[] = { 1.0f, 1.0f, 1.0f, 1.0f };
            GLfloat lightPosition[] = { 0.0f, 0.0f, 2.0f };
            quality = 0;
            zCoord = -6;
            [self loadTextures];
            glEnable(GL_LIGHTING);
            glEnable(GL_TEXTURE_2D);
            glShadeModel(GL_SMOOTH);
            glClearColor(0.2f, 0.2f, 0.2f, 0.0f);
            glClearDepth(1.0f);
            glEnable(GL_DEPTH_TEST);
            glDepthFunc(GL_LEQUAL);
            glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
            glLightfv(GL_LIGHT1, GL_AMBIENT, lightAmbient);
            glLightfv(GL_LIGHT1, GL_DIFFUSE, lightDiffuse);
            glLightfv(GL_LIGHT1, GL_POSITION, lightPosition);
            glEnable(GL_LIGHT1);
            gameState = kGameStateRunning;
            int i = 0; // HERE ********
            [NSTimer scheduledTimerWithTimeInterval:0.03f target:self selector:@selector(processKeys) userInfo:nil repeats:YES];
            // Synchronize buffer swaps with vertical refresh rate
            GLint swapInt = 1;
            [[self openGLContext] setValues:&swapInt forParameter:NSOpenGLCPSwapInterval];
            // Setup and start displayLink
            [self setupDisplayLink];
        }

    I wanted to assign the timer that processes key input to an ivar so that I could invalidate it when the game is paused (and reinstantiate it on resume). However, when I did that (as opposed to leaving it at [NSTimer scheduledTimer...]), OpenGL didn't display the cube I draw. When I took it away, it was fine. So I tried just adding a harmless statement, int i = 0; (marked // HERE ********), and that makes the lighting not work! When I take it away, everything is fine, but when I put it back, everything is dark. Can someone come up with a rational explanation for this? Thanks.

    Read the article

  • Creating a spotlight in an OpenGL scene

    - by Victor Oliveira
    I'm studying OpenGL and trying to create a spot light in my application. The code that I'm using for my vertex shader is below:

        #:vertex-shader #{
            #version 150 core
            in vec3 in_pos;
            in vec2 in_tc;
            out vec2 tc;

            glLightf(GL_LIGHT0, GL_SPOT_CUTOFF, 20.0f);
            GLfloat spot_direction[] = { -1.0, -1.0, 0.0 };
            glLightfv(GL_LIGHT0, GL_SPOT_DIRECTION, spot_direction);
            glEnable(GL_LIGHT0);

            void main() {
                vec4 pos = vec4(vec3(1.0) * in_pos - vec3(1.0), 1.0);
                pos.z = 0.0;
                gl_Position = pos;
                tc = in_tc;
            }
        }

    The thing is, every time I try to run the code I get an error that says:

        Type: other, Source: api, ID: 131169, Severity: low
        Message: Framebuffer detailed info: The driver allocated storage for renderbuffer 1.
        len = 157, written = 0
        failed to compile vertex shader of deferred: directional
        info log for shader deferred: directional
        vertex info log for shader deferred: directional:
        ERROR: Unbound variable: when

    Specifications: Renderer: GeForce GTX 580/PCIe/SSE2. Version: 3.3.0 NVIDIA 319.17. GLSL: 3.30 NVIDIA via Cg compiler. Status: Using GLEW 1.9.0. 1024 x 768. OS: Linux debian. I guess creating this spotlight should be pretty simple, but since I'm really new to OpenGL I don't have a clue how to do it so far, even after reading sources like http://www.glprogramming.com/red/chapter05.html#name3. I've also read that spotlights can get really hard to understand, but I can't avoid this step right now since I'm following my lecture schedule. Could anybody help me?
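
    The immediate problem, as far as one can tell from the listing: glLightf, glLightfv, and glEnable are C-side OpenGL calls, not GLSL, so they cannot appear inside a shader; the compiler rejects them. In the fixed-function pipeline the red-book-style spotlight setup lives in application code, roughly like this sketch:

        // Application (C/C++) code, not shader code: fixed-function spotlight.
        GLfloat spot_position[]  = { 0.0f, 1.0f, 0.0f, 1.0f };  // w=1: positional light
        GLfloat spot_direction[] = { -1.0f, -1.0f, 0.0f };

        glEnable(GL_LIGHTING);
        glEnable(GL_LIGHT0);
        glLightfv(GL_LIGHT0, GL_POSITION, spot_position);
        glLightf (GL_LIGHT0, GL_SPOT_CUTOFF, 20.0f);            // half-angle of the cone
        glLightfv(GL_LIGHT0, GL_SPOT_DIRECTION, spot_direction);
        glLightf (GL_LIGHT0, GL_SPOT_EXPONENT, 8.0f);           // falloff inside the cone

    In a core-profile context (GLSL 150, as in the shader above) there is no GL_LIGHT0 at all; the cutoff and direction would instead be passed in as uniforms and the cone test computed in the fragment shader.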

    Read the article

  • Porting WebGL game to iPhone's native OpenGL?

    - by ArtPulse
    We are developing a web game that uses WebGL for its two biggest parts. Working with HTML/CSS was too slow and too limited, so it's off the table. The thing is, iOS does not support WebGL publicly just yet, only in iAd. My guess is that Apple will eventually support it once the security issues it and Microsoft claim it has are fixed and it looks stable enough. The problem is, if Apple does not do this by the release of the next major iOS version, then we will have on our hands a mobile WebGL game that does not run: six months of development and testing to waste. So, questions: If that were the case, how viable (regarding amount of time) is porting the WebGL part of the game to native iPhone OpenGL? I'm afraid that the porting will take longer than the development of the game itself. I saw posts on Stack Overflow (like this one) that suggested, on Android, adding the OpenGL interface manually to a WebKit element. It'd be slower than native, but either way: is this something that could be accepted in the App Store? Apple is very restrictive with this kind of thing... Thank you all for your time!

    Read the article

  • OpenGL Tearing Problem

    - by kaykun
    Hi, I'm using Win32 and OpenGL, and I have a window set up with the projection at glOrtho of the window's coordinates. I have double buffering enabled; I tested it with glGet as well. My program always seems to tear any primitives that I try to draw if they are being constantly translated. Here is my OpenGL initialization function:

        glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
        glViewport(0, 0, 640, 480);
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrtho(0, 640, 0, 480, 0, 100);
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        glDrawBuffer(GL_BACK);
        glLoadIdentity();

    And this is my rendering function, where gMouseX and gMouseY are the coordinates of the mouse:

        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glTranslatef(gMouseX, gMouseY, 0.0f);
        glColor3f(0.5f, 0.5f, 0.5f);
        glBegin(GL_TRIANGLES);
        glVertex2f(0.0f, 128.0f);
        glVertex2f(128.0f, 0.0f);
        glVertex2f(0.0f, 0.0f);
        glEnd();
        SwapBuffers(hDC);

    The same tearing problem occurs regardless of how often the rendering function runs. Is there something I'm doing wrong or missing here? Thanks for any help.
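
    Tearing with double buffering usually means buffer swaps are not synchronized to the monitor's vertical refresh. On Win32 that is controlled by the WGL_EXT_swap_control extension; a sketch of enabling it, assuming a GL context is already current:

        #include <windows.h>
        #include <GL/gl.h>

        typedef BOOL (WINAPI *PFNWGLSWAPINTERVALEXTPROC)(int interval);

        // Ask the driver to wait for vertical refresh on each SwapBuffers call.
        void enableVSync()
        {
            PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT =
                (PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");
            if (wglSwapIntervalEXT)
                wglSwapIntervalEXT(1);   // 1 = sync to every refresh; 0 = off
        }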

    Read the article

  • Using GCC (MinGW) to compile OpenGL on Windows

    - by Casey
    I've searched on Google and haven't been able to come up with a solution. I would like to compile some OpenGL programs using GCC. In the GL folder of my GCC installation I have the following headers: gl.h, glext.h, glu.h. Then in my System32 folder I have the following DLLs: opengl32.dll, glu32.dll, glut32.dll. If I wanted to write a simple OpenGL "Hello World" and link and compile it with GCC, what is the correct process? I'm attempting to use this code:

        #include <GL/gl.h>
        #include <GL/glut.h>

        void display()
        {
            glClear(GL_COLOR_BUFFER_BIT);
            glFlush();
        }

        int main(int argc, char **argv)
        {
            glutInit(&argc, argv);
            glutInitWindowSize(512, 512);
            glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
            glutCreateWindow("The glut hello world program");
            glutDisplayFunc(display);
            glClearColor(0.0, 0.0, 0.0, 1.0);
            glutMainLoop(); // Infinite event loop
            return 0;
        }

    Thank you in advance for the help.
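
    For reference, a typical MinGW invocation for that program (assuming the GLUT import library is installed where MinGW can find it; with GCC the library order matters, GLUT before GLU before opengl32):

        gcc hello.c -o hello.exe -lglut32 -lglu32 -lopengl32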

    Read the article

  • OpenGL Colour Interpolation

    - by Will-of-fortune
    I'm currently working on a little project in C++ and OpenGL, and am trying to implement a colour selection tool similar to the one in Photoshop, as below. However, I am having trouble with the interpolation of the large square. Working on my desktop computer with an 8800 GTS, the result was similar but the blending wasn't as smooth. This is the code I am using:

        GLfloat swatch[] = { 0,0,0,  1,1,1,  mR,mG,mB,  0,0,0 };
        GLint swatchVert[] = { 400,700,  400,500,  600,500,  600,700 };
        glVertexPointer(2, GL_INT, 0, swatchVert);
        glColorPointer(3, GL_FLOAT, 0, swatch);
        glDrawArrays(GL_QUADS, 0, 4);

    Moving onto my laptop with Intel Graphics HD 3000, the result was even worse, with no change in code. I thought it was OpenGL splitting the quad into two triangles, so I tried rendering using triangles and interpolating the colour in the middle of the square myself, but it still doesn't quite match the result I was hoping for. Any help would be appreciated. Thanks.
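
    A two-layer trick reproduces this kind of picker exactly with fixed-function Gouraud shading (a sketch reusing the swatch coordinates above; mR/mG/mB is the chosen hue, and the y = 500 edge is assumed to be the bottom of the square). Each layer's colour varies along only one axis, so the diagonal triangle split of the quad can no longer show:

        // Layer 1: white on the left edge to the chosen hue on the right edge.
        GLint quad[] = { 400,700,  400,500,  600,500,  600,700 };   // TL, BL, BR, TR
        GLfloat hue[] = { 1,1,1,  1,1,1,  mR,mG,mB,  mR,mG,mB };
        glVertexPointer(2, GL_INT, 0, quad);
        glColorPointer(3, GL_FLOAT, 0, hue);
        glDrawArrays(GL_QUADS, 0, 4);

        // Layer 2: black faded in from the top (alpha 0) to the bottom (alpha 1).
        GLfloat shade[] = { 0,0,0,0,  0,0,0,1,  0,0,0,1,  0,0,0,0 };
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        glColorPointer(4, GL_FLOAT, 0, shade);
        glDrawArrays(GL_QUADS, 0, 4);
        glDisable(GL_BLEND);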

    Read the article

  • Enumerating pixel formats for adaptors and modes with OpenGL

    - by Robinson
    I'm trying to code an OpenGL path for my 3D engine. The D3D path enumerates all device adaptors, all modes (by mode I mean bit depth, dimensions, windowed availability, and refresh rate) for each adaptor, and then all pixel formats available for the given mode and adaptor, alongside certain useful caps (shader version, filter types, etc.). So I have, broadly, the following protected functions in the class:

        // Enumerate all back/front buffer combinations.
        virtual void EnumerateBackFrontBufferCombinations(CComPtr<IDirect3D9>& d3d9);
        // Enumerate all depth/stencil formats.
        virtual void EnumerateDepthStencilFormats(CComPtr<IDirect3D9>& d3d9);
        // Enumerate all multi-sample formats.
        virtual void EnumerateMultiSampleTypes(CComPtr<IDirect3D9>& d3d9);
        // Enumerate all device formats, i.e. dynamic, static, render target, etc.
        virtual void EnumerateMapFormats(CComPtr<IDirect3D9>& d3d9);
        // Enumerate all capabilities.
        virtual void EnumerateCapabilities(CComPtr<IDirect3D9>& d3d9);

    The adaptors are enumerated with EnumDisplayDevices, and the modes (resolutions and refresh rates) are enumerated with EnumDisplaySettings, so this can be done for either GL or D3D. The other functions I'm not so sure about with OpenGL. What are the equivalents to IDirect3D9's CheckDeviceType, CheckDeviceFormat, CheckDeviceMultiSampleType, and CheckDepthStencilMatch? I know I can use DescribePixelFormat, given a DC, but you kind of need to have created the window before you can use a DC with it, and you can't create the window correctly until you know what formats you're going to use. Any tips you can give me? Thanks.
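
    The usual way around that chicken-and-egg problem (a sketch, not engine-ready code): create a throwaway invisible window just to get a DC, enumerate the formats, then destroy it and create the real window with the chosen format.

        #include <windows.h>
        #include <cstdio>

        // Enumerate pixel formats via a dummy window's DC; the real window is
        // created afterwards, once a format has been chosen.
        void enumerateFormats()
        {
            HWND wnd = CreateWindowA("STATIC", "", WS_POPUP, 0, 0, 1, 1,
                                     NULL, NULL, GetModuleHandle(NULL), NULL);
            HDC dc = GetDC(wnd);
            PIXELFORMATDESCRIPTOR pfd;
            int count = DescribePixelFormat(dc, 1, sizeof(pfd), NULL); // total format count
            for (int i = 1; i <= count; ++i) {
                DescribePixelFormat(dc, i, sizeof(pfd), &pfd);
                std::printf("format %d: color %d depth %d stencil %d\n",
                            i, pfd.cColorBits, pfd.cDepthBits, pfd.cStencilBits);
            }
            ReleaseDC(wnd, dc);
            DestroyWindow(wnd);
        }

    Multisample formats are not visible to DescribePixelFormat; they require wglGetPixelFormatAttribivARB from WGL_ARB_pixel_format plus WGL_ARB_multisample, which themselves need a temporary context, so the same dummy-window trick applies there too.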

    Read the article

  • How do you use glFrustum in OpenGL ES1 on iPhone

    - by Paul
    So I am using Xcode 3.2.1 and am trying to make an iPhone OpenGL ES1 project. The default template for an OpenGL project is OK, but I have been trying to split the code up so not everything is done per frame in the drawView() call. I have a separate setupRC method that sets the lighting, turns on depth testing, turns on culling, and sets the clear color. This is called in the init of the EAGLView, and it works just fine. I have moved the glViewport() and glFrustum() calls to the end of the resizeFromLayer() method in the ES1Renderer.m file. This gets hit when the app starts and when the app gets resized, as it should. Now the problem is that the frustum's far plane seems to be messed up, in that all my objects get cut/clipped off. I tried adjusting the camera position and angle, and still all objects are cut/clipped. I increased the far plane from 1000.0f to 30000.0f and still get the same result. What is crazy is that if I make both the glViewport() and glFrustum() calls in drawView() every frame, everything looks right: nothing is clipped and it looks the way I want it. From everything I've been reading, the frustum and viewport calls only need to be made when the window gets created and resized, but if I don't call them every frame in my project it doesn't work. Any ideas? Thanks in advance.
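
    One thing worth checking (a guess consistent with the symptoms): glFrustum multiplies onto the current matrix, so calling it on resize without first resetting the projection matrix, or while GL_MODELVIEW is still current, produces exactly this kind of clipping; calling it every frame right after a per-frame glLoadIdentity would hide the bug. A sketch of a resize handler for ES 1.x:

        // Called from resizeFromLayer(); safe to call once per resize.
        void setupProjection(int width, int height)
        {
            glViewport(0, 0, width, height);
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();                        // without this, glFrustumf accumulates
            glFrustumf(-1.0f, 1.0f,                  // ES 1.x uses glFrustumf, not glFrustum
                       -(float)height / width, (float)height / width,
                       1.0f, 1000.0f);
            glMatrixMode(GL_MODELVIEW);              // leave modelview current for drawing
            glLoadIdentity();
        }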

    Read the article

  • Setting the OpenCV configuration in an OpenGL project produces several errors

    - by GolSa
    I have a Win32 solution which is set up for OpenGL and works well, but I want to write a function which uses OpenCV functions. I set up the OpenCV configuration for both x86 and x64. I commented out the OpenCV function and, just to test the correctness of the configuration, ran it; but when I want to run it on x64 I am faced with the errors below:

        Error 1 error C2065: 'GWL_HINSTANCE' : undeclared identifier D:\matrix\matrixProjection\src\ControllerMain.cpp 35 1 matrixProjection
        Error 2 error C2664: 'CreateDialogParamW' : cannot convert parameter 4 from 'BOOL (__cdecl *)(HWND,UINT,WPARAM,LPARAM)' to 'DLGPROC' D:\matrix\matrixProjection\src\DialogWindow.cpp 47 1 matrixProjection

    Error 2 points to this line of code:

        HWND DialogWindow::create()
        {
            /*-->this line*/ handle = ::CreateDialogParam(instance, MAKEINTRESOURCE(id), parentHandle, Win::dialogProcedure, (LPARAM)controller);
            return handle;
        }

    In the Debug Win32 configuration, however, it runs. I used opengl32 in my project; could that be the cause? Is there an x64 version of OpenGL? I know that there is something needed in x64 mode which my solution cannot handle, but I googled a lot about it and did not find any solution. How can I solve this?
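
    These two errors are Win32/x64 portability issues rather than OpenGL or OpenCV ones (a reading of the compiler messages, not a tested fix): GWL_HINSTANCE is not defined in 64-bit builds, and the dialog procedure must have the exact DLGPROC signature. A sketch of the portable forms:

        // 64-bit safe: GetWindowLongPtr with GWLP_HINSTANCE replaces
        // GetWindowLong with GWL_HINSTANCE.
        HINSTANCE instance =
            (HINSTANCE)::GetWindowLongPtr(parentHandle, GWLP_HINSTANCE);

        // DLGPROC returns INT_PTR and is CALLBACK (__stdcall), not BOOL __cdecl.
        INT_PTR CALLBACK dialogProcedure(HWND dlg, UINT msg, WPARAM wp, LPARAM lp);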

    Read the article

  • OpenGL Mipmapping: how does OpenGL decide on map level?

    - by Droozle
    Hi, I am having trouble implementing mipmapping in OpenGL. I am using openFrameworks and have modified the ofTexture class to support the creation and rendering of mipmaps. The following code is the original texture creation code from the class (slightly modified for clarity):

        glEnable(texData.textureTarget);
        glBindTexture(texData.textureTarget, (GLuint)texData.textureID);
        glTexSubImage2D(texData.textureTarget, 0, 0, 0, w, h, texData.glType, texData.pixelType, data);
        glTexParameteri(texData.textureTarget, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(texData.textureTarget, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glDisable(texData.textureTarget);

    This is my version with mipmap support:

        glEnable(texData.textureTarget);
        glBindTexture(texData.textureTarget, (GLuint)texData.textureID);
        gluBuild2DMipmaps(texData.textureTarget, texData.glTypeInternal, w, h, texData.glType, texData.pixelType, data);
        glTexParameteri(texData.textureTarget, GL_TEXTURE_MAG_FILTER, GL_LINEAR_MIPMAP_LINEAR);
        glTexParameteri(texData.textureTarget, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
        glDisable(texData.textureTarget);

    The code does not generate errors (gluBuild2DMipmaps returns 0) and the textures are rendered without problems. However, I do not see any difference. The scene I render consists of "flat, square tiles" at z = 0; it's basically a 2D scene. I zoom in and out by using glScale() before drawing the tiles. When I zoom out, the pixels of the tile textures start to "dance", indicating (as far as I can tell) unfiltered texture look-up. See http://www.youtube.com/watch?v=b_As2Np3m8A at 25s. My question is: since I do not move the camera position, but only use scaling of the whole scene, does this mean OpenGL cannot decide on the appropriate mipmap level and uses the full texture size (level 0)? Paul
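
    Two observations that may help (hedged readings of the listing, not a verified diagnosis): mipmap level selection is computed from screen-space texture-coordinate derivatives, so a glScale() zoom changes the chosen level exactly as moving the camera would; and GL_TEXTURE_MAG_FILTER only accepts GL_NEAREST or GL_LINEAR, so the first glTexParameteri call in the mipmapped version is invalid. Also, if texData.textureTarget is GL_TEXTURE_RECTANGLE_ARB (a common openFrameworks default of that era, toggled by ofDisableArbTex), mipmaps are not supported at all. The intended parameters would look like:

        // Mipmaps require the GL_TEXTURE_2D target; rectangle textures have no mip chain.
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);               // mag filter: no mip modes allowed
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR); // trilinear minification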

    Read the article
