Search Results

Search found 3627 results on 146 pages for 'opengl es 1 1'.


  • How can I support scrolling when using batched rendering for my tiles?

    - by dardanel
    I have a 100×75 tile map, and the tiles are 32×32 pixels. I want to use batching for performance, but I can't figure out how, because my game needs scrolling and every frame I draw 22×16 tiles (my screen is 20×16 tiles). I thought about rebatching the tiles every frame. Is that good, or does anyone have a suggestion? Edit, to clarify: I want to use occlusion culling and batching at the same time, drawing only the visible area and batching it together. But there is something I couldn't figure out. When the screen scrolls via a translate matrix and one row becomes invisible, I bind a new row and batch everything again, and every rebatched object needs to be buffered again. So I would batch the tiles and upload them to the VBO every time a row becomes invisible, as in the sketch below. I don't know whether this approach is efficient. That is my question, and I am open to any suggestions.
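
    (A minimal C++ sketch of the rebuild-on-scroll idea described above; the map layout, window size, and the tileUV() helper are assumptions for illustration, not the poster's code.)

        #include <vector>
        #include <GL/gl.h>

        // Hypothetical helper: looks up the atlas UV rectangle for a tile id.
        void tileUV(int tile, float& u0, float& v0, float& u1, float& v1);

        // Rebuild the interleaved (x,y,u,v) data for the visible 22x16 window
        // and re-upload it into a pre-allocated VBO whenever a row scrolls in.
        void uploadVisibleTiles(GLuint vbo, const int* map, int mapW,
                                int firstCol, int firstRow)
        {
            std::vector<GLfloat> verts;
            verts.reserve(22 * 16 * 6 * 4);              // 6 verts, 4 floats each
            for (int row = 0; row < 16; ++row) {
                for (int col = 0; col < 22; ++col) {
                    int tile = map[(firstRow + row) * mapW + (firstCol + col)];
                    float x = col * 32.0f, y = row * 32.0f;
                    float u0, v0, u1, v1;
                    tileUV(tile, u0, v0, u1, v1);
                    const GLfloat quad[] = {             // two triangles per tile
                        x,        y,        u0, v0,
                        x + 32.f, y,        u1, v0,
                        x + 32.f, y + 32.f, u1, v1,
                        x,        y,        u0, v0,
                        x + 32.f, y + 32.f, u1, v1,
                        x,        y + 32.f, u0, v1,
                    };
                    verts.insert(verts.end(), quad, quad + 24);
                }
            }
            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            // Orphan the old storage, then upload the new window in one call;
            // this avoids stalling if the GPU is still using the previous data.
            glBufferData(GL_ARRAY_BUFFER, verts.size() * sizeof(GLfloat),
                         nullptr, GL_DYNAMIC_DRAW);
            glBufferSubData(GL_ARRAY_BUFFER, 0,
                            verts.size() * sizeof(GLfloat), verts.data());
        }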

    Read the article

  • Spherical harmonics lighting interpolation

    - by TravisG
    I want to use hardware filtering to smooth out colors when I'm accessing texels at coordinates that are not directly at a texel center, the catch being that the texels store 2 bands of spherical harmonics coefficients (= 4 coefficients), not RGBA intensity values. Can I just use hardware filtering like that (GL_LINEAR, with and without mipmapping) without any further considerations? In other terms: if I were to first convert the coefficients back to intensity representations, then manually interpolate between two intensities, would the resulting intensity be the same as if I interpolated between the coefficient vectors directly and then converted the interpolated result to intensities?
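
    (A short worked note, my own reasoning rather than anything from the post: reconstruction from SH coefficients is linear in the coefficients, so the two orders of operation agree. In LaTeX:)

        I(\omega) = \sum_i c_i \, Y_i(\omega)

        \sum_i \big( (1-t)\,a_i + t\,b_i \big) \, Y_i(\omega)
            = (1-t) \sum_i a_i \, Y_i(\omega) + t \sum_i b_i \, Y_i(\omega)
            = (1-t)\, I_a(\omega) + t\, I_b(\omega)

    So linear filtering of the coefficient texture is equivalent to filtering the reconstructed intensities, provided nothing non-linear (clamping, gamma, tone mapping) happens between storage and reconstruction; mipmap averaging is linear too, so the same argument applies.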

    Read the article

  • How many VBOs should I use and should I keep a copy of their data?

    - by CSharpie
    First of all, I am sorry if my question is too broad. I am developing a tile-based game and switched from those gl.Begin calls to using VBOs. This is kind of working already; I managed to render a hexagonal polygon with a simple shader applied. What I am not sure about is how to implement the "whole" tile concept. Concretely, the questions are: Is it better to create one VBO for a single tile and render it n times at different positions (see the sketch below), or to render one huge VBO that represents the whole "world"? Depending on the answer, what is the best way to draw a line grid: overlay with the same VBO using the respective polygon mode, or is there a way to let the shader do this? And how would frustum culling or mouse picking work then; do I need to keep the VBO data in memory?
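
    (A hedged sketch of the first option, one tile VBO drawn n times; hexVao, offsetLoc, the Tile type, and the u_offset uniform are illustrative names, not from the post.)

        // Bind the single hex mesh once, then issue one draw per tile with a
        // per-tile world offset fed through a uniform.
        glBindVertexArray(hexVao);
        for (const Tile& t : tiles) {
            glUniform2f(offsetLoc, t.worldX, t.worldY);  // position this instance
            glDrawArrays(GL_TRIANGLE_FAN, 0, hexVertexCount);
        }
        glBindVertexArray(0);

    The one-huge-VBO alternative trades these per-tile draw calls for a single draw, at the cost of rebuilding the buffer whenever tiles change.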

    Read the article

  • Safe Zone Implementation in Asteroids

    - by Moaz
    I would like to implement a safe zone for asteroids, so that when the ship gets destroyed it doesn't respawn unless the area is clear of asteroids. I tried checking the distance between each asteroid and the ship, and if it is above a threshold, setting a flag on the ship marking the zone safe, but sometimes it works and sometimes it doesn't:

        for (list<Asteroid>::iterator itr_astroid = asteroids.begin();
             itr_astroid != asteroids.end(); )
        {
            if (currentShip.m_state == Ship::Ship_Dead)
            {
                float distance = itr_astroid->getCenter().distance(
                    Vec2f(getWindowWidth()/2, getWindowHeight()/2));
                if (distance > 200)
                {
                    currentShip.m_saveField = true;
                    break;
                }
                else
                {
                    currentShip.m_saveField = false;
                    itr_astroid++;
                }
            }
            else
            {
                itr_astroid++;
            }
        }
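
    (A hedged reading of the bug, not the poster's code: the loop declares the zone safe and breaks as soon as it finds one asteroid that is far away, but the zone is only safe when every asteroid is beyond the threshold. Inverting the test fixes that.)

        // Safe only if NO asteroid is within the threshold of the spawn point.
        bool safe = true;
        Vec2f center(getWindowWidth() / 2.0f, getWindowHeight() / 2.0f);
        for (list<Asteroid>::iterator it = asteroids.begin();
             it != asteroids.end(); ++it)
        {
            if (it->getCenter().distance(center) <= 200.0f) {
                safe = false;   // one close asteroid makes the zone unsafe
                break;
            }
        }
        currentShip.m_saveField = safe;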

    Read the article

  • Having trouble understanding the NIF model file format?

    - by NoobScratcher
    I'm attempting to develop a third-party application to make it easy to import 3D model parts into my mod for Skyrim. The plan was to have a file viewer and a preview window for the NIF model, but I don't know what the NIF file format actually is, where to get the vertex data from it, or the whole nine yards of parsing such a file in detail, so I'm at a loss about what to do. I'm very good at C++ but not at these super-overcomplicated file formats; I'd much prefer .obj over NIF. The file format specification is here -- http://niftools.sourceforge.net/doc/nif/index.html If someone could explain the file format in a natural and simple way, along with the exact parsing needed to create the 3D model in the frustum and how you figured that out, I'd be happy to know. I use Cygwin, Notepad++, and Windows 7 (Win32).

    Read the article

  • Visitor-pattern vs inheritance for rendering

    - by akaltar
    I have a game engine that currently uses inheritance to provide a generic rendering interface:

        class renderable {
        public:
            void render();
        };

    Each class calls the gl_* functions itself, which makes the code hard to optimize and makes features like a configurable rendering quality hard to implement:

        class sphere : public renderable {
        public:
            void render() { glDrawElements(...); }
        };

    I was thinking about implementing a system where a Renderer class renders my objects:

        class sphere {
        public:
            void render(renderer* r) { r->renderme(*this); }
        };

        class renderer {
        public:
            void renderme(sphere& s) {
                // magically get render resources here
                // magically render a sphere here
            }
        };

    My main problem is where I should store the VBOs, and where I should create them, when using this method. Should I even use this approach, stick to the current one, or do something else? PS: I already asked this question on SO but got no proper answers.
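
    (One hedged answer sketch for where the VBOs could live, using helper names of my own invention rather than anything from the post: the renderer owns all GPU buffers and creates them lazily the first time it renders an object.)

        #include <map>

        class sphere;   // the poster's mesh class, as defined above

        class renderer {
        public:
            void renderme(sphere& s) {
                GLuint& vbo = m_vbos[&s];        // value-initialized to 0
                if (vbo == 0)
                    vbo = uploadToVbo(s);        // create GPU resources once
                glBindBuffer(GL_ARRAY_BUFFER, vbo);
                drawSphere();                    // set pointers and draw
            }
        private:
            GLuint uploadToVbo(const sphere& s); // hypothetical upload helper
            void drawSphere();                   // hypothetical draw helper
            std::map<const sphere*, GLuint> m_vbos; // renderer owns the buffers
        };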

    Read the article

  • glutPostRedisplay() does not update display

    - by A D
    I am currently drawing a rectangle to the screen and would like to move it using the arrow keys. However, when I press an arrow key, the vertex data changes but the display does not refresh to reflect these changes, even though I am calling glutPostRedisplay(). Is there something else that I must do? My code:

        #include <GL/glew.h>
        #include <GL/freeglut.h>
        #include <GL/freeglut_ext.h>
        #include <iostream>
        #include "Shaders.h"

        using namespace std;

        const int NUM_VERTICES = 6;
        const GLfloat POS_Y = -0.1;
        const GLfloat NEG_Y = -0.01;

        struct Vertex
        {
            GLfloat x;
            GLfloat y;
            Vertex() : x(0), y(0) {}
            Vertex(GLfloat givenX, GLfloat givenY) : x(givenX), y(givenY) {}
        };

        Vertex left_paddle[NUM_VERTICES];

        void init()
        {
            glClearColor(1.0f, 1.0f, 1.0f, 0.0f);

            left_paddle[0] = Vertex(-0.95f, 0.95f);
            left_paddle[1] = Vertex(-0.95f, 0.0f);
            left_paddle[2] = Vertex(-0.85f, 0.95f);
            left_paddle[3] = Vertex(-0.85f, 0.95f);
            left_paddle[4] = Vertex(-0.95f, 0.0f);
            left_paddle[5] = Vertex(-0.85f, 0.0f);

            GLuint vao;
            glGenVertexArrays(1, &vao);
            glBindVertexArray(vao);

            GLuint buffer;
            glGenBuffers(1, &buffer);
            glBindBuffer(GL_ARRAY_BUFFER, buffer);
            glBufferData(GL_ARRAY_BUFFER, sizeof(left_paddle), NULL, GL_STATIC_DRAW);

            GLuint program = init_shaders("vshader.glsl", "fshader.glsl");
            glUseProgram(program);

            GLuint loc = glGetAttribLocation(program, "vPosition");
            glEnableVertexAttribArray(loc);
            glVertexAttribPointer(loc, 2, GL_FLOAT, GL_FALSE, 0, 0);
            glBindVertexArray(vao);
        }

        void movePaddle(Vertex* array, GLfloat change)
        {
            for (int i = 0; i < NUM_VERTICES; i++)
            {
                array[i].y = array[i].y + change;
            }
            glutPostRedisplay();
        }

        void special(int key, int x, int y)
        {
            switch (key)
            {
            case GLUT_KEY_DOWN:
                movePaddle(left_paddle, NEG_Y);
                break;
            }
        }

        void display()
        {
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            glDrawArrays(GL_TRIANGLES, 0, 6);
            glutSwapBuffers();
        }

        int main(int argc, char** argv)
        {
            glutInit(&argc, argv);
            glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
            glutInitWindowSize(500, 500);
            glutCreateWindow("Rectangle");
            glewInit();
            init();
            glutDisplayFunc(display);
            glutSpecialFunc(special);
            glutMainLoop();
            return 0;
        }
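
    (A hedged diagnosis, mine rather than anything from the post: the buffer is allocated with NULL data and never filled, and movePaddle only changes the CPU-side array, so every redisplay redraws the same empty buffer. One possible fix is to upload the array whenever it changes; this assumes 'buffer' is made accessible, e.g. moved to file scope.)

        // In init(), upload the initial data instead of NULL:
        glBufferData(GL_ARRAY_BUFFER, sizeof(left_paddle), left_paddle,
                     GL_DYNAMIC_DRAW);

        // In movePaddle(), re-upload after modifying the CPU-side array:
        glBindBuffer(GL_ARRAY_BUFFER, buffer);
        glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(left_paddle), left_paddle);
        glutPostRedisplay();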

    Read the article

  • glTexImage2D not loading my data

    - by Clyde
    Can anyone suggest why this code doesn't work? When I draw using this texture, all I get is black. If I use GLUtils.texImage2D() to load a PNG file, it works correctly.

        ByteBuffer bb = ByteBuffer.allocateDirect(128 * 128 * 4)
                                  .order(ByteOrder.nativeOrder());
        bb.position(0);
        for (int row = 0; row != 128; row++) {
            for (int i = 0; i != 128; i++) {
                bb.put((byte) 0x80);
                bb.put((byte) 0xFF);
                bb.put((byte) 0xFF);
                bb.put((byte) i);
            }
        }

        int[] handle = new int[1];
        GLES20.glEnable(GLES20.GL_TEXTURE_2D);
        GLES20.glGenTextures(1, handle, 0);
        DrawAdapter.checkGlError("Gen textures");
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, handle[0]);
        DrawAdapter.checkGlError("Bind textures");
        bb.position(0);
        GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, 128, 128,
                            0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, bb);
        DrawAdapter.checkGlError("glTexImage2D");
        return handle[0];
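
    (A likely cause, offered as my diagnosis rather than anything confirmed in the post: no sampler state is set, so GL_TEXTURE_MIN_FILTER keeps its default of GL_NEAREST_MIPMAP_LINEAR, and since no mipmaps were uploaded the texture is incomplete and samples as black. As an aside, glEnable(GL_TEXTURE_2D) is a fixed-function call that does not apply to ES 2.0. The same constants exist under the GLES20.* names in Java; in desktop GL syntax the fix looks like:)

        // Make the texture complete by choosing a non-mipmap min filter
        // right after binding it (or call glGenerateMipmap instead):
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);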

    Read the article

  • Enemy behavior: changing direction at a boundary

    - by BadSniper
    I'm making a space-shooter kind of game; the logic is to reflect an enemy when it hits the boundary. With my logic, the enemy sometimes flickers instead of changing velocity: it gets trapped at the boundary while the if checks keep firing. This is my velocity-changing code:

        if (this->enemyPos.x > 14)
        {
            this->enemyVel.x = -this->enemyVel.x;
        }
        if (this->enemyPos.x < -14)
        {
            this->enemyVel.x = -this->enemyVel.x;
        }

    How can I get around this? The enemy goes out of bounds, doesn't know where to go, and only comes back into the field after a while. I know what the problem is; I just don't know how to get around it.
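
    (A common fix, my suggestion rather than the poster's code: only reflect when the enemy is outside the boundary AND still moving outward, so the velocity cannot be flipped back and forth on consecutive frames while the enemy sits past the edge.)

        if (enemyPos.x > 14.0f && enemyVel.x > 0.0f)
            enemyVel.x = -enemyVel.x;   // past right edge, still heading right
        if (enemyPos.x < -14.0f && enemyVel.x < 0.0f)
            enemyVel.x = -enemyVel.x;   // past left edge, still heading left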

    Read the article

  • Animate sprite/texture position with VBO

    - by Dono
    I'm currently working on a renderer for my projects and I want to animate a sprite on screen. I've got a spritesheet, but I don't know the best way to update the texture coordinates of each vertex:

        - Update the vertices, then update the vertex buffer. (Heavy?)
        - Send the texture coordinates to the shader. (Is that possible?)
        - Don't use a VBO?

    By the way, I've got this structure: an Object class with Geometry (faces + vertices + buffer) and Material (shader + other stuff) properties. Is that a good structure? Thanks!
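
    (A hedged sketch of the second option, which is a common spritesheet technique; u_frameOffset and the layout variables are assumed names, and in the vertex shader the static cell UV would be offset as v_uv = a_uv + u_frameOffset.)

        // Pick the current frame from elapsed time and upload its UV offset;
        // the quad's UVs stay fixed to the first cell of the sheet.
        int   frame = int(timeSeconds * framesPerSecond) % frameCount;
        float u = (frame % columns) * frameWidthUV;    // cell column -> U offset
        float v = (frame / columns) * frameHeightUV;   // cell row    -> V offset
        glUniform2f(glGetUniformLocation(program, "u_frameOffset"), u, v);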

    Read the article

  • Proper way to encapsulate a Shader into different modules

    - by y7haar
    I am planning to build a shader system that can be accessed through different components/modules in C++. Each component has its own functionality, like transform-related stuff (handling the MVP matrix, ...), a texture handler, light calculation, etc. Here's an example: I would like to display an object which has a texture and a toon-shading material applied, and it should be movable. So I could write ONE shading program that handles all three functionalities, accessed through three different components (texture handler, toon shading, transform). This means I have to take care of feeding a GLSL shader with different uniforms/attributes, which implies knowing all the uniform and attribute locations the GLSL shader owns, and also providing different algorithms to calculate the value of each input variable. Similar functions would be grouped together in one component. One possible way would be to wrap each shader in a definition file of its own, written in JSON/XML, and parse that file in C++ to get all the input members, then create and compile the resulting GLSL. But maybe there is another way that is not so complex? So I'm searching for a way to build a system like that, but I'm not sure yet which approach is best.
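
    (A hedged sketch of the component idea, with names of my own invention: each module pushes its own slice of the shader interface into the linked program, so the program itself stays a plain GLuint and the modules stay independent.)

        // Each module owns one slice of the shader's inputs.
        struct ShaderComponent {
            virtual ~ShaderComponent() = default;
            virtual void apply(GLuint program) = 0;   // set this module's uniforms
        };

        struct TransformComponent : ShaderComponent {
            GLfloat mvp[16];                          // column-major MVP matrix
            void apply(GLuint program) override {
                GLint loc = glGetUniformLocation(program, "u_mvp"); // assumed name
                glUniformMatrix4fv(loc, 1, GL_FALSE, mvp);
            }
        };
        // At draw time: for each component c attached to the object,
        // c->apply(program), then issue the draw call.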

    Read the article

  • Do I lose/gain performance for discarding pixels even if I don't use depth testing?

    - by Gajoo
    When I first searched for the discard instruction, I found experts saying that using discard causes a performance hit: discarding pixels breaks the GPU's ability to use the z-buffer early, because the GPU has to run the fragment shader for both overlapping objects to check whether the one nearer to the camera gets discarded. For the 2D game I'm currently working on, I've disabled both depth test and depth write; I draw all objects sorted by their depth and that's all, so the GPU has no fancy work to do. Now I'm wondering: is it still bad if I discard pixels in my fragment shader?

    Read the article

  • FBO rendering gives different results on Galaxy S2 and S3

    - by BruceJones
    I'm working on a pong game and have recently set up FBO rendering so that I can apply some post-processing shaders. It proceeds like so:

        - Bind texture A to the framebuffer
        - Draw the balls
        - Bind texture B to the framebuffer
        - Draw texture A using the fade shader on a fullscreen quad
        - Bind the screen to the framebuffer
        - Draw texture B using the normal textured-quad shader

    Neither texture A nor B is cleared at any point; this way the balls leave trails on screen. The fade shader is below.

        private final String fragmentShaderCode =
              "precision highp float;"
            + "uniform sampler2D u_Texture;"
            + "varying vec2 v_TexCoordinate;"
            + "vec4 color;"
            + "void main(void)"
            + "{"
            + "    color = texture2D(u_Texture, v_TexCoordinate);"
            + "    color.a *= 0.8;"
            + "    gl_FragColor = color;"
            + "}";

    This works fine on the Samsung Galaxy S3/Note 2 but produces a strange effect on the Galaxy S2/Note 1 (the original post includes screenshots comparing the output of the two device families). Can anyone explain the difference?

    Read the article

  • What is wrong with my specular Phong shading?

    - by Thijser
    I'm sorry if this should be placed on Stack Overflow instead, but seeing as this is graphics-related I was hoping you could help me. I'm attempting to write a Phong shader and am currently working on the specular term. I came across the formula base * pow(dot(V, R), shininess) and attempted to implement it (V is the position of the viewer and R the reflection vector). This gave the following result and code (the original post includes a screenshot of the result):

        Vec3Df phongSpecular(const Vec3Df& vertexPos, Vec3Df& normal,
                             const Vec3Df& lightPos, const Vec3Df& cameraPos,
                             unsigned int index)
        {
            Vec3Df relativeLightPos = (lightPos - vertexPos);
            relativeLightPos.normalize();
            Vec3Df relativeCameraPos = (cameraPos - vertexPos);
            relativeCameraPos.normalize();
            int DotOfNormalAndLight = Vec3Df::dotProduct(normal, relativeLightPos);
            Vec3Df reflective = (relativeLightPos - (2 * DotOfNormalAndLight * normal)) * -1;
            reflective.normalize();
            float phongyness = Vec3Df::dotProduct(reflective, relativeCameraPos);
            if (phongyness < 0) {
                phongyness = 0;
            }
            float shininess = Shininess[index];
            float speculair = powf(phongyness, shininess);
            return Ks[index] * speculair;
        }

    I'm looking for something more like this: (see the reference image in the original post).
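
    (A likely culprit, offered as my diagnosis: DotOfNormalAndLight is declared as an int, which truncates every dot product in (-1, 1) to 0 and collapses the reflection vector. Keeping it as a float preserves the specular term; the reflection expression itself works out to the usual R = 2(N·L)N - L.)

        float dotNL = Vec3Df::dotProduct(normal, relativeLightPos);
        Vec3Df reflective = (relativeLightPos - (2.0f * dotNL * normal)) * -1.0f;
        reflective.normalize();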

    Read the article

  • Common light map practices

    - by M. Utku ALTINKAYA
    My scene consists of individual meshes. At the moment each mesh has its own associated light-map texture, and I was able to implement the light mapping using these many small textures. 1) Of course I want to create an atlas, but how do you split atlases into pages? I mean, do you group the light maps of objects that are close to each other, and load light maps on the fly if the scene is expected to be big? 2) The 3D authoring software provides automatic UV coordinates for each mesh in the scene, but there are empty areas in the texel space, so if I scale the texture polygons, the texel density of each face will not match the other meshes; if I create the atlas like that, there will be varying light-map resolution. How do you solve this: just leave it as it is, or ignore resolution? Actually these questions also apply to other non-tiled maps.

    Read the article

  • Problem with gluOrtho2D()

    - by Shashwat
    I was trying to understand the gluOrtho2D function. I have drawn four lines originating from the center and reaching the four corners of the screen; you can follow the code below. osize is a variable used to set the parameters of gluOrtho2D, creating a view of size 2 * osize. It works fine when osize is 1: the lines reach the corners. But as I increase the value of osize, the length of the lines decreases (the cross becomes smaller and does not cover the whole screen), even though I think it should still reach the corners.

        void display()
        {
            glClear(GL_COLOR_BUFFER_BIT);
            //glViewport(0, 0, 100, 100);
            glMatrixMode(GL_PROJECTION);
            float osize = 1.2;
            //glOrtho(-osize*1.0, osize*1.0, osize*1.0, -osize*1.0, -1.0, 1.0);
            gluOrtho2D(-osize*1.0, osize*1.0, osize*1.0, -osize*1.0);
            glMatrixMode(GL_MODELVIEW);

            glBegin(GL_LINES);
            glColor3f(0.0, 0.0, 1.0);
            glVertex2f(0.0, 0.0);
            glVertex2f(-osize*1.0, -osize*1.0);
            glVertex2f(0.0, 0.0);
            glVertex2f(-osize*1.0, osize*1.0);
            glVertex2f(0.0, 0.0);
            glVertex2f(osize*1.0, -osize*1.0);
            glVertex2f(0.0, 0.0);
            glVertex2f(osize*1.0, osize*1.0);
            glEnd();

            glutSwapBuffers(); // includes glFlush()
        }

    What is the problem?
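
    (A probable cause, offered as my diagnosis: gluOrtho2D multiplies onto the current projection matrix rather than replacing it, so every call to display() compounds the previous scale. With osize = 1 the resulting matrix happens to undo itself every second frame, which hides the bug; with osize = 1.2 the view shrinks by 1/1.2 on each redraw. Resetting the matrix first gives a consistent mapping.)

        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();                       // start from identity each frame
        gluOrtho2D(-osize, osize, osize, -osize);
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();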

    Read the article

  • What is the correct way to use glTexCoordPointer?

    - by RubyKing
    I'm trying to work out how to use the glTexCoordPointer function. The man page states that I must set a pointer to the first element of the array that holds the texture coordinates. Here is my array:

        static const GLfloat GUIVertices[] = {
            // FIRST QUAD
            //  x      y      z     w     X     Y
             1.0f,  1.0f,  0.0f, 1.0f, 1.0f, 0.0f,
            -1.0f,  1.0f,  0.0f, 1.0f, 0.0f, 0.0f,
            -1.0f,  0.94f, 0.0f, 1.0f, 0.0f, 1.0f,
             1.0f,  0.94f, 0.0f, 1.0f, 1.0f, 1.0f,

            // SECOND QUAD
            //  x      y      z     w     X     Y
             1.0f, -1.0f,  0.0f, 1.0f, 1.0f, 0.0f,
            -1.0f, -1.0f,  0.0f, 1.0f, 0.0f, 0.0f,
            -1.0f, -0.94f, 0.0f, 1.0f, 0.0f, 1.0f,
             1.0f, -0.94f, 0.0f, 1.0f, 1.0f, 1.0f,
        };

    But how do I set the pointer correctly for the fifth element of the second quad's first row? I was thinking something like this:

        glTexCoordPointer(1, GL_FLOAT, 6,
                          reinterpret_cast<const GLvoid*>(29 * sizeof(float)));
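
    (A hedged correction based on my reading of the layout: texture coordinates are two floats per vertex, the stride argument is in bytes, and each row's X,Y pair starts at float offset 4 within its 6-float vertex. The second quad's first row begins at float index 24, so its texture coordinates start at index 28, not 29.)

        glTexCoordPointer(2, GL_FLOAT, 6 * sizeof(GLfloat),
                          reinterpret_cast<const GLvoid*>(28 * sizeof(GLfloat)));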

    Read the article

  • Really weird GL behaviour, uniform not "hitting" the proper mesh? (LibGDX)

    - by HaMMeReD
    OK, I've got some code where you select blocks on a grid. The selection works: I can make a block rise when selected, and the correct one shows. I also set a color that I use in the shader. However, when I try to change the color before rendering the geometry, the last-rendered geometry in the sequence is the one drawn light. To debug the logic I decided to both move the selected block up and make it white, in which case one block moves up and a different block becomes white. I checked all my logic: it knows which block is selected, raises it in the correct place, and renders it correctly, and when there is only one block it works properly. There is a video of the bug in action in the original post; note how the highlighted and elevated blocks are not the same block. My renderer code for the items being drawn:

        public void render(Renderer renderer) {
            mGrid.render(renderer, mGameState);
            for (Entity e : mGameEntities) {
                UnitTypes ut = UnitTypes.valueOf((String) e.getObject(D.UNIT_TYPE.ordinal()));
                if (ut == UnitTypes.Soldier) {
                    renderer.testShader.begin();
                    renderer.testShader.setUniformMatrix("u_mvpMatrix", mEntityMatrix);
                    renderer.texture_soldier.bind(0);
                    Vector2 pos = (Vector2) e.getObject(D.COORDS.ordinal());
                    mEntityMatrix.set(renderer.mCamera.combined);
                    if (mSelectedEntities.contains(e)) {
                        mEntityMatrix.translate(pos.x, 1f, pos.y);
                        renderer.testShader.setUniformf("v_color", 0.5f, 0.5f, 0.5f, 1f);
                    } else {
                        mEntityMatrix.translate(pos.x, 0f, pos.y);
                        renderer.testShader.setUniformf("v_color", 1f, 1f, 1f, 1f);
                    }
                    mEntityMatrix.scale(0.2f, 0.2f, 0.2f);
                    renderer.model_soldier.render(renderer.testShader, GL20.GL_TRIANGLES);
                    renderer.testShader.end();
                } else if (ut == UnitTypes.Enemy_Infiltrator) {
                    renderer.testShader.begin();
                    renderer.testShader.setUniformMatrix("u_mvpMatrix", mEntityMatrix);
                    renderer.testShader.setUniformf("v_color", 1.0f, 1, 1, 1.0f);
                    renderer.texture_enemy_infiltrator.bind(0);
                    Vector2 pos = (Vector2) e.getObject(D.COORDS.ordinal());
                    mEntityMatrix.set(renderer.mCamera.combined);
                    mEntityMatrix.translate(pos.x, 0f, pos.y);
                    mEntityMatrix.scale(0.2f, 0.2f, 0.2f);
                    renderer.model_enemy_infiltrator.render(renderer.testShader, GL20.GL_TRIANGLES);
                    renderer.testShader.end();
                }
            }
        }

    Read the article

  • How to configure Bullet for LookAt?

    - by AllCoder
    I'm having problems positioning Bullet objects. I am doing:

        ToolVec3 origin = ToolVec3(obj_posx, obj_posy, obj_posz);
        ToolVec3 vmod = ToolVec3(object_sizex / 2.0f,
                                 object_sizey / 2.0f,
                                 object_sizez / 2.0f);

        btTransform shapeTransform = btTransform::getIdentity();
        shapeTransform.setOrigin(btVector3(origin.x + vmod.x,
                                           origin.y + vmod.y,
                                           origin.z + vmod.z));
        btDefaultMotionState* myMotionState = new btDefaultMotionState(shapeTransform);
        btRigidBody::btRigidBodyConstructionInfo rbInfo(mass, myMotionState,
                                                        m_collisionShapes[2],
                                                        localInertia);
        btRigidBody* body = new btRigidBody(rbInfo);

    I then do:

        btCollisionObject* colObj = m_dynamicsWorld->getCollisionObjectArray()[i];
        btRigidBody* body = btRigidBody::upcast(colObj);
        if (body && body->getMotionState())
        {
            btDefaultMotionState* myMotionState = (btDefaultMotionState*) body->getMotionState();
            myMotionState->m_graphicsWorldTrans.getOpenGLMatrix(m);
        }
        else
        {
            colObj->getWorldTransform().getOpenGLMatrix(m);
        }

    After obtaining the matrix m, I use it as the model matrix. I am observing a few things:

        - I must add a weird "size / 2" offset to the object's position to have it drawn normally.
        - I have the following "up" look-at vector defined: (0.0f, -1.0f, 0.0f). Basically Y grows up and Z grows forward (into the monitor), BUT X grows LEFT; I think there is some conflict with the X direction.
        - I cannot obtain consistent positioning with the world set up like this.

    How do I configure this in Bullet, and why the weird + size/2 requirement?
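
    (A hedged explanation of the size/2 offset, based on how Bullet defines box shapes rather than on anything in the post: btBoxShape takes half extents and is centered on its local origin, so a graphics mesh modeled with a corner at the origin ends up displaced by half its size relative to the physics body. Centering the mesh, or baking the offset into the model matrix, removes the correction. The X-grows-left behaviour is typically a handedness flip introduced by the chosen up vector in the view matrix, not something Bullet does.)

        // Bullet boxes are defined by HALF extents around the shape's center:
        btVector3 halfExtents(object_sizex * 0.5f,
                              object_sizey * 0.5f,
                              object_sizez * 0.5f);
        btBoxShape* shape = new btBoxShape(halfExtents);

        // With the graphics mesh modeled around its center too, the body can
        // be placed at the object's position directly, with no half-size offset:
        btTransform t = btTransform::getIdentity();
        t.setOrigin(btVector3(obj_posx, obj_posy, obj_posz));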

    Read the article

  • What is the correct way to reset and load new data into GL_ARRAY_BUFFER?

    - by Geto
    I am using an array buffer for color data. If I want to load different colors for the current mesh in real time, what is the correct way to do it? At the moment I am doing:

        glBindVertexArray(vao);
        glBindBuffer(GL_ARRAY_BUFFER, colorBuffer);
        glBufferData(GL_ARRAY_BUFFER, SIZE, colorsData, GL_STATIC_DRAW);
        glEnableVertexAttribArray(shader->attrib("color"));
        glVertexAttribPointer(shader->attrib("color"), 3, GL_FLOAT, GL_TRUE, 0, NULL);
        glBindBuffer(GL_ARRAY_BUFFER, 0);

    It works, but I am not sure whether this is a good, efficient way to do it. What happens to the previous data? Does it write on top of it? Do I need to call

        glDeleteBuffers(1, &colorBuffer);
        glGenBuffers(1, &colorBuffer);

    before transferring the new data into the buffer?
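
    (A hedged answer from how the API is specified, not from the post: glBufferData allocates a new data store on each call and the old store is released once the GPU is done with it, a pattern known as buffer orphaning, so no delete/gen pair is needed. For same-size updates, glBufferSubData reuses the existing store instead of reallocating, which is usually cheaper for frequent changes; newColorsData is an illustrative name.)

        glBindBuffer(GL_ARRAY_BUFFER, colorBuffer);
        // Same-size update into the existing store; creating the buffer with
        // GL_DYNAMIC_DRAW hints the driver that this will happen often.
        glBufferSubData(GL_ARRAY_BUFFER, 0, SIZE, newColorsData);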

    Read the article

  • Y Axis inverted on vertex output

    - by Yonathan Klijnsma
    I've got my project running, and somehow my vertex Y components are inverted: +10 on the Y axis goes down and -10 on the Y axis goes up. I can't find anything wrong with the initialization, and I am not doing any negative scaling in the view matrix. I've never had something like this happen before; does anyone have tips on what to look for? This is how I am sending vertices to the GPU (currently immediate mode):

        glVertex3f(x_pos_n, 10, z_pos);

    I am using Cg in the project, but even without shaders the Y axis seems to be inverted.
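
    (One common cause, offered purely as an assumption since the post doesn't show the projection setup: an orthographic or window-space projection built with the bottom and top arguments swapped flips Y. In fixed-function terms:)

        // glOrtho's 3rd and 4th arguments are bottom, then top.
        glOrtho(0.0, width, 0.0, height, -1.0, 1.0);   // +Y grows upward
        glOrtho(0.0, width, height, 0.0, -1.0, 1.0);   // +Y grows downward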

    Read the article

  • How can I change an object's look point?

    - by jques
    (I tried to attach an image, but the system does not give me permission; please look at http://www.rps.net/gunslinger/scrnshot/gunslinger33.jpg) I have two arms with two guns, and I want to rotate the arms with the mouse. For example, if I move the mouse to the left, the arms and guns should move as well. Since this is a hobby project, I'm comfortable asking: what should I do to achieve this? Some context: it's a perspective view, the guns point in the viewing direction, and a left click triggers the left gun. Feel free to change the title. Thanks

    Read the article

  • camera movement along with model

    - by noddy
    I am making a game in which a cube travels along a maze, with the goal of crossing the maze safely. I have two problems. First, the cube needs to move smoothly, as if traveling on a frictionless surface, and I need to do this from an event callback function; could someone help me achieve this? Second, I need to move the camera along with the cube; could someone point me to a good tutorial about positioning a camera relative to an object?
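
    (A minimal follow-camera sketch under assumed variable names, not from the post: keep the camera at a fixed offset behind and above the cube and aim at the cube each frame. For the smooth motion, the usual approach is to have the key callback only set a velocity, and integrate position with the frame time in the update step.)

        // cubeX/Y/Z are the cube's current position (assumed variables).
        gluLookAt(cubeX, cubeY + 5.0, cubeZ - 10.0,   // eye: behind and above
                  cubeX, cubeY,       cubeZ,          // target: the cube
                  0.0,   1.0,         0.0);           // world up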

    Read the article

  • GL_INVALID_OPERATION in glEnd

    - by Killrazor
    Hello, I'm having problems drawing a simple sprite. When I draw:

        void CSprite2D::render()
        {
            CHECKGL(glLoadIdentity());
            CHECKGL(glEnable(GL_TEXTURE_2D));
            CHECKGL(glEnable(GL_BLEND));
            CHECKGL(glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA));
            m_texture->bind();

            //CHECKGL(glPushMatrix());
            CHECKGL(glBegin(GL_TRIANGLE_STRIP));

            CHECKGL(glNormal3i(0, 0, 1));
            CHECKGL(glTexCoord2f(m_textureAreaStart.s, m_textureAreaStart.t)); // 0,0 by default
            CHECKGL(glVertex3i(m_position.x, m_position.y, 0));

            CHECKGL(glNormal3i(0, 0, 1));
            CHECKGL(glTexCoord2f(m_textureAreaEnd.s, m_textureAreaStart.t));   // 1,0 by default
            CHECKGL(glVertex3i(m_position.x + m_dimensions.x, m_position.y, 0));

            CHECKGL(glNormal3i(0, 0, 1));
            CHECKGL(glTexCoord2f(m_textureAreaEnd.s, m_textureAreaEnd.t));     // 1,1 by default
            CHECKGL(glVertex3i(m_position.x + m_dimensions.x, m_position.y + m_dimensions.y, 0));

            CHECKGL(glNormal3i(0, 0, 1));
            CHECKGL(glTexCoord2f(m_textureAreaStart.s, m_textureAreaEnd.t));   // 0,1 by default
            CHECKGL(glVertex3i(m_position.x, m_position.y + m_dimensions.y, 0));

            CHECKGL(glEnd());
            //CHECKGL(glPopMatrix());
            CHECKGL(glDisable(GL_BLEND));
        }

    I always get a GL_INVALID_OPERATION in glEnd(). I suspect the error is not really there, but I can't detect where it may be. The rendered output actually looks fine, but I want to solve this now rather than chase a subtle bug tomorrow. Any idea of what it could be?
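
    (A likely explanation, my diagnosis rather than anything confirmed in the post, assuming CHECKGL calls glGetError after each command: glGetError is itself illegal between glBegin and glEnd, so every check inside the pair raises GL_INVALID_OPERATION, which surfaces at the glEnd check even though the drawing commands are fine; that matches the correct-looking output. Leaving the inside of the pair unchecked avoids the false positive.)

        glBegin(GL_TRIANGLE_STRIP);   // no CHECKGL between glBegin and glEnd
        glNormal3i(0, 0, 1);
        glTexCoord2f(m_textureAreaStart.s, m_textureAreaStart.t);
        glVertex3i(m_position.x, m_position.y, 0);
        // ... the other three corners, also unchecked ...
        glEnd();
        CHECKGL(glDisable(GL_BLEND)); // resume error checking after the pair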

    Read the article
