Search Results

Search found 1704 results on 69 pages for 'm es'.


  • Why does this code sometimes return NaN?

    - by carrots
    This often returns NaN ("Not a Number") depending on input:

        #define PI 3.1415f

        GLfloat sineEaseIn(GLfloat ratio) {
            return 1.0f - cosf(ratio * (PI / 2.0f));
        }

    I tried making PI a few digits shorter to see if that would help. No dice. Then I thought it might be a datatype mismatch, but float and GLfloat seem to be equivalent:

        // gl.h
        typedef float GLfloat;

        // math.h
        extern float cosf( float );

    Is this a casting issue?
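
    For finite arguments cosf() never yields NaN, so if this expression goes NaN, the ratio coming in is almost certainly already NaN or infinite. A quick diagnostic sketch (not from the original post; isnan/isinf are standard <math.h> macros):

        #include <math.h>
        #include <stdio.h>

        GLfloat sineEaseInChecked(GLfloat ratio) {
            // cosf() only returns NaN for NaN or infinite arguments,
            // so if this fires, the caller is passing a bad ratio
            if (isnan(ratio) || isinf(ratio))
                fprintf(stderr, "sineEaseIn: bad ratio %f\n", ratio);
            return 1.0f - cosf(ratio * (PI / 2.0f));
        }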

    Read the article

  • How Can I Compress Texture in OpenGL on iPhone/iPad?

    - by nonamelive
    Hi, I'm making an iPad app which needs OpenGL to do a flip animation. I have a front image texture and a back image texture. Both textures are screenshots:

        // Capture an image of the screen
        UIGraphicsBeginImageContext(view.bounds.size);
        [view.layer renderInContext:UIGraphicsGetCurrentContext()];
        image = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();

        // Allocate some memory for the texture
        GLubyte *textureData = (GLubyte*)calloc(maxTextureSize*4, maxTextureSize);

        // Create a drawing context to draw image into texture memory
        CGContextRef textureContext = CGBitmapContextCreate(textureData,
            maxTextureSize, maxTextureSize, 8, maxTextureSize*4,
            CGImageGetColorSpace(image.CGImage), kCGImageAlphaPremultipliedLast);
        CGContextDrawImage(textureContext,
            CGRectMake(0, maxTextureSize-size.height, size.width, size.height),
            image.CGImage);
        CGContextRelease(textureContext);

        // ...done creating the texture data
        [EAGLContext setCurrentContext:context];
        glGenTextures(1, &textureToView);
        glBindTexture(GL_TEXTURE_2D, textureToView);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, maxTextureSize, maxTextureSize,
            0, GL_RGBA, GL_UNSIGNED_BYTE, textureData);

        // free texture data which is by now copied into the GL context
        free(textureData);

    Each texture takes up about 8MB of memory, which is unacceptable for an iPhone/iPad app. Could anyone tell me how I can compress the textures to reduce the memory? I'm a complete newbie to OpenGL. Any help would be appreciated!
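
    Two ways to shrink this, depending on where the texture comes from. For static images the standard iPhone/iPad answer is PVRTC: compress offline with Apple's texturetool and upload the blob with glCompressedTexImage2D instead of glTexImage2D. A sketch (assumes pvrtcData holds the contents of a 4-bpp texturetool output file; the format constant comes from the GL_IMG_texture_compression_pvrtc extension, and MAX is Foundation's macro):

        // 4 bits per pixel instead of 32: a 1024x1024 RGBA texture drops
        // from 4 MB to 512 KB
        GLsizei dataSize = MAX(maxTextureSize * maxTextureSize / 2, 32);
        glCompressedTexImage2D(GL_TEXTURE_2D, 0,
                               GL_COMPRESSED_RGBA_PVRTC_4BPPV1_IMG,
                               maxTextureSize, maxTextureSize, 0,
                               dataSize, pvrtcData);

    PVRTC can't be generated at runtime, though, so for live screenshots like these the practical option is a 16-bit upload (GL_RGB with GL_UNSIGNED_SHORT_5_6_5, or GL_RGBA with GL_UNSIGNED_SHORT_4_4_4_4), which halves the footprint rather than quartering it.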

    Read the article

  • GLKit Memory Leak in copyWithZone

    - by TommyT39
    Running the Instruments utility against the game I'm writing shows a bunch of memory leaks related to copyWithZone when I cycle through an array and draw some simple cube objects. I'm not sure of the best way to track this down, as I'm new to OpenGL programming. My program uses ARC and is set to build for iOS 5. I am initializing GLKit to use OpenGL ES 2.0 and using the GLKBaseEffect so I don't have to write my own shaders, etc. This shouldn't be rocket science. I'm guessing that I must not be releasing something within the draw function. Could you take a look and see if anything stands out as the problem? One other thing to note is that I'm using 15 different textures, and each cube can be 1 of 15 different ones. I have a texture property on the cube class, and I set it as I create the cubes in their array, but I do load all 15 when my program's viewDidLoad starts. They are small .jpg files of less than 75k each, and each cube uses the same texture all the way around, so it shouldn't be too big an issue. Here is the code for my draw function:

        - (void)draw {
            GLKMatrix4 xRotationMatrix = GLKMatrix4MakeXRotation(rotation.x);
            GLKMatrix4 yRotationMatrix = GLKMatrix4MakeYRotation(rotation.y);
            GLKMatrix4 zRotationMatrix = GLKMatrix4MakeZRotation(rotation.z);
            GLKMatrix4 scaleMatrix = GLKMatrix4MakeScale(scale.x, scale.y, scale.z);
            GLKMatrix4 translateMatrix = GLKMatrix4MakeTranslation(position.x, position.y, position.z);
            GLKMatrix4 modelMatrix = GLKMatrix4Multiply(translateMatrix,
                GLKMatrix4Multiply(scaleMatrix,
                GLKMatrix4Multiply(zRotationMatrix,
                GLKMatrix4Multiply(yRotationMatrix, xRotationMatrix))));
            GLKMatrix4 viewMatrix = GLKMatrix4MakeLookAt(0, 0, 1, 0, 0, -5, 0, 1, 0);

            effect.transform.modelviewMatrix = GLKMatrix4Multiply(viewMatrix, modelMatrix);
            effect.transform.projectionMatrix = GLKMatrix4MakePerspective(0.125*M_TAU, 1.0, 2, 0);
            effect.texture2d0.name = wallTexture.name;
            [effect prepareToDraw];

            glEnable(GL_DEPTH_TEST);
            glEnable(GL_CULL_FACE);

            glEnableVertexAttribArray(GLKVertexAttribPosition);
            glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, 0, triangleVertices);
            glEnableVertexAttribArray(GLKVertexAttribTexCoord0);
            glVertexAttribPointer(GLKVertexAttribTexCoord0, 2, GL_FLOAT, GL_FALSE, 0, textureCoordinates);

            glDrawArrays(GL_TRIANGLES, 0, 18);

            glDisableVertexAttribArray(GLKVertexAttribPosition);
            glDisableVertexAttribArray(GLKVertexAttribTexCoord0);
        }
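
    One side observation on the posted code (unrelated to the leak, and only a sketch of the fix): GLKMatrix4MakePerspective takes nearZ and farZ as its last two arguments, and farZ must be greater than nearZ, so a far plane of 0 with a near plane of 2 produces a degenerate projection matrix:

        // signature: GLKMatrix4MakePerspective(fovyRadians, aspect, nearZ, farZ)
        effect.transform.projectionMatrix =
            GLKMatrix4MakePerspective(0.125 * M_TAU, 1.0, 2.0, 100.0); // e.g. far = 100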

    Read the article

  • How to set an image on an EAGL view

    - by Viral
    Hi, I want to put images from my photo library into the EAGL view for some further processing. Images that are already in our resources folder can be loaded by themselves, but multiple images, or images from the photo library, can't. So does anyone know how to put an image on an EAGL view in OpenGL ES? Regards, Viral
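
    The CGBitmapContext upload shown in the texture-compression question above works for photo-library images as well; a minimal sketch (assuming image is the UIImage handed back by a UIImagePickerController delegate):

        GLuint texture;
        size_t w = CGImageGetWidth(image.CGImage);
        size_t h = CGImageGetHeight(image.CGImage);
        GLubyte *data = (GLubyte *)calloc(w * h * 4, 1);
        CGContextRef ctx = CGBitmapContextCreate(data, w, h, 8, w * 4,
            CGImageGetColorSpace(image.CGImage), kCGImageAlphaPremultipliedLast);
        CGContextDrawImage(ctx, CGRectMake(0, 0, w, h), image.CGImage);
        CGContextRelease(ctx);

        glGenTextures(1, &texture);
        glBindTexture(GL_TEXTURE_2D, texture);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei)w, (GLsizei)h, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, data);
        free(data);

    On OpenGL ES 1.x the texture dimensions generally need to be powers of two, so an arbitrary photo may first need to be drawn into a POT-sized context.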

    Read the article

  • Does a hidden UIViewController consume any resources (iPhone)?

    - by MrDatabase
    My simple iPhone game has two basic "screens":

    - home screen (UIViewController subclass)
    - game screen (UIWindow w/ EAGLLayer where all the OpenGL drawing happens)

    Currently, when the user taps "Play" on the home screen, the UIViewController is just hidden and the game screen is revealed. When the game is over, the home screen UIViewController is unhidden. Does the hidden UIViewController consume any resources when it's hidden?

    Read the article

  • Debugging a basic OpenGL texture fail? (iphone)

    - by Ben
    Hey all, I have a very basic texture map problem in GL on iPhone, and I'm wondering what strategies there are for debugging this kind of thing. (Frankly, just staring at state machine calls and wondering if any of them is wrong or misordered is no way to live -- are there tools for this?) I have a 512x512 PNG file that I'm loading up from disk (not specially packed), creating a CGBitmapContext, then calling CGContextDrawImage to get bytes out of it. (This code is essentially stolen from an Apple sample.) I'm trying to map the texture to a quad, with code that looks essentially like this -- all flat 2D stuff, nothing fancy:

        glEnable(GL_TEXTURE_2D);
        glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);

        GLfloat vertices[8] = {
            viewRect.origin.x,   viewRect.size.height,
            viewRect.origin.x,   viewRect.origin.y,
            viewRect.size.width, viewRect.origin.y,
            viewRect.size.width, viewRect.size.height
        };
        GLfloat texCoords[8] = { 0, 1.0, 0, 0, 1.0, 0, 1.0, 1.0 };

        glBindTexture(GL_TEXTURE_2D, myTextureRef); // This was previously bound to
        glVertexPointer(2, GL_FLOAT, 0, vertices);
        glTexCoordPointer(2, GL_FLOAT, 0, texCoords);
        glDrawArrays(GL_TRIANGLE_FAN, 0, 4);

        glDisableClientState(GL_TEXTURE_COORD_ARRAY);
        glDisable(GL_TEXTURE_2D);

    My supposedly textured area comes out just black. I see no debug output from the CG calls to set up the texture. glGetError reports nothing. If I simplify this code block to just draw the verts but set up a pure color, the quad area lights up exactly as expected. If I clear the whole context immediately beforehand to red, I don't see the red -- which means something is being rendered there, just not the contents of my PNG. What could I be doing wrong? And more importantly, what are the right tools and techniques for debugging this sort of thing? Running into this kind of problem and not being able to "step through it" in a debugger in any meaningful way is a bummer. Thanks!
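
    A couple of cheap checks worth running first (a diagnostic sketch, not from the original thread): verify the texture name is live and that the bitmap context actually produced pixels, since a silently failing CGContextDrawImage also yields a black quad:

        // glIsTexture only returns GL_TRUE for a name that has been bound
        printf("valid texture: %d\n", glIsTexture(myTextureRef));

        // spot-check that the decoded PNG bytes are non-zero
        // (textureData stands for the buffer handed to glTexImage2D)
        printf("first texel: %02x %02x %02x %02x\n",
               textureData[0], textureData[1], textureData[2], textureData[3]);

    As for tools, Instruments ships OpenGL ES instruments for on-device inspection, and checking glGetError after each setup call (rather than once at the end) narrows down which call is being rejected.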

    Read the article

  • Texture repeats even with GL_CLAMP_TO_EDGE set

    - by Lliane
    Hi, I'm trying to put a translucent texture on a face which uses points 1 to 4 (don't mind the numbers) on the following screenshot. Sadly, as you can see, the texture repeats itself in both dimensions. I tried to switch TEXTURE_WRAP_S from REPEAT to CLAMP_TO_EDGE but it doesn't change anything. The texture loading code is here:

        gl.glBindTexture(gl.GL_TEXTURE_2D, mTexture.get(4));
        gl.glActiveTexture(4);
        gl.glTexParameterf(gl.GL_TEXTURE_2D, gl.GL_TEXTURE_MIN_FILTER, gl.GL_LINEAR);
        gl.glTexParameterf(gl.GL_TEXTURE_2D, gl.GL_TEXTURE_MAG_FILTER, gl.GL_LINEAR);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_CLAMP_TO_EDGE);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_CLAMP_TO_EDGE);
        gl.glTexImage2D(gl.GL_TEXTURE_2D, 0, gl.GL_RGBA, shadowbmp.width, shadowbmp.height, 0, gl.GL_RGBA, gl.GL_UNSIGNED_SHORT_4_4_4_4, shadowbmp.buffer);

    The texture coordinates are the following:

        float shadow_bot_text[] = { 0.0f, 0.0f,
                                    0.0f, 1.0f,
                                    1.0f, 0.0f,
                                    1.0f, 1.0f };

    Thanks
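
    One thing that stands out in the posted code (an observation, shown against the underlying C API; textureName stands for mTexture.get(4)): glActiveTexture takes a GL_TEXTUREn enum, not a raw unit index, so gl.glActiveTexture(4) raises GL_INVALID_ENUM and is ignored:

        glActiveTexture(GL_TEXTURE0 + 4);  // not glActiveTexture(4)
        glBindTexture(GL_TEXTURE_2D, textureName);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    It is also worth confirming that the shadow bitmap is power-of-two in both dimensions: OpenGL ES 1.x rejects NPOT textures, and a failed glTexImage2D leaves whatever was previously uploaded in place.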

    Read the article

  • Texturing Issue when Overlaying images onto Camera preview SurfaceView

    - by Dervis Suleyman
    I am in the process of making an augmented reality application, and I have successfully overlaid a 3D cube over a camera SurfaceView. The issue now is that when I add a texture, the cube (with the texture on it) flickers and disappears, leaving the camera preview. Upon further research I discovered that the textured cube was disappearing behind the camera preview. My question is: has anyone else had this issue, and if so, what approach did you take to solve it? Here's another strange issue: when I take a screenshot, the cube is on top of the camera preview.
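
    A hedged suggestion, from general Android experience rather than the linked thread: this symptom is usually surface z-ordering. A GLSurfaceView composited together with a camera-preview SurfaceView needs setZOrderMediaOverlay(true) so its surface sits above the preview, along with a translucent EGL config and holder format so the preview remains visible behind it. That may also explain the screenshot oddity, since screenshots can flatten the surfaces in a different order than the live compositor.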

    Read the article

  • iPhone animated banner : which framework to use

    - by Julien
    Hi folks, I want to create a little frame to display animated ads in my app. These could be simple little animations, "3D" transitions between ads, or a combination of both. I'm not familiar with the graphics frameworks; I've just used Core Graphics, which I think is not optimized for that. I thought of OpenGL, but maybe that's too much and takes too many resources just for this little thing. What do you think?

    Read the article

  • Having an issue with OpenGL ES 1.0 on the HP Slate 7

    - by Roy Coder
    I have an issue with the HP Slate 7 when I try to draw a line in my OpenGL draw method. It works on other devices, but on the HP Slate the green line is not drawn properly. My code is:

        gl.glPushMatrix();
        gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
        gl.glVertexPointer(2, GL10.GL_FLOAT, 0, vertexFloatBuffer);
        gl.glColorMask(true, true, true, true);
        gl.glDepthMask(true);
        gl.glLineWidth(8.0f);
        setColor(gl);
        gl.glDrawArrays(GL10.GL_LINES, 0, fPoints.length / 2);
        gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
        gl.glPopMatrix();

    Can anyone suggest where I'm wrong or what I'm missing?
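
    One device-dependent factor worth ruling out (a hedged guess, sketched against the C API; the Java bindings expose the same query): OpenGL ES implementations are only required to support a line width of 1.0, and widths outside the supported range are silently clamped, so an 8-pixel line can legitimately render differently across GPUs:

        GLfloat range[2];
        glGetFloatv(GL_ALIASED_LINE_WIDTH_RANGE, range);
        // widths outside [range[0], range[1]] are clamped by the driver
        printf("supported line widths: %f .. %f\n", range[0], range[1]);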

    Read the article

  • Normals per index?

    - by WarrenFaith
    I have a pyramid which has 5 vertices and 18 indices. As I want to add normals to each face, I have only found solutions that store one normal per vertex. That means I can't use indices to define my pyramid; I would need 18 vertices (with the same vertex repeated 3 times for the same point in space). There must be a solution to use normals not per vertex but per index. Some code (JavaScript):

        var vertices = [
            -half, -half,  half, // 0 front left
             half, -half,  half, // 1 front right
             half, -half, -half, // 2 back right
            -half, -half, -half, // 3 back left
             0.0, Math.sqrt((size * size) - (2 * (half * half))) - half, 0.0 // 4 top
        ];

        var vertexNormals = [
            // front face
            normaleFront[0], normaleFront[1], normaleFront[2],
            normaleFront[0], normaleFront[1], normaleFront[2],
            normaleFront[0], normaleFront[1], normaleFront[2],
            // back face
            normaleBack[0], normaleBack[1], normaleBack[2],
            normaleBack[0], normaleBack[1], normaleBack[2],
            normaleBack[0], normaleBack[1], normaleBack[2],
            // left face
            normaleLeft[0], normaleLeft[1], normaleLeft[2],
            normaleLeft[0], normaleLeft[1], normaleLeft[2],
            normaleLeft[0], normaleLeft[1], normaleLeft[2],
            // right face
            normaleRight[0], normaleRight[1], normaleRight[2],
            normaleRight[0], normaleRight[1], normaleRight[2],
            normaleRight[0], normaleRight[1], normaleRight[2],
            // bottom face
            0.0, -1.0, 0.0,
            0.0, -1.0, 0.0,
            0.0, -1.0, 0.0,
            0.0, -1.0, 0.0,
            0.0, -1.0, 0.0,
            0.0, -1.0, 0.0,
        ];

        var pyramidVertexIndices = [
            0, 1, 4, // Front face
            2, 3, 4, // Back face
            3, 0, 4, // Left face
            1, 2, 4, // Right face
            0, 1, 2, 2, 3, 0, // Bottom face
        ];
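
    For what it's worth, OpenGL's index buffers index every vertex attribute with the same index, so there is no per-index normal: a position shared by faces with different normals genuinely has to be duplicated. Expanding the 5 positions into 18 vertices, one per index, each carrying its face's normal, is the standard approach for flat-shaded geometry.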

    Read the article

  • Screenshot in Android

    - by ujjawal
    The following is the code I am using to take a screenshot using GLSurfaceView, but I don't know why the onDraw() method in the GLSurfaceView.Renderer class is not being called. Please, if someone can look at the code below and point out what I am doing wrong:

        public class MainActivity extends Activity {
            private GLSurfaceView mGLView;
            int x, y, w, h;
            Display disp;

            /** Called when the activity is first created. */
            @Override
            public void onCreate(Bundle icicle) {
                super.onCreate(icicle);
                // ToDo add your GUI initialization code here
                setContentView(R.layout.main);
                x = 0;
                y = 0;
                disp = getWindowManager().getDefaultDisplay();
                w = disp.getWidth();
                h = disp.getHeight();
                mGLView = new ClearGLSurfaceView(this);
            }

            class ClearGLSurfaceView extends GLSurfaceView {
                ClearRenderer mRenderer;

                public ClearGLSurfaceView(Context context) {
                    super(context);
                    setDebugFlags(DEBUG_CHECK_GL_ERROR | DEBUG_LOG_GL_CALLS);
                    mRenderer = new ClearRenderer();
                    setRenderer(mRenderer);
                }
            }

            class ClearRenderer implements GLSurfaceView.Renderer {
                public void onSurfaceCreated(GL10 gl, EGLConfig config) {
                    // Do nothing special.
                }

                public void onSurfaceChanged(GL10 gl, int w, int h) {
                    //gl.glViewport(0, 0, w, h);
                }

                public void onDrawFrame(GL10 gl) {
                    //gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
                    int b[] = new int[w * (y + h)];
                    int bt[] = new int[w * h];
                    IntBuffer ib = IntBuffer.wrap(b);
                    ib.position(0);
                    gl.glReadPixels(x, 0, w, y + h, GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, ib);
                    // Remember that the OpenGL bitmap is incompatible with the
                    // Android bitmap, so some correction is needed.
                    for (int i = 0, k = 0; i < h; i++, k++) {
                        for (int j = 0; j < w; j++) {
                            int pix = b[i * w + j];
                            int pb = (pix >> 16) & 0xff;
                            int pr = (pix << 16) & 0x00ff0000;
                            int pix1 = (pix & 0xff00ff00) | pr | pb;
                            bt[(h - k - 1) * w + j] = pix1;
                        }
                    }
                    Bitmap bmp = Bitmap.createBitmap(bt, w, h, Bitmap.Config.ARGB_8888);
                    try {
                        File f = new File("/sdcard/testpicture.png");
                        f.createNewFile();
                        FileOutputStream fos = new FileOutputStream(f);
                        bmp.compress(CompressFormat.PNG, 100, fos);
                        try {
                            fos.flush();
                        } catch (IOException e) {
                            e.printStackTrace();
                        }
                        try {
                            fos.close();
                        } catch (IOException e) {
                            e.printStackTrace();
                        }
                    } catch (FileNotFoundException e) {
                        e.printStackTrace();
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                }
            }
        }

    Please, someone help me out. I have just started learning to work on Android.
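
    The likely reason nothing fires (an observation on the posted code, not from the original thread): the GLSurfaceView is constructed but never attached to the window. onCreate() calls setContentView(R.layout.main) and then only assigns mGLView, so the view is never measured, gets no surface, and its renderer callbacks (onSurfaceCreated, onSurfaceChanged, onDrawFrame) never run. Either call setContentView(mGLView) instead, or add the view to the inflated layout. Note also that GLSurfaceView.Renderer has no onDraw() method; the per-frame callback is onDrawFrame().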

    Read the article

  • How to get X,Y,Z rotations of vertices on a sphere at the origin?

    - by Stoff81
    Hey, I have a sphere in my game world and I would like to place a plane at each vertex on this sphere for debugging purposes. The planes should be oriented so that they lie flat against the sphere (perpendicular to the normals). The sphere is located at the origin, so all the vertices are relative to that. If my thinking is correct, I should be able to do this using the positions of the vertices and some simple trigonometry. I have tried a few combinations but have had no joy yet. I would greatly appreciate some help on this. Thanks. Here is my code:

        float xRot = RADIANS_TO_DEGREES(sinf(vertex.x/PLANET_RADIUS));
        float yRot = RADIANS_TO_DEGREES(cosf(vertex.y/PLANET_RADIUS));
        glRotatef(xRot, 1.0, 0, 0);
        glRotatef(yRot, 0, 1.0, 0);
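
    One hedged suggestion (a sketch, assuming the plane's unrotated normal points down +Z; RADIANS_TO_DEGREES is from the posted code): the inverse trig functions are what recover angles from coordinates, not sinf/cosf, and for a sphere at the origin the outward normal is just the normalized vertex position:

        float yaw   = RADIANS_TO_DEGREES(atan2f(vertex.x, vertex.z));       // about Y
        float pitch = RADIANS_TO_DEGREES(-asinf(vertex.y / PLANET_RADIUS)); // about X
        glRotatef(yaw,   0.0f, 1.0f, 0.0f);
        glRotatef(pitch, 1.0f, 0.0f, 0.0f);

    With glRotatef the calls compose so the pitch is applied to the geometry first, then the yaw, which carries a +Z-facing plane onto the vertex's normal direction.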

    Read the article

  • glDrawArrays() slow on iPad?

    - by Nick
    Hey guys, I was wondering how to speed up my iPad application using OpenGL ES 2.0. At the moment we have every drawable object draw itself with a call to glDrawArrays(). Blend mode is on; we really need it. Without disabling blending, how would we improve performance for this app? For instance, if we draw one texture across the whole screen, the app only gets 15 FPS, which is really slow, I think. Are we doing something terribly wrong? Our drawing code (for each drawable) is as follows:

        - (void) draw {
            GLuint textureAvailable = 0;
            if (texture != nil) {
                textureAvailable = 1;
            }

            glActiveTexture(GL_TEXTURE0);
            glBindTexture(GL_TEXTURE_2D, texture.name);

            glVertexAttribPointer(ATTRIB_VERTEX, 2, GL_FLOAT, 0, 0, vertices);
            glEnableVertexAttribArray(ATTRIB_VERTEX);
            glVertexAttribPointer(ATTRIB_COLOR, 4, GL_FLOAT, 1, 0, colorsWithMultipliedAlpha);
            glEnableVertexAttribArray(ATTRIB_COLOR);
            glVertexAttribPointer(ATTRIB_TEXTUREMAP, 2, GL_FLOAT, 1, 0, textureMapping);
            glEnableVertexAttribArray(ATTRIB_TEXTUREMAP);

            // Note that we are NOT using position.z here because that is only
            // used to determine drawing order
            int *jnUniforms = JNOpenGLConstants::getInstance().uniforms;
            glUniform4f(jnUniforms[UNIFORM_TRANSLATE], position.x, position.y, 0.0, 0.0);
            glUniform4f(jnUniforms[UNIFORM_SCALE], scale.x, scale.y, 1.0, 1.0);
            glUniform1f(jnUniforms[UNIFORM_ROTATION], rotation);
            glUniform1i(jnUniforms[UNIFORM_TEXTURE_SAMPLE], 0);
            glUniform2f(jnUniforms[UNIFORM_TEXTURE_REPEAT], textureRepeat.x, textureRepeat.y);
            glUniform1i(jnUniforms[UNIFORM_TEXTURE_AVAILABLE], textureAvailable);

            glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
        }
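
    A hedged note on the 15 FPS figure: a full-screen draw with blending enabled is fill-rate bound, since every covered pixel becomes a read-modify-write against the framebuffer. The usual mitigations, none specific to this code, are drawing genuinely opaque content with blending disabled, sorting so blended objects come last, and batching many quads into one glDrawArrays call instead of paying per-object binds and uniform updates.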

    Read the article

  • Is it better to use GL_FIXED or GL_FLOAT on Android?

    - by Timmmm
    I would have assumed that GL_FIXED was faster, but the iPhone docs actually say to use GL_FLOAT because GL_FIXED has to be converted to GL_FLOAT. Is it the same on Android? I suppose it varies by phone, but what about recent popular ones (Nexus One, Droid/Milestone, etc.)? Bonus points: This appears to be completely undocumented (e.g. search google for GL_FIXED!) but where is the 'point' in GL_FIXED? I.e. how much is (GL_FIXED)1 worth?
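
    On the bonus question, this much is standard: GL_FIXED is 16.16 two's-complement fixed point, so the binary point sits between bits 15 and 16 and (GL_FIXED)1 is worth 1/65536. A conversion sketch:

        // 1.0f maps to 0x10000 in 16.16 fixed point
        GLfixed floatToFixed(GLfloat f) { return (GLfixed)(f * 65536.0f); }
        GLfloat fixedToFloat(GLfixed x) { return (GLfloat)x / 65536.0f; }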

    Read the article

  • iPhone + OpenGL + Touches: FPS drop

    - by Anton
    Hey there, recently I ran into a very strange issue: touching the screen of the iPhone and moving a finger around can eat up to 50% of my FPS. Yeah, I checked my code for possible bottlenecks -- not the issue. The last resort I tried before writing this post was commenting out all the touch-processing code and looking at the FPS then. The results: no touches -- 58-60 FPS. Touching and moving a finger -- 35-40 FPS instantly. The rendering is done in a separate thread, so no main runloop events should collide with it. However, it's very crucial for me (and the game I'm developing) to resolve this issue, because such an FPS drop is really noticeable. Thank you for your help in advance. UPDATE: it seems that setting the rendering thread's priority to a higher value helps a bit...

    Read the article

  • How do you use glFrustum in OpenGL ES 1 on iPhone?

    - by Paul
    So I am using Xcode 3.2.1 and am trying to make an iPhone OpenGL ES 1 project. The default template for an OpenGL project is OK, but I have been trying to split the code up so that not everything is done per frame in the drawView() call. I have a separate setupRC() method that sets the lighting, turns on depth test, turns on culling, and sets the clear color. This is called on the init of the EAGLView, and it works just fine. I took the glViewport() and glFrustum() calls and put them at the end of the resizeFromLayer() method in the ES1Renderer.m file. This gets hit when the app starts and when the app gets resized, as it should. Now the problem is that the frustum's far plane seems to be messed up, in that all my objects get cut/clipped off. I tried adjusting the camera position and angle, and all objects are still clipped. I increased the far plane from 1000.0f to 30000.0f and still get the same result. What is crazy is that if I call both glViewport() and glFrustum() in drawView() every frame, everything looks right: nothing is clipped and it looks like I want it to. From everything I've been reading, the frustum and viewport calls only need to be made when the window gets created or resized, but if I don't call them every frame in my project, it doesn't work. Any ideas? Thanks in advance.
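
    A hedged guess at the cause: glFrustumf multiplies the current matrix, so the one-time setup has to select and reset the projection stack, and nothing on the per-frame path may overwrite it. A sketch of the resize-time code (backingWidth/backingHeight stand for the renderbuffer dimensions in the template):

        glViewport(0, 0, backingWidth, backingHeight);
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glFrustumf(-1.0f, 1.0f, -1.5f, 1.5f, 1.0f, 1000.0f); // example planes
        glMatrixMode(GL_MODELVIEW); // leave modelview active for drawing

    If drawView() calls glLoadIdentity() while GL_PROJECTION happens to still be the active matrix mode, the frustum set in resizeFromLayer() is silently discarded -- which would explain why re-issuing it every frame works.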

    Read the article

  • OpenGL and layouts

    - by Hnefi
    I'm using OpenGL to render the game view in my Android application. The game is turn-based, and I wish to add some buttons to the interface. I'd prefer to use standard Android widgets, structured in an XML-generated layout (or, if I have to, a hardcoded layout), and put the OpenGL view in its own window as part of that layout. So in regards to this, I have 3 questions:

    1. Is such a thing possible? I've made a few half-hearted tries, but have had no luck so far.
    2. Is such a thing advisable? Does it carry a significant performance penalty, for example, over using OpenGL-based homebrew widgetry?
    3. Is it possible to pass particular arguments to instances created in XML layouts? For example, my current OpenGL view has three arguments in its constructor; is it somehow possible for me to invoke that particular constructor with particular parameters when it's part of a layout?
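
    Hedged answers from general Android knowledge rather than the linked thread: (1) is possible -- GLSurfaceView is an ordinary View and can be declared in an XML layout alongside standard widgets; (2) the widgets are composited by the system rather than drawn inside the GL frame, so for a turn-based game the overhead is typically negligible; (3) views inflated from XML are instantiated through the (Context, AttributeSet) constructor, so a three-argument constructor can't be invoked directly -- custom XML attributes, or setters called after inflation (e.g. via findViewById), are the usual workaround.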

    Read the article

  • Why don't I need to bind my vertex buffer object before calling glDrawArrays?

    - by valmo
    I'm a bit confused why this still renders. I thought you needed to bind a vertex buffer object so that glDrawArrays knows which vertex buffer to use. Here is my initialisation code:

        // Create and bind vertex array to store vertex attribute states.
        glGenVertexArraysOES(NUM_VERTEX_ARRAYS, &m_vertexArray);
        glBindVertexArrayOES(m_vertexArray);

        // Create and bind vertex buffer to store vertex data.
        glGenBuffers(NUM_VERTEX_BUFFERS, &m_vertexBuffer);
        glBindBuffer(GL_ARRAY_BUFFER, m_vertexBuffer);
        glBufferData(GL_ARRAY_BUFFER, sizeof(Vertex) * 36, &m_vertices[0], GL_STATIC_DRAW);

        glEnableVertexAttribArray(VertexAttribPosition);
        glVertexAttribPointer(VertexAttribPosition, 3, GL_FLOAT, GL_FALSE, 24, BUFFER_OFFSET(0));
        glEnableVertexAttribArray(VertexAttribNormal);
        glVertexAttribPointer(VertexAttribNormal, 3, GL_FLOAT, GL_FALSE, 24, BUFFER_OFFSET(12));

        glBindBuffer(GL_ARRAY_BUFFER, 0);
        glBindVertexArrayOES(0);

    Here is my render code. I'm confused why glDrawArrays still works when I bind 0 to GL_ARRAY_BUFFER:

        glBindVertexArrayOES(m_vertexArray);
        glBindBuffer(GL_ARRAY_BUFFER, 0);
        glDrawArrays(GL_TRIANGLES, 0, 36);
        glBindVertexArrayOES(0);
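
    The explanation, for what it's worth (standard vertex-array-object behavior, not specific to this thread): each glVertexAttribPointer call records the buffer currently bound to GL_ARRAY_BUFFER into the VAO's per-attribute state. The GL_ARRAY_BUFFER binding itself is not part of VAO state and is not consulted at draw time, so binding 0 before glDrawArrays changes nothing -- the attributes still source their data from m_vertexBuffer, which was bound when the pointers were set up.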

    Read the article
