Search Results

Search found 2515 results on 101 pages for 'opengl es2'.

Page 63/101 | < Previous Page | 59 60 61 62 63 64 65 66 67 68 69 70  | Next Page >

  • What suggestions for a 3d game engine to support a huge terrain?

    - by codist
    There are a lot of 3D game engines around. Does anyone with experience with them have advice on which one would likely be able to handle these (arbitrary) requirements?

    - OpenGL, Mac/PC
    - 1000x1000 km terrain
    - 1000 towns varying in size from 10 to 1000 buildings
    - 128 people in any one location
    - MMO-type networking (no solo play)
    - physics engine including airfoil
    - C++ with optional scripting

    Read the article

  • Generating Bezier Control Points for an object

    - by E.F
    Hello everyone, I'm trying to draw objects using Bezier surfaces with OpenGL's evaluators. I am struggling with defining the control points for my objects. Can anyone please suggest ways to get the control points for an object? Is there some program that I can use to design my object and then import the control points into a file that I can use in my application?
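    For illustration, here is a minimal sketch (plain C with the fixed-function evaluator API, not code from the question) of how a single 4x4 patch of control points is handed to OpenGL; the control-point values below are placeholders:

        #include <GL/gl.h>

        /* A single 4x4 Bezier control net; replace with your own data. */
        static GLfloat ctrlPoints[4][4][3] = {
            {{-1.5f,-1.5f, 4.0f}, {-0.5f,-1.5f, 2.0f}, {0.5f,-1.5f,-1.0f}, {1.5f,-1.5f, 2.0f}},
            {{-1.5f,-0.5f, 1.0f}, {-0.5f,-0.5f, 3.0f}, {0.5f,-0.5f, 0.0f}, {1.5f,-0.5f,-1.0f}},
            {{-1.5f, 0.5f, 4.0f}, {-0.5f, 0.5f, 0.0f}, {0.5f, 0.5f, 3.0f}, {1.5f, 0.5f, 4.0f}},
            {{-1.5f, 1.5f,-2.0f}, {-0.5f, 1.5f,-2.0f}, {0.5f, 1.5f, 0.0f}, {1.5f, 1.5f,-1.0f}}
        };

        void drawPatch(void)
        {
            /* Hand the 4x4 control net to the evaluator. */
            glMap2f(GL_MAP2_VERTEX_3,
                    0.0f, 1.0f, 3, 4,     /* u range, stride, order */
                    0.0f, 1.0f, 12, 4,    /* v range, stride, order */
                    &ctrlPoints[0][0][0]);
            glEnable(GL_MAP2_VERTEX_3);

            /* Let GL tessellate the patch into a 20x20 mesh. */
            glMapGrid2f(20, 0.0f, 1.0f, 20, 0.0f, 1.0f);
            glEvalMesh2(GL_LINE, 0, 20, 0, 20);   /* GL_FILL for a solid surface */
        }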

    Read the article

  • Is there anything wrong with my texture loading method?

    - by José Joel.
    I'm a noob in OpenGL and trying to learn as much as possible. I'm using this method to load my OpenGL textures, loading every .png as RGBA4444. Am I doing anything incorrect?

        - (void)loadTexture:(NSString *)nombre {
            CGImageRef textureImage = [UIImage imageWithContentsOfFile:
                [[NSBundle mainBundle] pathForResource:nombre ofType:nil]].CGImage;
            if (textureImage == nil) {
                NSLog(@"Failed to load texture image");
                return;
            }

            textureWidth  = NextPowerOfTwo(CGImageGetWidth(textureImage));
            textureHeight = NextPowerOfTwo(CGImageGetHeight(textureImage));
            imageSizeX = CGImageGetWidth(textureImage);
            imageSizeY = CGImageGetHeight(textureImage);

            // 4 bytes per pixel: RGBA
            GLubyte *textureData = (GLubyte *)calloc(1, textureWidth * textureHeight * 4);

            CGContextRef textureContext = CGBitmapContextCreate(textureData,
                textureWidth, textureHeight, 8, textureWidth * 4,
                CGImageGetColorSpace(textureImage), kCGImageAlphaPremultipliedLast);
            CGContextDrawImage(textureContext,
                CGRectMake(0.0, 0.0, (float)textureWidth, (float)textureHeight),
                textureImage);

            // Convert "RRRRRRRRGGGGGGGGBBBBBBBBAAAAAAAA" to "RRRRGGGGBBBBAAAA"
            void *tempData = malloc(textureWidth * textureHeight * 2);
            unsigned int   *inPixel32  = (unsigned int *)textureData;
            unsigned short *outPixel16 = (unsigned short *)tempData;
            for (int i = 0; i < textureWidth * textureHeight; ++i, ++inPixel32)
                *outPixel16++ = ((((*inPixel32 >>  0) & 0xFF) >> 4) << 12) | // R
                                ((((*inPixel32 >>  8) & 0xFF) >> 4) <<  8) | // G
                                ((((*inPixel32 >> 16) & 0xFF) >> 4) <<  4) | // B
                                ((((*inPixel32 >> 24) & 0xFF) >> 4) <<  0);  // A
            free(textureData);
            textureData = tempData;
            CGContextRelease(textureContext);

            glGenTextures(1, &textures[0]);
            glBindTexture(GL_TEXTURE_2D, textures[0]);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, textureWidth, textureHeight, 0,
                         GL_RGBA, GL_UNSIGNED_SHORT_4_4_4_4, textureData);
            free(textureData);
            glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        }

    And this is my dealloc method:

        - (void)dealloc {
            glDeleteTextures(1, textures);
            [super dealloc];
        }

    Read the article

  • Most efficient way to draw circles for polygon outlines

    - by user146780
    I'm using OpenGL and was told I should draw circles at each vertex of my outline to get smoothness. I tried this and it works great. The problem is speed: drawing a circle at each vertex crippled my application. I'm not sure how else to fix the anomalies in my outlines other than with circles, but both display lists and vertex arrays were brutally slow. Thanks
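    One common way to cut the per-joint cost (a hedged sketch, not necessarily the fix for this particular bottleneck): tessellate a unit circle once at startup into a small vertex array and reuse that same array at every outline vertex with a translate and scale, so no trigonometry or geometry building happens per frame. The function and variable names below are hypothetical.

        #include <GL/gl.h>
        #include <math.h>

        #define CIRCLE_SEGMENTS 16   /* fewer segments = faster; 16 is usually enough for small joints */

        static GLfloat circleFan[(CIRCLE_SEGMENTS + 2) * 2];  /* centre + ring + closing vertex */

        /* Build the unit-circle triangle fan once, at startup. */
        void buildUnitCircle(void)
        {
            circleFan[0] = 0.0f;
            circleFan[1] = 0.0f;
            for (int i = 0; i <= CIRCLE_SEGMENTS; ++i) {
                float a = (float)i / CIRCLE_SEGMENTS * 2.0f * 3.14159265f;
                circleFan[(i + 1) * 2 + 0] = cosf(a);
                circleFan[(i + 1) * 2 + 1] = sinf(a);
            }
        }

        /* Reuse the prebuilt fan at each outline vertex. */
        void drawJoint(float x, float y, float radius)
        {
            glPushMatrix();
            glTranslatef(x, y, 0.0f);
            glScalef(radius, radius, 1.0f);

            glEnableClientState(GL_VERTEX_ARRAY);
            glVertexPointer(2, GL_FLOAT, 0, circleFan);
            glDrawArrays(GL_TRIANGLE_FAN, 0, CIRCLE_SEGMENTS + 2);
            glDisableClientState(GL_VERTEX_ARRAY);

            glPopMatrix();
        }

    If that is still too many draw calls, the usual next step is to append all joint triangles for a frame into a single vertex array and issue one glDrawArrays for all of them.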

    Read the article

  • How can I make 3ds files smaller?

    - by Shaza
    Hey all, is there any way to make the size of a 3ds file smaller? I tried changing the models' length and width in 3ds Max, but the file size stays the same. I'm using the 3ds files to build an OpenGL scene, so I need to shrink them as much as I can. Any suggestions?

    Read the article

  • Is it safe to make GL calls with multiple threads?

    - by user146780
    I was wondering if it was safe to make GL calls with multiple threads. Basically I'm using a GLUtesselator and was wondering if I could divide the objects to draw into 4 and assign a thread to each one. I'm just wondering if this would cause trouble since the tesselator uses callback functions. Can 2 threads run the same callback at the same time as long as that callback does not access any global variables? Are there also other ways I could optimize OpenGL drawing using multithreading? Thanks

    Read the article

  • scale random 3d model to fit in a viewport

    - by John Qualis
    How can I scale a random 3D model to fit in an OpenGL viewport? I am able to center the model in the middle of the viewport; how do I scale it to fit? The model could be an airplane, a cone, or any other random 3D object. I appreciate any help.
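    Computing the scale is mostly bookkeeping: find the model's bounding box, then scale so the largest dimension matches the extent of the view volume. A minimal fixed-function sketch (the function name and vertex layout are assumptions for illustration):

        #include <GL/gl.h>
        #include <float.h>

        /* Scales the model uniformly around its bounding-box centre so its
           largest dimension equals targetSize, then centres it at the origin.
           Multiplies onto the current MODELVIEW matrix. */
        void fitModelToViewport(const float *verts, int vertexCount, float targetSize)
        {
            float minv[3] = { FLT_MAX,  FLT_MAX,  FLT_MAX};
            float maxv[3] = {-FLT_MAX, -FLT_MAX, -FLT_MAX};

            /* 1. Axis-aligned bounding box of the model. */
            for (int i = 0; i < vertexCount; ++i)
                for (int a = 0; a < 3; ++a) {
                    float v = verts[i * 3 + a];
                    if (v < minv[a]) minv[a] = v;
                    if (v > maxv[a]) maxv[a] = v;
                }

            /* 2. The largest box dimension decides the uniform scale factor. */
            float size = maxv[0] - minv[0];
            if (maxv[1] - minv[1] > size) size = maxv[1] - minv[1];
            if (maxv[2] - minv[2] > size) size = maxv[2] - minv[2];
            float scale = (size > 0.0f) ? targetSize / size : 1.0f;

            /* 3. Scale, then move the box centre to the origin. */
            glScalef(scale, scale, scale);
            glTranslatef(-(minv[0] + maxv[0]) * 0.5f,
                         -(minv[1] + maxv[1]) * 0.5f,
                         -(minv[2] + maxv[2]) * 0.5f);
        }

    With an orthographic view volume from -1 to 1, targetSize would be 2.0; for a perspective view it is whatever extent fits at your chosen camera distance.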

    Read the article

  • iPhone. Particle system performance

    - by e40pud
    I am trying to draw rain and snow as a particle system using Core Graphics. In the simulator rendering works fine, but when I run my app on a real device rendering slows down. So please advise me on approaches to increase particle system drawing performance on the iPhone. Maybe I should use OpenGL for this, or Core Animation?

    Read the article

  • How to generate normal coordinates?

    - by rbchr
    Hi! I'm developing a game using the OpenGL ES 2.0 API. I need to know how to generate the normal coordinates because I need them for lighting. I'm wondering if there is a piece of software or an algorithm that generates normal coordinates. Great thanks!
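    Most modelling tools can export normals directly; when only positions and indices are available they can also be generated: compute each triangle's face normal with a cross product and average it into the triangle's three vertices. A minimal C sketch (the array layouts are assumptions for illustration):

        #include <math.h>
        #include <string.h>

        /* verts: xyz triples, indices: 3 per triangle, normals: output xyz triples. */
        void computeNormals(const float *verts, int vertexCount,
                            const unsigned short *indices, int triCount,
                            float *normals)
        {
            memset(normals, 0, sizeof(float) * 3 * vertexCount);

            for (int t = 0; t < triCount; ++t) {
                const float *p0 = verts + 3 * indices[3 * t + 0];
                const float *p1 = verts + 3 * indices[3 * t + 1];
                const float *p2 = verts + 3 * indices[3 * t + 2];

                /* Face normal = (p1 - p0) x (p2 - p0). */
                float e1[3] = {p1[0]-p0[0], p1[1]-p0[1], p1[2]-p0[2]};
                float e2[3] = {p2[0]-p0[0], p2[1]-p0[1], p2[2]-p0[2]};
                float n[3]  = {e1[1]*e2[2] - e1[2]*e2[1],
                               e1[2]*e2[0] - e1[0]*e2[2],
                               e1[0]*e2[1] - e1[1]*e2[0]};

                /* Accumulate the face normal into each of the triangle's vertices. */
                for (int k = 0; k < 3; ++k) {
                    float *dst = normals + 3 * indices[3 * t + k];
                    dst[0] += n[0]; dst[1] += n[1]; dst[2] += n[2];
                }
            }

            /* Normalise the accumulated sums to get smooth per-vertex normals. */
            for (int v = 0; v < vertexCount; ++v) {
                float *n = normals + 3 * v;
                float len = sqrtf(n[0]*n[0] + n[1]*n[1] + n[2]*n[2]);
                if (len > 0.0f) { n[0] /= len; n[1] /= len; n[2] /= len; }
            }
        }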

    Read the article

  • Why are my 32bit OpenGL libraries pointing to mesa instead of nvidia, and how do I fix it?

    - by Codemonkey
    I have installed Nvidia's drivers on my Ubuntu 13 system, but according to this command (ldconfig -p | grep GL):

        $ ldconfig -p | grep GL
        libQtOpenGL.so.4 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libQtOpenGL.so.4
        libGLU.so.1 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libGLU.so.1
        libGLEWmx.so.1.8 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libGLEWmx.so.1.8
        libGLEW.so.1.8 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libGLEW.so.1.8
        libGLESv2.so.2 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/mesa-egl/libGLESv2.so.2
        libGL.so.1 (libc6,x86-64) => /usr/lib/libGL.so.1
        libGL.so.1 (libc6) => /usr/lib/i386-linux-gnu/mesa/libGL.so.1
        libGL.so (libc6,x86-64) => /usr/lib/libGL.so
        libEGL.so.1 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/mesa-egl/libEGL.so.1

    the 32-bit version of OpenGL is pointing to mesa's libraries instead of nvidia's. This causes my Steam games to refuse to launch with the error:

        Could not find required OpenGL entry point 'glGetError'! Either your video card is unsupported, or your OpenGL driver needs to be updated.

    Why is this the case? When the nvidia installer asked me if I wanted to install "32-bit compatibility libraries" (or something like that) I chose yes. How do I fix this?

    Edit: I just reinstalled the same Nvidia driver, and that apparently removed the 32-bit OpenGL driver completely:

        $ ldconfig -p | grep libGL.so
        libGL.so.1 (libc6,x86-64) => /usr/lib/libGL.so.1
        libGL.so (libc6,x86-64) => /usr/lib/libGL.so

    Now Steam won't start:

        You are missing the following 32-bit libraries, and Steam may not run: libGL.so.1

    Again, I chose YES when the installer asked me if I wanted to install 32-bit libraries. Why are they not installed!?

    Read the article

  • How do color attributes work in a VBO?

    - by Jayesh
    I am coding to OpenGL ES 2.0 (WebGL). I am using VBOs to draw primitives. I have a vertex array, a color array and an array of indices. I have looked at sample code, books and tutorials, but one thing I don't get: if color is defined per vertex, how does it affect the polygonal surfaces adjacent to those vertices? (I am a newbie to OpenGL (ES).)

    I will explain with an example. I have a cube to draw. From what I read in the OpenGL ES book, the color is defined as a vertex attribute. In that case, if I want to draw the 6 faces of the cube with 6 different colors, how should I define the colors? The source of my confusion is: each vertex is common to 3 faces, so how does defining a color per vertex help? (Or should the color be defined per index?) The fact that we need to subdivide these faces into triangles makes it harder for me to understand how this relationship works.

    The same confusion goes for edges. Instead of drawing triangles, let's say I want to draw edges using the LINES primitive, each edge a different color. How am I supposed to define color attributes in that case?

    I have seen a few working examples, specifically this tutorial: http://learningwebgl.com/blog/?p=370 I see how the color array is defined in the above example to draw a cube with 6 different colored faces, but I don't understand why it is defined that way. (Why is each color copied 4 times into unpackedColors, for instance?) Can someone explain how color attributes work in a VBO?

    [The link above seems inaccessible, so I will post the relevant code here]

        cubeVertexPositionBuffer = gl.createBuffer();
        gl.bindBuffer(gl.ARRAY_BUFFER, cubeVertexPositionBuffer);
        vertices = [
            // Front face
            -1.0, -1.0,  1.0,
             1.0, -1.0,  1.0,
             1.0,  1.0,  1.0,
            -1.0,  1.0,  1.0,
            // Back face
            -1.0, -1.0, -1.0,
            -1.0,  1.0, -1.0,
             1.0,  1.0, -1.0,
             1.0, -1.0, -1.0,
            // Top face
            -1.0,  1.0, -1.0,
            -1.0,  1.0,  1.0,
             1.0,  1.0,  1.0,
             1.0,  1.0, -1.0,
            // Bottom face
            -1.0, -1.0, -1.0,
             1.0, -1.0, -1.0,
             1.0, -1.0,  1.0,
            -1.0, -1.0,  1.0,
            // Right face
             1.0, -1.0, -1.0,
             1.0,  1.0, -1.0,
             1.0,  1.0,  1.0,
             1.0, -1.0,  1.0,
            // Left face
            -1.0, -1.0, -1.0,
            -1.0, -1.0,  1.0,
            -1.0,  1.0,  1.0,
            -1.0,  1.0, -1.0,
        ];
        gl.bufferData(gl.ARRAY_BUFFER, new WebGLFloatArray(vertices), gl.STATIC_DRAW);
        cubeVertexPositionBuffer.itemSize = 3;
        cubeVertexPositionBuffer.numItems = 24;

        cubeVertexColorBuffer = gl.createBuffer();
        gl.bindBuffer(gl.ARRAY_BUFFER, cubeVertexColorBuffer);
        var colors = [
            [1.0, 0.0, 0.0, 1.0], // Front face
            [1.0, 1.0, 0.0, 1.0], // Back face
            [0.0, 1.0, 0.0, 1.0], // Top face
            [1.0, 0.5, 0.5, 1.0], // Bottom face
            [1.0, 0.0, 1.0, 1.0], // Right face
            [0.0, 0.0, 1.0, 1.0], // Left face
        ];
        var unpackedColors = [];
        for (var i in colors) {
            var color = colors[i];
            for (var j = 0; j < 4; j++) {
                unpackedColors = unpackedColors.concat(color);
            }
        }
        gl.bufferData(gl.ARRAY_BUFFER, new WebGLFloatArray(unpackedColors), gl.STATIC_DRAW);
        cubeVertexColorBuffer.itemSize = 4;
        cubeVertexColorBuffer.numItems = 24;

        cubeVertexIndexBuffer = gl.createBuffer();
        gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, cubeVertexIndexBuffer);
        var cubeVertexIndices = [
             0,  1,  2,    0,  2,  3,  // Front face
             4,  5,  6,    4,  6,  7,  // Back face
             8,  9, 10,    8, 10, 11,  // Top face
            12, 13, 14,   12, 14, 15,  // Bottom face
            16, 17, 18,   16, 18, 19,  // Right face
            20, 21, 22,   20, 22, 23   // Left face
        ];
        gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, new WebGLUnsignedShortArray(cubeVertexIndices),
                      gl.STATIC_DRAW);
        cubeVertexIndexBuffer.itemSize = 1;
        cubeVertexIndexBuffer.numItems = 36;
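    The gist of what the code above is doing, shown as a plain-C sketch (the struct and the specific colours are hypothetical, not taken from the tutorial): an index always selects a whole vertex record, position and colour together, so a corner that needs three different face colours has to be stored three times.

        /* The cube corner at (1,1,1) appears once per face it belongs to,
           each copy carrying that face's colour. */
        typedef struct { float x, y, z;  float r, g, b, a; } Vertex;

        static const Vertex corner111[] = {
            { 1.0f, 1.0f, 1.0f,   1.0f, 0.0f, 0.0f, 1.0f },  /* copy used by the front face */
            { 1.0f, 1.0f, 1.0f,   0.0f, 1.0f, 0.0f, 1.0f },  /* copy used by the top face   */
            { 1.0f, 1.0f, 1.0f,   1.0f, 0.0f, 1.0f, 1.0f },  /* copy used by the right face */
        };

        /* Because an index can only pick a whole vertex (position AND colour),
           a cube with six distinct face colours needs 6 faces x 4 corners = 24
           vertex records, which is why each colour is repeated 4 times in the
           unpackedColors array above. The same reasoning applies to LINES:
           each differently coloured edge gets its own pair of vertices. */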

    Read the article

  • Problem when trying to use simple Shaders + VBOs

    - by Mr.Gando
    Hello, I'm trying to convert the following function to a VBO-based function for learning purposes; it displays a static texture on screen. I'm using OpenGL ES 2.0 with shaders on the iPhone (should be almost the same as regular OpenGL in this case). This is what I got working:

        // Works!
        - (void)drawAtPoint:(CGPoint)point depth:(CGFloat)depth
        {
            GLfloat coordinates[] = {
                0, 1,
                1, 1,
                0, 0,
                1, 0
            };

            GLfloat width  = (GLfloat)_width  * _maxS,
                    height = (GLfloat)_height * _maxT;

            GLfloat vertices[] = {
                -width / 2 + point.x, -height / 2 + point.y,
                 width / 2 + point.x, -height / 2 + point.y,
                -width / 2 + point.x,  height / 2 + point.y,
                 width / 2 + point.x,  height / 2 + point.y,
            };

            glBindTexture(GL_TEXTURE_2D, _name);

            // ATTRIB_POSITION and ATTRIB_TEXCOORD are handles for the shader attributes
            glVertexAttribPointer(ATTRIB_POSITION, 2, GL_FLOAT, GL_FALSE, 0, vertices);
            glEnableVertexAttribArray(ATTRIB_POSITION);
            glVertexAttribPointer(ATTRIB_TEXCOORD, 2, GL_FLOAT, GL_FALSE, 0, coordinates);
            glEnableVertexAttribArray(ATTRIB_TEXCOORD);

            glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
        }

    I tried this to convert it to a VBO, however I don't see anything displayed on screen with this version:

        // Doesn't display anything
        - (void)drawAtPoint:(CGPoint)point depth:(CGFloat)depth
        {
            GLfloat width  = (GLfloat)_width  * _maxS,
                    height = (GLfloat)_height * _maxT;

            GLfloat position[] = {
                -width / 2 + point.x, -height / 2 + point.y,
                 width / 2 + point.x, -height / 2 + point.y,
                -width / 2 + point.x,  height / 2 + point.y,
                 width / 2 + point.x,  height / 2 + point.y,
            }; // Texture on-screen position (each vertex is x,y in on-screen coords)

            GLfloat coordinates[] = { 0, 1, 1, 1, 0, 0, 1, 0 }; // Texture coords from 0 to 1

            glBindVertexArrayOES(vao);
            glGenVertexArraysOES(1, &vao);
            glGenBuffers(2, vbo);

            // Buffer 1
            glBindBuffer(GL_ARRAY_BUFFER, vbo[0]);
            glBufferData(GL_ARRAY_BUFFER, 8 * sizeof(GLfloat), position, GL_STATIC_DRAW);
            glEnableVertexAttribArray(ATTRIB_POSITION);
            glVertexAttribPointer(ATTRIB_POSITION, 2, GL_FLOAT, GL_FALSE, 0, position);

            // Buffer 2
            glBindBuffer(GL_ARRAY_BUFFER, vbo[1]);
            glBufferData(GL_ARRAY_BUFFER, 8 * sizeof(GLfloat), coordinates, GL_DYNAMIC_DRAW);
            glEnableVertexAttribArray(ATTRIB_TEXCOORD);
            glVertexAttribPointer(ATTRIB_TEXCOORD, 2, GL_FLOAT, GL_FALSE, 0, coordinates);

            // Draw
            glBindVertexArrayOES(vao);
            glBindTexture(GL_TEXTURE_2D, _name);
            glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
        }

    In both cases I'm using this simple vertex shader:

        // Vertex Shader
        attribute vec2 position;  // Bound to ATTRIB_POSITION
        attribute vec4 color;
        attribute vec2 texcoord;  // Bound to ATTRIB_TEXCOORD

        varying vec2 texcoordVarying;

        uniform mat4 mvp;

        void main()
        {
            // You CAN'T use transpose in glUniformMatrix4fv, so... here it goes.
            gl_Position = mvp * vec4(position.x, position.y, 0.0, 1.0);
            texcoordVarying = texcoord;
        }

    The gl_Position is the product mvp * vec4 because I'm simulating glOrthof in 2D with that mvp. And this fragment shader:

        // Fragment Shader
        uniform sampler2D sampler;

        varying mediump vec2 texcoordVarying;

        void main()
        {
            gl_FragColor = texture2D(sampler, texcoordVarying);
        }

    I really need help with this. Maybe my shaders are wrong for the second case? Thanks in advance.
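    For reference, the usual order of VBO/VAO setup calls looks like the following sketch (generic GL ES 2.0 C with hypothetical names, not a claimed fix for the code above): the VAO is generated before it is bound, the data is uploaded once rather than every frame, and the attribute pointer takes a byte offset into the currently bound buffer rather than the client-side array address.

        #include <OpenGLES/ES2/gl.h>
        #include <OpenGLES/ES2/glext.h>

        /* Called once, not every frame. */
        void setupQuadVBO(GLuint attribPosition, const GLfloat *positions, GLsizeiptr byteSize,
                          GLuint *outVao, GLuint *outVbo)
        {
            glGenVertexArraysOES(1, outVao);    /* generate first...  */
            glBindVertexArrayOES(*outVao);      /* ...then bind       */

            glGenBuffers(1, outVbo);
            glBindBuffer(GL_ARRAY_BUFFER, *outVbo);
            glBufferData(GL_ARRAY_BUFFER, byteSize, positions, GL_STATIC_DRAW);

            glEnableVertexAttribArray(attribPosition);
            /* Last argument is an offset into the bound VBO, not a pointer. */
            glVertexAttribPointer(attribPosition, 2, GL_FLOAT, GL_FALSE, 0, (const GLvoid *)0);

            glBindVertexArrayOES(0);
        }

        /* Per frame: just bind the VAO and draw. */
        void drawQuad(GLuint vao)
        {
            glBindVertexArrayOES(vao);
            glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
        }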

    Read the article

  • Can't get my object to point at the mouse.

    - by melignus
    I'm using a combination of SDL and OpenGL in a sort of crash course project to teach myself how this all works. I'm really only interested in OpenGL as a way to use acceleration in 2D games, so I just need this to work in a 2D plane. I have been having a lot of problems today with my current issue: I would like an object to point towards the mouse while the mouse button is clicked, and then of course stay pointing in that direction after the mouse is lifted.

        void Square::handle_input()
        {
            // If a key was pressed
            if (event.type == SDL_KEYDOWN) {
                // Adjust the velocity
                switch (event.key.keysym.sym) {
                    case SDLK_UP:    upUp    = false; yVel = -1; break;
                    case SDLK_DOWN:  downUp  = false; yVel =  1; break;
                    case SDLK_LEFT:  leftUp  = false; xVel = -1; break;
                    case SDLK_RIGHT: rightUp = false; xVel =  1; break;
                    case SDLK_w:     wUp     = false; sAng =  1; break;
                    case SDLK_s:     sUp     = false; sAng = -1; break;
                }
            }
            // If a key was released
            else if (event.type == SDL_KEYUP) {
                // Adjust the velocity
                switch (event.key.keysym.sym) {
                    case SDLK_UP:    upUp    = true; yVel = 0; break;
                    case SDLK_DOWN:  downUp  = true; yVel = 0; break;
                    case SDLK_LEFT:  leftUp  = true; xVel = 0; break;
                    case SDLK_RIGHT: rightUp = true; xVel = 0; break;
                    case SDLK_w:     wUp     = true; sAng = 0; break;
                    case SDLK_s:     sUp     = true; sAng = 0; break;
                }
            }
            // If a mouse button was pressed
            if (event.type == SDL_MOUSEBUTTONDOWN) {
                switch (event.type) {
                    case SDL_MOUSEBUTTONDOWN:
                        mouseUp = false;
                        mousex == event.button.x;
                        mousey == event.button.y;
                        break;
                    case SDL_MOUSEBUTTONUP:
                        mouseUp = true;
                        break;
                }
            }
        }

    And then this is called at the end of my Object::Move call, which also handles x and y translation:

        if (!mouseUp) {
            xVect = mousex - x;
            yVect = mousey - y;
            radAng = atan2(mousey - y, mousex - x);
            sAng = radAng * 180 / 3.1415926l;
        }

    Right now when I click, the object turns and faces down to the bottom left but then no longer changes direction. I'd really appreciate any help I could get here. I'm guessing there might be an issue here with state versus polled events, but from all the tutorials that I've been through I was pretty sure I had fixed that. I've just hit a wall and I need some advice!
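    For reference, a small sketch (a hypothetical free function, SDL 1.2-style API as in the code above) of reading the click position out of the event and turning it into a facing angle; note that in C++ a single '=' assigns while '==' only compares, and that screen y grows downward, so the angle's sign may need flipping depending on how the rotation is applied.

        #include <SDL/SDL.h>
        #include <math.h>

        void handleMouse(const SDL_Event *event, float objX, float objY,
                         int *mouseDown, float *angleDeg)
        {
            if (event->type == SDL_MOUSEBUTTONDOWN) {
                *mouseDown = 1;
                float dx = (float)event->button.x - objX;   /* assignment, not a comparison */
                float dy = (float)event->button.y - objY;
                *angleDeg = atan2f(dy, dx) * 180.0f / 3.14159265f;
            } else if (event->type == SDL_MOUSEBUTTONUP) {
                *mouseDown = 0;                             /* keep the last angle */
            }
        }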

    Read the article

  • Android GLSurfaceView glTexImage2D glDrawTexiOES

    - by Cinar
    I'm trying to render a 640x480 RGB565 image using OpenGL ES on Android, using GLSurfaceView and native C code. Initially I had a 0x0501 error with glTexImage2D, which I was able to resolve by changing the image dimensions. But now, in the "drawFrame" call, when I do glDrawTexiOES to render the texture, I'm getting the following error in the logs:

        drawtex.c:89: DrawTexture: No textures enabled

    I'm already doing glEnable(GL_TEXTURE_2D); is there anything else I should do? Is there a complete example showing GLSurfaceView with native code using textures? Thanks in advance!
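    For reference, a minimal sketch of the usual glDrawTexiOES call sequence (GLES 1.x C, hypothetical variable names, not a claimed diagnosis of the log message above): texturing must be enabled on the context that draws, and the GL_OES_draw_texture extension also requires a crop rectangle to be set on the texture, since the default crop of (0,0,0,0) draws nothing.

        #include <GLES/gl.h>
        #include <GLES/glext.h>

        void drawFullTexture(GLuint textureId, GLint texWidth, GLint texHeight,
                             GLint screenX, GLint screenY)
        {
            glEnable(GL_TEXTURE_2D);
            glBindTexture(GL_TEXTURE_2D, textureId);

            /* Crop rect is in texel coordinates: x, y, width, height.
               A negative height flips the image vertically. */
            GLint crop[4] = { 0, 0, texWidth, texHeight };
            glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_CROP_RECT_OES, crop);

            /* x, y, z in window coordinates; width/height in pixels. */
            glDrawTexiOES(screenX, screenY, 0, texWidth, texHeight);
        }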

    Read the article

  • calling Qt's QGraphicsView::setViewport with a custom QGLWidget

    - by mos
    I've derived from QGLWidget before, like so:

        class MyGLWidget : public QGLWidget {
        public:
            // stuff...
            virtual void initializeGL() {
                /* my custom OpenGL initialization routine */
            }
            // more stuff...
        };

    However, I find that if I try to initialize a QGraphicsView with my custom QGLWidget as the viewport, initializeGL doesn't get called (setting a breakpoint within the Qt library, neither does QGLWidget::initializeGL() when a plain QGLWidget is used):

        // initializeGL, resizeGL, paintGL not called
        ui.graphicsView->setViewport(new MyGLWidget(QGLFormat(QGL::DoubleBuffer)));

        // initializeGL, resizeGL, paintGL *still* not called
        ui.graphicsView->setViewport(new QGLWidget(QGLFormat(QGL::DoubleBuffer)));

    Where is the correct location to place the code that currently resides in MyGLWidget::initializeGL()?

    Read the article

  • Counting texels using a fragment shader

    - by Brett
    Hi, I have two textures generated using a fragment shader. I want to be able to count the number of texels in each texture that are above some colour intensity. My question is how can this be done? My initial thought is to count these texels using the fragment shader before generating the texture. However, this would require some sort of global counter. I can't use occlusion queries because the textures are created from other textures. I'm using OpenGL 2.1. Any ideas appreciated. Thanks
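    One straightforward (if not purely GPU-side) approach under OpenGL 2.1 is to read the rendered texture back and count on the CPU; a hedged sketch, assuming the texture is RGBA8 and attached to the currently bound framebuffer:

        #include <GL/gl.h>
        #include <stdlib.h>

        /* Counts pixels whose red channel exceeds `threshold` (0-255) in the
           currently bound framebuffer. Reading back stalls the pipeline, so
           this is best done rarely or on a downsampled copy. */
        long countBrightTexels(int width, int height, unsigned char threshold)
        {
            unsigned char *pixels = (unsigned char *)malloc((size_t)width * height * 4);
            long count = 0;

            glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

            for (long i = 0; i < (long)width * height; ++i)
                if (pixels[i * 4] > threshold)   /* use [i*4 + 1] / [i*4 + 2] for G / B */
                    ++count;

            free(pixels);
            return count;
        }

    A GPU-only alternative at this hardware level is a shader-based reduction: repeatedly render the texture into a half-sized target whose fragment shader sums (or thresholds) the four source texels, until a 1x1 result holds the count.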

    Read the article

  • How do I perform an HSL transform on a texture?

    - by Mason Wheeler
    If I have an OpenGL texture, and I need to perform HSL modifications on it before rendering the texture, from what I've heard I need a shader. Problem is, I know nothing about shaders. Does anyone know where I would need to look? I want to write a function where I can pass in a texture and three values (a hue shift in degrees, and saturation and lightness multipliers between 0 and 2) and then have it call a shader that will apply these transformations to the texture before it renders. The interface would look something like this:

        procedure HSLTransform(texture: GLuint; hShift: integer; sMult, lMult: GLfloat);

    I have no idea what's supposed to go inside the routine, though. I understand the basic math involved in HSL/RGB conversions, but I don't know how to write a shader or how to apply it. Can someone point me in the right direction? Delphi examples preferred, but I can also read C if I have to.
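    A hedged sketch of what such a fragment shader can look like, shown in C since the question allows it (the GLSL is embedded as a C string; the conversion is the standard HSL round trip, and all names are only illustrative). Because only a fragment shader is supplied, the fixed-function vertex pipeline still handles positions and texture coordinates; compile and link the source with the usual glCreateShader/glCompileShader/glAttachShader/glLinkProgram calls, then bind the texture and set the three uniforms before drawing the textured quad:

        #include <GL/glew.h>

        static const char *hslFragSrc =
            "uniform sampler2D tex;\n"
            "uniform float hueShift;   /* degrees */\n"
            "uniform float satMult;    /* 0..2 */\n"
            "uniform float lightMult;  /* 0..2 */\n"
            "float hue2rgb(float p, float q, float t) {\n"
            "    if (t < 0.0) t += 1.0;\n"
            "    if (t > 1.0) t -= 1.0;\n"
            "    if (t < 1.0/6.0) return p + (q - p) * 6.0 * t;\n"
            "    if (t < 1.0/2.0) return q;\n"
            "    if (t < 2.0/3.0) return p + (q - p) * (2.0/3.0 - t) * 6.0;\n"
            "    return p;\n"
            "}\n"
            "vec3 rgb2hsl(vec3 c) {\n"
            "    float maxc = max(max(c.r, c.g), c.b);\n"
            "    float minc = min(min(c.r, c.g), c.b);\n"
            "    float l = (maxc + minc) * 0.5;\n"
            "    float h = 0.0, s = 0.0, d = maxc - minc;\n"
            "    if (d > 0.0) {\n"
            "        s = (l < 0.5) ? d / (maxc + minc) : d / (2.0 - maxc - minc);\n"
            "        if (maxc == c.r)      h = (c.g - c.b) / d + ((c.g < c.b) ? 6.0 : 0.0);\n"
            "        else if (maxc == c.g) h = (c.b - c.r) / d + 2.0;\n"
            "        else                  h = (c.r - c.g) / d + 4.0;\n"
            "        h /= 6.0;\n"
            "    }\n"
            "    return vec3(h, s, l);\n"
            "}\n"
            "vec3 hsl2rgb(vec3 hsl) {\n"
            "    if (hsl.y == 0.0) return vec3(hsl.z);\n"
            "    float q = (hsl.z < 0.5) ? hsl.z * (1.0 + hsl.y)\n"
            "                            : hsl.z + hsl.y - hsl.z * hsl.y;\n"
            "    float p = 2.0 * hsl.z - q;\n"
            "    return vec3(hue2rgb(p, q, hsl.x + 1.0/3.0),\n"
            "                hue2rgb(p, q, hsl.x),\n"
            "                hue2rgb(p, q, hsl.x - 1.0/3.0));\n"
            "}\n"
            "void main() {\n"
            "    vec4 texel = texture2D(tex, gl_TexCoord[0].st);\n"
            "    vec3 hsl = rgb2hsl(texel.rgb);\n"
            "    hsl.x = fract(hsl.x + hueShift / 360.0);\n"
            "    hsl.y = clamp(hsl.y * satMult, 0.0, 1.0);\n"
            "    hsl.z = clamp(hsl.z * lightMult, 0.0, 1.0);\n"
            "    gl_FragColor = vec4(hsl2rgb(hsl), texel.a);\n"
            "}\n";

        /* Assumes `prog` was built from hslFragSrc with the usual compile/link calls. */
        void useHSLProgram(GLuint prog, GLuint texture, float hShift, float sMult, float lMult)
        {
            glUseProgram(prog);
            glActiveTexture(GL_TEXTURE0);
            glBindTexture(GL_TEXTURE_2D, texture);
            glUniform1i(glGetUniformLocation(prog, "tex"), 0);
            glUniform1f(glGetUniformLocation(prog, "hueShift"), hShift);
            glUniform1f(glGetUniformLocation(prog, "satMult"), sMult);
            glUniform1f(glGetUniformLocation(prog, "lightMult"), lMult);
            /* ...then draw the textured quad; glUseProgram(0) restores fixed function. */
        }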

    Read the article

  • Display list and transformation

    - by Gary
    Greetings! I have this question. Whenever I enter a transformation (glTranslate, glRotate, glScale) within a display list, the transformation remains as a command within the display list, and every time the display list is rendered it recalculates everything over and over again. Is there a way I can apply an OpenGL transformation and have the transformed vertex coordinates stored permanently in a display list, instead of the transformation plus the initial coordinates? Hope my question makes sense. Thank you in advance. Gary
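    One way to get that effect (a hedged sketch, not the only option): apply the matrix to the vertices yourself on the CPU and compile only the already-transformed coordinates into the list, so the list replays no transform commands. The function name and vertex layout below are assumptions.

        #include <GL/gl.h>

        /* "Bakes" a transform: m is a column-major 4x4 matrix (OpenGL layout),
           verts holds xyz triples for `count` vertices forming triangles. */
        void compileBakedList(GLuint list, const float m[16],
                              const float *verts, int count)
        {
            glNewList(list, GL_COMPILE);
            glBegin(GL_TRIANGLES);
            for (int i = 0; i < count; ++i) {
                const float *v = verts + 3 * i;
                float x = m[0]*v[0] + m[4]*v[1] + m[8]*v[2]  + m[12];
                float y = m[1]*v[0] + m[5]*v[1] + m[9]*v[2]  + m[13];
                float z = m[2]*v[0] + m[6]*v[1] + m[10]*v[2] + m[14];
                glVertex3f(x, y, z);   /* only the transformed coordinates are recorded */
            }
            glEnd();
            glEndList();
        }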

    Read the article

  • Vertex Buffer Object not drawing in SDL window

    - by intregus
    I'm just using the OpenGL SDL template with Xcode, and everything runs fine. I removed the Atlantis code and changed the main extension to .mm, then added some testing code to drawGL. Drawing a simple triangle (using immediate mode) at this point inside drawGL gives me a white triangle, but when I add the code to draw using a vertex buffer object, I just get a black window. Here is my VBO drawing code:

        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // Clear The Screen And The Depth Buffer
        glLoadIdentity();

        GLuint buffer;
        float vertices[] = {
             0.0f,  1.0f, 0.0f,
            -1.0f, -1.0f, 0.0f,
             1.0f, -1.0f, 0.0f
        };

        // VBO doesn't work :(
        glGenBuffers(1, &buffer);
        glBindBuffer(GL_ARRAY_BUFFER, buffer);
        glBufferData(GL_ARRAY_BUFFER, sizeof(float) * 9, vertices, GL_STATIC_DRAW);

        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(3, GL_FLOAT, 0, vertices);
        glDrawArrays(GL_TRIANGLES, 0, 3);
        glDisableClientState(GL_VERTEX_ARRAY);
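    For reference, the usual fixed-function VBO pattern looks like the following sketch (generic C, hypothetical function names, not a claimed fix for the code above): the buffer is created once rather than every frame, and once a VBO is bound the last argument of glVertexPointer is a byte offset into that buffer, not the client array's address.

        #include <OpenGL/gl.h>   /* macOS; <GL/gl.h> elsewhere */

        static GLuint triangleVBO = 0;

        /* Create the buffer once, e.g. during initialization. */
        void createTriangleVBO(void)
        {
            const float vertices[] = {
                 0.0f,  1.0f, 0.0f,
                -1.0f, -1.0f, 0.0f,
                 1.0f, -1.0f, 0.0f
            };
            glGenBuffers(1, &triangleVBO);
            glBindBuffer(GL_ARRAY_BUFFER, triangleVBO);
            glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
            glBindBuffer(GL_ARRAY_BUFFER, 0);
        }

        /* Per frame: bind, point at offset 0, draw. */
        void drawTriangleVBO(void)
        {
            glBindBuffer(GL_ARRAY_BUFFER, triangleVBO);
            glEnableClientState(GL_VERTEX_ARRAY);
            glVertexPointer(3, GL_FLOAT, 0, (const GLvoid *)0);   /* offset, not a pointer */
            glDrawArrays(GL_TRIANGLES, 0, 3);
            glDisableClientState(GL_VERTEX_ARRAY);
            glBindBuffer(GL_ARRAY_BUFFER, 0);
        }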

    Read the article

  • Retrieving FBO data in GLSL

    - by Tom Savage
    I'm trying to get MRT working in OpenGL to try out deferred rendering. Here's the situation as I understand it:

    - Create 3 render buffers (for example): two RGBA8 and one Depth32.
    - Create an FBO.
    - Attach the render buffers to the FBO: ColorAttachment0/1 for the color buffers, DepthAttachment for the depth buffer.
    - Bind the FBO.
    - Draw geometry, sending data to the different attachments using gl_FragData[] in the frag shader.

    At this point I would want to take the data in another pass using GLSL. How can I (a) retrieve data from the framebuffer color attachments, and (b) get data from the depth component?
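    For illustration, a sketch (not the poster's code) of an MRT setup whose results can be sampled in a later pass: attaching textures instead of renderbuffers makes the color and depth data directly available to a second-pass shader through ordinary samplers. Entry points are assumed to be the ARB_framebuffer_object/GL 3.0 style names, loaded here via GLEW; all variable names are hypothetical.

        #include <GL/glew.h>

        GLuint gbufferFBO, colorTex[2], depthTex;

        void createGBuffer(int w, int h)
        {
            glGenTextures(2, colorTex);
            for (int i = 0; i < 2; ++i) {
                glBindTexture(GL_TEXTURE_2D, colorTex[i]);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
                glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
            }

            glGenTextures(1, &depthTex);
            glBindTexture(GL_TEXTURE_2D, depthTex);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, w, h, 0,
                         GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);

            glGenFramebuffers(1, &gbufferFBO);
            glBindFramebuffer(GL_FRAMEBUFFER, gbufferFBO);
            glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex[0], 0);
            glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, colorTex[1], 0);
            glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,  GL_TEXTURE_2D, depthTex,    0);

            /* gl_FragData[0] goes to attachment 0, gl_FragData[1] to attachment 1. */
            GLenum bufs[2] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
            glDrawBuffers(2, bufs);

            if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
                /* handle an incomplete framebuffer here */
            }
            glBindFramebuffer(GL_FRAMEBUFFER, 0);
        }

        /* Second pass: bind the textures and sample them like any others. */
        void bindGBufferForReading(void)
        {
            glActiveTexture(GL_TEXTURE0); glBindTexture(GL_TEXTURE_2D, colorTex[0]);
            glActiveTexture(GL_TEXTURE1); glBindTexture(GL_TEXTURE_2D, colorTex[1]);
            glActiveTexture(GL_TEXTURE2); glBindTexture(GL_TEXTURE_2D, depthTex);
        }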

    Read the article

  • Projecting a 3D point to 2D screen coordinate OpenTK

    - by sinsro
    Using MonoTouch and OpenTK I am trying to get the screen coordinate of one 3D point. I have my world-view-projection matrix set up, and OpenGL makes sense of it and projects my 3D model perfectly, but how do I use the same matrix to project just one point from 3D to 2D? I thought I could simply use:

        Vector3.Transform(ref input3Dpos, ref matWorldViewProjection, out projected2Dpos);

    and then have the projected screen coordinate in projected2Dpos. But the resulting Vector4 does not seem to represent the proper projected screen coordinate, and I do not know how to calculate it from there on.
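    Transforming by the world-view-projection matrix only gets as far as clip space; to reach screen coordinates there is still the perspective divide by w and the viewport mapping. A sketch of the full chain in plain C (the function name and the assumption of a top-left screen origin are mine; the same math applies to OpenTK's Vector4/Matrix4 types):

        #include <stdbool.h>

        /* mvp is column-major, as OpenGL expects; p is the 3D point. */
        bool projectToScreen(const float mvp[16], const float p[3],
                             float viewportW, float viewportH,
                             float *screenX, float *screenY)
        {
            /* clip = MVP * (x, y, z, 1) */
            float clip[4];
            for (int r = 0; r < 4; ++r)
                clip[r] = mvp[0 + r] * p[0] + mvp[4 + r] * p[1] +
                          mvp[8 + r] * p[2] + mvp[12 + r];

            if (clip[3] == 0.0f) return false;     /* degenerate / at the camera plane */

            /* perspective divide -> normalised device coordinates in [-1, 1] */
            float ndcX = clip[0] / clip[3];
            float ndcY = clip[1] / clip[3];

            /* viewport mapping; drop the (1.0f - ...) if your 2D origin is bottom-left */
            *screenX = (ndcX * 0.5f + 0.5f) * viewportW;
            *screenY = (1.0f - (ndcY * 0.5f + 0.5f)) * viewportH;
            return true;
        }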

    Read the article

  • OpenGLES mix and match blending options complicated question

    - by DevDevDev
    So I have a background line drawing, black and white. I want to be able to draw over this, keeping the black lines in there and drawing only where it is white. I can do this by using

        glBlendFunc(GL_DST_COLOR, GL_ONE_MINUS_SRC_ALPHA);

    and the black stays black and white gets whatever color I want. However, in the white areas I want to be able to draw some color, pick another color and then blend. This doesn't work because the blend function is messed up. So what I was thinking is having a line-drawing framebuffer and a user-drawing framebuffer. The user draws into the user-drawing framebuffer with a different blending, and then I switch blending options and draw into the line-drawing framebuffer. But I don't know enough OpenGL to say whether this will work or is a stupid idea. Thanks a lot

    Read the article

< Previous Page | 59 60 61 62 63 64 65 66 67 68 69 70  | Next Page >