Search Results

Search found 3627 results on 146 pages for 'opengl es 1 1'.

  • OpenGL ES and texcoord

    - by viraptor
    Hi, I've got some code which I would like to translate into OpenGL ES. I'm not experienced with it, however, so here goes. The original code does a loop like this: glBegin(GL_TRIANGLES); for(i=0; i<num_triangles; i++) { glNormal(...); glTexCoord2f(...); glVertex3fv(...); glTexCoord2f(...); glVertex3fv(...); glTexCoord2f(...); glVertex3fv(...); } glEnd(); So that's OK - I can change the vertex handling for each triangle in the loop into the standard: glEnableClientState(GL_VERTEX_ARRAY); glVertexPointer(3, GL_SOMETHING, 0, verts); glDrawArrays(GL_TRIANGLES, 0, 3); But how do I add the texcoord setting into this example?
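
    In GL ES 1.1 the texture coordinates ride along as a second client-state array, enabled and pointed at just like the vertex array. A minimal sketch (the array names are illustrative):

        glEnableClientState(GL_VERTEX_ARRAY);
        glEnableClientState(GL_NORMAL_ARRAY);
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);
        glVertexPointer(3, GL_FLOAT, 0, verts);        /* 3 floats per vertex */
        glNormalPointer(GL_FLOAT, 0, normals);         /* always 3 per vertex */
        glTexCoordPointer(2, GL_FLOAT, 0, texcoords);  /* 2 floats per vertex */
        glDrawArrays(GL_TRIANGLES, 0, num_triangles * 3);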

  • Textures in OpenGL ES 2 not working properly

    - by Adl
    Hi! I'm working with OpenGL ES 2 on iPhone and right now I am trying to get my textures working on my objects. I'm using .obj files and all the data in them is correct. I have written a parser myself to retrieve all the data, and I convert it to static arrays in C. I discard the material properties for now, only getting the image path from the .mtl files manually. I have an object with 336 triangles, making this non-trivial to observe, with corresponding vertices, vertex faces and texture coordinates (u,v). Passing all data into the shaders, the resulting image is this: http://img530.imageshack.us/img530/9637/pic1io.png http://img404.imageshack.us/img404/7358/pic2pg.png But it should look like this (displaying it in an object viewer; please ignore the material properties): http://img16.imageshack.us/img16/1401/pic3cq.png Using this image as a texture: http://img217.imageshack.us/img217/1300/shirtdiffuse.png I'm thinking it might have to do with texture coordinate faces? They are defined in my .obj file, and I'm not using them at all. In books and tutorials I have not found anything concerning this. Regards, Niclas
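
    A guess worth checking, sketched below in C: OBJ "f v/vt" faces keep separate index lists for positions and texture coordinates, while glDrawArrays reads parallel per-vertex arrays, so dropping the texcoord indices scrambles the (u,v) mapping. All names here are illustrative:

        /* Expand OBJ face records into parallel arrays GL ES can draw.
           face_pos_idx / face_uv_idx come from the "f v/vt v/vt v/vt"
           lines; obj_pos and obj_uv are the raw v and vt records. */
        for (int f = 0; f < num_faces; ++f) {
            for (int c = 0; c < 3; ++c) {
                int v = face_pos_idx[f][c];   /* position index (0-based) */
                int t = face_uv_idx[f][c];    /* texcoord index (0-based) */
                int out = f * 3 + c;
                memcpy(&positions[out * 3], &obj_pos[v * 3], 3 * sizeof(float));
                memcpy(&texcoords[out * 2], &obj_uv[t * 2], 2 * sizeof(float));
            }
        }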

  • Set Renderbuffer Width and Height (OpenGL ES)

    - by Josh Elsasser
    I'm currently experiencing an issue with an OpenGL ES renderbuffer where the backing width and height are both set to 15. Is there any way to set them to 320 and 480? My project is built on Apple's EAGLView class and ES1Renderer, but I've moved it from the app delegate to a controller. I also moved the CADisplayLink outside of it (I update my game logic with the timestamp from this). Any help would be greatly appreciated. I add the glview to the window as follows: CGRect applicationFrame = [[UIScreen mainScreen] applicationFrame]; [window addSubview:gameController.glview]; [window makeKeyAndVisible]; I synthesize the controller and the glview within it. The EAGLView and Renderer are otherwise unmodified. Renderer initialization: // Get the layer CAEAGLLayer *eaglLayer = (CAEAGLLayer *)self.layer; eaglLayer.opaque = TRUE; eaglLayer.drawableProperties = [NSDictionary dictionaryWithObjectsAndKeys: [NSNumber numberWithBool:FALSE], kEAGLDrawablePropertyRetainedBacking, kEAGLColorFormatRGBA8, kEAGLDrawablePropertyColorFormat, nil]; renderer = [[ES1Renderer alloc] init]; Renderer "resize from layer" method: - (BOOL)resizeFromLayer:(CAEAGLLayer *)layer { // Allocate color buffer backing based on the current layer size glBindRenderbufferOES(GL_RENDERBUFFER_OES, colorRenderbuffer); [context renderbufferStorage:GL_RENDERBUFFER_OES fromDrawable:layer]; glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth); glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight); NSLog(@"Backing Width:%i and Height: %i", backingWidth, backingHeight); if (glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES) != GL_FRAMEBUFFER_COMPLETE_OES) { NSLog(@"Failed to make complete framebuffer object %x", glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES)); return NO; } return YES; }

  • Android OpenGL ES texture mapping onto polygons

    - by kamil
    I wrote OpenGL ES code for Android to map textures onto a square, but I want to draw the texture on polygons. When the user moves the image, the texture should be mapped onto polygons with more vertices. I tried the array combinations below for a pentagon but I could not find the correct triangle combination for the indices array. public float vertices[] = { // -1.0f, 1.0f, 0.0f, //Top Left // -1.0f, -1.0f, 0.0f, //Bottom Left // 1.0f, -1.0f, 0.0f, //Bottom Right // 1.0f, 1.0f, 0.0f //Top Right -1.0f, 1.0f, 0.0f, //Top Left -1.0f, -1.0f, 0.0f, //Bottom Left 1.0f, -1.0f, 0.0f, //Bottom Right 1.0f, 1.0f, 0.0f, //Top Right 0.4f, 1.4f, 0.0f }; /** Our texture pointer */ private int[] textures = new int[1]; /** The initial texture coordinates (u, v) */ private float texture[] = { //Mapping coordinates for the vertices // 1.0f, 0.0f, // 1.0f, 1.0f, // 0.0f, 1.0f, // 0.0f, 0.0f, // 0.0f, 1.0f, // 0.0f, 0.0f, // 1.0f, 0.0f, // 1.0f, 1.0f, 0.0f, 0.0f, 0.0f, 1.0f, 1.0f, 1.0f, 1.0f, 0.0f, 1.0f, 0.7f, }; /** The initial indices definition */ private byte indices[] = { //2 triangles // 0,1,2, 2,3,0, 0,1,2, 2,3,4, 3,4,0, //triangles for five vertexes }; I draw with the code below: gl.glDrawElements(GL10.GL_TRIANGLES, indices.length, GL10.GL_UNSIGNED_BYTE, indexBuffer);
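
    A convex outline like this pentagon triangulates as a fan around vertex 0; a sketch of the index list for the five-vertex case above (written in plain C, but the values carry over to the Java byte array):

        const GLubyte indices[] = {
            0, 1, 2,   /* top-left, bottom-left, bottom-right */
            0, 2, 3,   /* top-left, bottom-right, top-right   */
            0, 3, 4,   /* top-left, top-right, extra vertex   */
        };
        glDrawElements(GL_TRIANGLES, 9, GL_UNSIGNED_BYTE, indices);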

  • Android OpenGL ES "glDrawTexfOES" draws upside down

    - by Alle
    I'm using OpenGL for Android to draw my 2D images. Whenever I draw something using the code: gl.glViewport(aspectRatioOffset, 0, screenWidth, screenHeight); gl.glMatrixMode(GL10.GL_PROJECTION); gl.glLoadIdentity(); GLU.gluOrtho2D(gl, aspectRatioOffset, screenWidth + aspectRatioOffset,screenHeight, 0); gl.glMatrixMode(GL10.GL_MODELVIEW); gl.glLoadIdentity(); gl.glEnable(GL10.GL_TEXTURE_2D); gl.glBindTexture(GL10.GL_TEXTURE_2D, myScene.neededGraphics.get(ID).get(animationID).get(animationIndex)); crop[0] = 0; crop[1] = 0; crop[2] = width; crop[3] = height; ((GL11Ext)gl).glDrawTexfOES(x, y, z, width, height) I get an upside down result. I've seen people solve this by doing: crop[0] = 0; crop[1] = height; crop[2] = width; crop[3] = -height; This does, however, hurt the logic in my application, so I would like the result not to be flipped upside down. Does anyone know why it happens, and any way of avoiding or solving it?
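
    One way to keep the flip without letting it leak into application logic is to bury the negative-height crop inside a draw helper, so everything else stays top-left oriented. A sketch in C, assuming GL ES 1.x with the OES_draw_texture extension (all names illustrative):

        static void draw_sprite(GLuint tex, GLfloat x, GLfloat y, GLfloat z,
                                GLfloat w, GLfloat h, GLint texW, GLint texH)
        {
            /* glDrawTexfOES samples the crop rect in texture row order,
               which is bottom-up relative to how images are usually
               loaded; flipping the crop once, here, hides that detail. */
            GLint crop[4] = { 0, texH, texW, -texH };
            glBindTexture(GL_TEXTURE_2D, tex);
            glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_CROP_RECT_OES, crop);
            glDrawTexfOES(x, y, z, w, h);
        }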

  • iOS OpenGL ES 1.1 jerky animation using CADisplayLink (reboot fixes for a while)

    - by timthecoder
    I'm using OpenGL ES 1.1 and CADisplayLink to animate a 3d scene. If the iOS device has been rebooted fairly recently, the animation is smooth and the time delta between two displayLink.timestamp calls is fairly even. But after a few hours or days of the iOS device being used and my app is sometimes run a few times, the animation becomes jerky and the time deltas ramp up and then reset to a lower value only to ramp up again. Like this: 2012-09-01 23:42:58.770 [2678:707] dt= 0.021139 2012-09-01 23:42:58.787 [2678:707] dt= 0.022183 2012-09-01 23:42:58.804 [2678:707] dt= 0.023223 2012-09-01 23:42:58.820 [2678:707] dt= 0.024270 2012-09-01 23:42:58.837 [2678:707] dt= 0.009679 2012-09-01 23:42:58.853 [2678:707] dt= 0.010750 2012-09-01 23:42:58.870 [2678:707] dt= 0.011766 2012-09-01 23:42:58.887 [2678:707] dt= 0.012806 2012-09-01 23:42:58.903 [2678:707] dt= 0.013847 2012-09-01 23:42:58.920 [2678:707] dt= 0.014890 2012-09-01 23:42:58.937 [2678:707] dt= 0.015933 2012-09-01 23:42:58.953 [2678:707] dt= 0.016976 2012-09-01 23:42:58.970 [2678:707] dt= 0.018011 2012-09-01 23:42:58.987 [2678:707] dt= 0.019055 2012-09-01 23:42:59.003 [2678:707] dt= 0.020097 2012-09-01 23:42:59.020 [2678:707] dt= 0.021143 2012-09-01 23:42:59.037 [2678:707] dt= 0.022181 2012-09-01 23:42:59.054 [2678:707] dt= 0.023222 2012-09-01 23:42:59.071 [2678:707] dt= 0.024288 2012-09-01 23:42:59.087 [2678:707] dt= 0.009624 2012-09-01 23:42:59.103 [2678:707] dt= 0.010728 2012-09-01 23:42:59.121 [2678:707] dt= 0.011763 2012-09-01 23:42:59.137 [2678:707] dt= 0.012808 2012-09-01 23:42:59.153 [2678:707] dt= 0.013847 2012-09-01 23:42:59.170 [2678:707] dt= 0.014891 2012-09-01 23:42:59.187 [2678:707] dt= 0.016002 2012-09-01 23:42:59.203 [2678:707] dt= 0.016979 2012-09-01 23:42:59.220 [2678:707] dt= 0.018016 2012-09-01 23:42:59.237 [2678:707] dt= 0.019042 2012-09-01 23:42:59.253 [2678:707] dt= 0.020099 2012-09-01 23:42:59.270 [2678:707] dt= 0.021138 2012-09-01 23:42:59.287 [2678:707] dt= 0.022185 2012-09-01 23:42:59.304 [2678:707] dt= 0.023222 2012-09-01 23:42:59.320 [2678:707] dt= 0.024265 2012-09-01 23:42:59.337 [2678:707] dt= 0.009681 2012-09-01 23:42:59.354 [2678:707] dt= 0.010736 And then if the iOS device is rebooted the animation is smooth again. The problem even occurs on my menu screen when almost no game related calculations are going on in the UpdateAnimation() function. I don't understand what is going on and why a fresh reboot will always fix this problem for a while.
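
    Whatever the cause of the drift, one general defence against uneven callback deltas like the log above is a fixed-timestep accumulator, so simulation steps stay uniform even when the display link does not. A sketch in C (UpdateAnimation and Render stand in for the app's own functions):

        #define STEP (1.0 / 60.0)
        static double accumulator = 0.0;

        void on_frame(double dt)   /* dt from the display link timestamps */
        {
            accumulator += dt;
            while (accumulator >= STEP) {
                UpdateAnimation(STEP);   /* always advance by a fixed step */
                accumulator -= STEP;
            }
            Render();
        }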

  • Overlay an image over video using OpenGL ES shaders

    - by BlueVoodoo
    I am trying to understand the basic concepts of OpenGL. A week into it, I am still far from there. Once I am in GLSL, I know what to do, but I find getting there is the tricky bit. I am currently able to pass in video pixels which I manipulate and present. I have then been trying to add a still image as an overlay. This is where I get lost. My end goal is to end up in the same fragment shader with pixel data from both my video and my still image. I imagine this means I need two textures and to pass in two pixel buffers. I am currently passing the video pixels like this: glGenTextures(1, &textures[0]); //target, texture glBindTexture(GL_TEXTURE_2D, textures[0]); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE); glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, buffer); Would I then repeat this process on textures[1] with the second buffer from the image? If so, do I then bind both GL_TEXTURE0 and GL_TEXTURE1? ...and would my shader look something like this once I am in the shader? uniform sampler2D videoData; uniform sampler2D imageData; It seems no matter what combination I try, the image and video always end up being just video data in both of these. Sorry for the many questions merged in here; I just want to clear up my many assumptions and move on. To clarify the question a bit: what do I need to do to add pixels from a still image into the process described? ("Easy to understand" sample code or any type of hint would be appreciated.)
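
    The "everything is video data" symptom has a common cause: sampler uniforms default to texture unit 0, so both samplers read whatever is bound there. Each sampler has to be pointed at its own unit after binding one texture per unit. A sketch (program, videoTexture and imageTexture are illustrative names):

        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, videoTexture);
        glActiveTexture(GL_TEXTURE1);
        glBindTexture(GL_TEXTURE_2D, imageTexture);
        glUseProgram(program);
        glUniform1i(glGetUniformLocation(program, "videoData"), 0);
        glUniform1i(glGetUniformLocation(program, "imageData"), 1);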

  • "exception at 0x53C227FF (msvcr110d.dll)" with SOIL library

    - by Sean M.
    I'm creating a game in C++ using OpenGL, and decided to go with the SOIL library for image loading, as I have used it in the past to great effect. The problem is, in my newest game, trying to load an image with SOIL throws the following runtime error: This error points to this part: // SOIL.c int query_NPOT_capability( void ) { /* check for the capability */ if( has_NPOT_capability == SOIL_CAPABILITY_UNKNOWN ) { /* we haven't yet checked for the capability, do so */ if( (NULL == strstr( (char const*)glGetString( GL_EXTENSIONS ), "GL_ARB_texture_non_power_of_two" ) ) ) //############ it points here ############// { /* not there, flag the failure */ has_NPOT_capability = SOIL_CAPABILITY_NONE; } else { /* it's there! */ has_NPOT_capability = SOIL_CAPABILITY_PRESENT; } } /* let the user know if we can do non-power-of-two textures or not */ return has_NPOT_capability; } Since it points to the line where SOIL tries to access the OpenGL extensions, I think that for some reason SOIL is trying to load the texture before an OpenGL context is created. The problem is, I've gone through the entire solution, and there is only one place where SOIL has to load a texture, and it happens long after the OpenGL context is created. This is the part where it loads the texture... //Init glfw if (!glfwInit()) { fprintf(stderr, "GLFW Initialization has failed!\n"); exit(EXIT_FAILURE); } printf("GLFW Initialized.\n"); //Process the command line arguments processCmdArgs(argc, argv); //Create the window glfwWindowHint(GLFW_SAMPLES, g_aaSamples); glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3); glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2); g_mainWindow = glfwCreateWindow(g_screenWidth, g_screenHeight, "Voxel Shipyard", g_fullScreen ? glfwGetPrimaryMonitor() : nullptr, nullptr); if (!g_mainWindow) { fprintf(stderr, "Could not create GLFW window!\n"); closeOGL(); exit(EXIT_FAILURE); } glfwMakeContextCurrent(g_mainWindow); printf("Window and OpenGL rendering context created.\n"); //Create the internal rendering components prepareScreen(); //Init glew glewExperimental = GL_TRUE; int err = glewInit(); if (err != GLEW_OK) { fprintf(stderr, "GLEW initialization failed!\n"); fprintf(stderr, "%s\n", glewGetErrorString(err)); closeOGL(); exit(EXIT_FAILURE); } printf("GLEW initialized.\n"); <-- Successfully creates an OpenGL context //Initialize the app g_app = new App(); g_app->PreInit(); g_app->Init(); g_app->PostInit(); <-- Loads the texture (after the context is created) ...and debug printing to the console CONFIRMS that the OpenGL context was created before the texture loading was attempted. So my question is if anyone is familiar with this specific error, or knows of a specific reason why SOIL would think OpenGL isn't initialized yet.
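
    One hypothesis that fits the crash site: the code requests a version 3.2 context, and in a core-profile context glGetString(GL_EXTENSIONS) returns NULL, so SOIL's strstr(NULL, ...) faults even though a context exists. A quick probe to confirm (a sketch):

        const GLubyte *ext = glGetString(GL_EXTENSIONS);
        if (ext == NULL) {
            /* core profile: the list must be read entry by entry instead */
            GLint n = 0;
            glGetIntegerv(GL_NUM_EXTENSIONS, &n);
            for (GLint i = 0; i < n; ++i)
                printf("%s\n", (const char *)glGetStringi(GL_EXTENSIONS, i));
        }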

  • How do I draw an OpenGL point sprite using libgdx for Android?

    - by nbolton
    Here are a few snippets of what I have so far... void create() { renderer = new ImmediateModeRenderer(); tiles = Gdx.graphics.newTexture( Gdx.files.getFileHandle("res/tiles2.png", FileType.Internal), TextureFilter.MipMap, TextureFilter.Linear, TextureWrap.ClampToEdge, TextureWrap.ClampToEdge); } void render() { Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT); Gdx.gl.glClearColor(0.6f, 0.7f, 0.9f, 1); } void renderSprite() { int handle = tiles.getTextureObjectHandle(); Gdx.gl.glBindTexture(GL.GL_TEXTURE_2D, handle); Gdx.gl.glEnable(GL.GL_POINT_SPRITE); Gdx.gl11.glTexEnvi(GL.GL_POINT_SPRITE, GL.GL_COORD_REPLACE, GL.GL_TRUE); renderer.begin(GL.GL_POINTS); renderer.vertex(pos.x, pos.y, pos.z); renderer.end(); } create() is called once when the program starts, and renderSprite() is called for each sprite (so, pos is unique to each sprite) where the sprites are arranged in a sort-of 3D cube. Unfortunately though, this just renders a few white dots... I suppose that the texture isn't being bound, which is why I'm getting white dots. Also, when I draw my sprites on anything other than the 0 z-axis, they do not appear -- I read that I need to increase my zfar and znear, but I have no idea how to do this using libgdx (perhaps it's because I'm using ortho projection? What do I use instead?). I know that the texture is usable, since I was able to render it using a SpriteBatch, but I guess I'm not using it properly with OpenGL.
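
    Two things the snippets never do are enable texturing and give the points a size, either of which would leave small white dots. The raw GL ES calls involved look roughly like this (a sketch; the libgdx Gdx.gl10/Gdx.gl11 objects wrap the same entry points):

        glBindTexture(GL_TEXTURE_2D, handle);
        glEnable(GL_TEXTURE_2D);                 /* texturing must be on   */
        glEnable(GL_POINT_SPRITE_OES);
        glTexEnvi(GL_POINT_SPRITE_OES, GL_COORD_REPLACE_OES, GL_TRUE);
        glPointSize(32.0f);                      /* points default to 1 px */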

  • How can I make OpenGL textures scale without becoming blurry?

    - by adorablepuppy
    I'm using OpenGL through LWJGL. I have a 16x16 textured quad rendering at 16x16. When I change its scale amount, the quad grows, then becomes blurrier as it gets larger. How can I make it scale without becoming blurry, like in Minecraft? Here is the code inside my RenderableEntity object: public void render(){ Color.white.bind(); this.spriteSheet.bind(); GL11.glBegin(GL11.GL_QUADS); GL11.glTexCoord2f(0,0); GL11.glVertex2f(this.x, this.y); GL11.glTexCoord2f(1,0); GL11.glVertex2f(getDrawingWidth(), this.y); GL11.glTexCoord2f(1,1); GL11.glVertex2f(getDrawingWidth(), getDrawingHeight()); GL11.glTexCoord2f(0,1); GL11.glVertex2f(this.x, getDrawingHeight()); GL11.glEnd(); } And here is code from my initGL method in my game class: GL11.glEnable(GL11.GL_TEXTURE_2D); GL11.glClearColor(0.46f,0.46f,0.90f,1.0f); GL11.glViewport(0,0,width,height); GL11.glOrtho(0,width,height,0,1,-1); And here is the code that does the actual drawing: public void start(){ initGL(800,600); init(); while(true){ GL11.glClear(GL11.GL_COLOR_BUFFER_BIT); for(int i=0;i<entities.size();i++){ ((RenderableEntity)entities.get(i)).render(); } Display.update(); Display.sync(100); if(Display.isCloseRequested()){ Display.destroy(); System.exit(0); } } }
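
    The blur comes from the default GL_LINEAR magnification filter, which blends neighbouring texels as the quad grows. Nearest-neighbour filtering gives the crisp Minecraft-style scaling; in raw GL calls (the GL11.* wrappers in LWJGL take the same arguments):

        glBindTexture(GL_TEXTURE_2D, textureId);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);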

  • Trying to use OpenGL in Java on NetBeans but getting an error. Please help

    - by Steven Rogers
    I am on a Mac running NetBeans 6.9. I downloaded and installed LWJGL using this tutorial, down to the letter: http://lwjgl.org/wiki/index.php?title=Setting_Up_LWJGL_with_NetBeans I finished the installation and copied sample code to see if my system is working. I got a bug, and was not sure if it was because of faulty code or I was doing something wrong. So I shortened down the code to this little simple bit: package javaopengl; import org.lwjgl.Sys; import org.lwjgl.opengl.Display; //Testing public class Main { public static void main(String[] args) { boolean fullscreen = (args.length == 1 && args[0].equals("-fullscreen")); try { Display.create(); Display.destroy(); } catch (Exception e) { e.printStackTrace(System.err); } System.exit(0); } } But I still get the same error; this is the error that I get: run: Exception in thread "main" java.lang.NoClassDefFoundError: = Caused by: java.lang.ClassNotFoundException: = at java.net.URLClassLoader$1.run(URLClassLoader.java:202) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:190) at java.lang.ClassLoader.loadClass(ClassLoader.java:306) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301) at java.lang.ClassLoader.loadClass(ClassLoader.java:247) Java Result: 1 BUILD SUCCESSFUL (total time: 0 seconds) I am not sure what exactly is going on. Would you please tell me what is going on and how to fix it? It would be greatly appreciated, thank you. Note: When I am looking at the text in the development environment, it does not show the red underlines that would indicate an error.
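
    A likely culprit, judging by the "=" used as a class name: the JVM is treating a stray "=" token as the main class, which happens when a -D switch gets split across the wrong NetBeans field. The LWJGL native path from the tutorial belongs in Project Properties > Run > VM Options as one unbroken token (the path shown is illustrative):

        -Djava.library.path=/path/to/lwjgl/native/macosx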

  • How do I get the correct values from glReadPixels in OpenGL 3.0?

    - by NoobScratcher
    I'm currently trying to implement mouse selection in my game editor and I ran into a little problem when I look at the values stored in &pixel[0],&pixel[1],&pixel[2],&pixel[3]; I get r: 0 g: 0 b: 0 a: 0 As you can see, I'm not able to get the correct values from glReadPixels(); My 3D models are colored red using glColor3f(255,0,0); I was hoping someone could help me figure this out. Here is the source code: case WM_LBUTTONDOWN: { GetCursorPos(&pos); ScreenToClient(hwnd, &pos); GLenum err = glGetError(); while (glGetError() != GL_NO_ERROR) {cerr << err << endl;} glReadPixels(pos.x, SCREEN_HEIGHT - 1 - pos.y, 1, 1, GL_RGB, GL_UNSIGNED_BYTE, &pixel[0] ); cerr << "r: "<< (int)pixel[0] << endl; cerr << "g: "<< (int)pixel[1] << endl; cerr << "b: "<< (int)pixel[2] << endl; cerr << "a: "<< (int)pixel[3] << endl; cout << pos.x << endl; cout << pos.y << endl; } break; I use: Win32 API, OpenGL 3.0, C++
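
    Two details worth ruling out, sketched below: a GL_RGB read never writes pixel[3], so the printed alpha is whatever was already in memory, and a read issued after the buffer swap samples the wrong buffer. Reading the back buffer explicitly, and as RGBA, covers both:

        GLubyte pixel[4] = { 0, 0, 0, 0 };
        glReadBuffer(GL_BACK);   /* read what was just drawn, before SwapBuffers */
        glReadPixels(pos.x, SCREEN_HEIGHT - 1 - pos.y, 1, 1,
                     GL_RGBA, GL_UNSIGNED_BYTE, pixel);   /* fills all 4 bytes */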

  • Why does the clip space in OpenGL have 4 dimensions?

    - by user827992
    I will use this as a generic reference, but the more I browse online docs and books, the less I understand about this. const float vertexPositions[] = { 0.75f, 0.75f, 0.0f, 1.0f, 0.75f, -0.75f, 0.0f, 1.0f, -0.75f, -0.75f, 0.0f, 1.0f, }; In this online book there is an example of how to draw the first and classic hello world for OpenGL: a triangle. The vertex structure for the triangle is declared as stated in the code above. The book, like all the other sources about this, stresses the point that clip space is a 4D structure that is used to decide what will be rasterized and rendered to the screen. Here are my questions: I can't imagine something in 4D; I don't think a human can do that. What is the 4D for in this clip space? The most human-readable doc that I have read speaks about a camera, which is just an abstraction over the clipping concept, and I get that. The problem is: why not use the concept of a camera in the first place, which is a more familiar 3D structure? The only problem with the concept of a camera is that you need to define the perspective another way, and so you basically have to add another statement about what kind of camera you wish to have. How am I supposed to read 0.75f, 0.75f, 0.0f, 1.0f? All I get is that they are all float values; I get the meaning of the first 3 values, but what does the last one mean?
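
    On the last question in particular: the fourth value is w, and clip space is 4D because GL divides by it. A small worked sketch in C of the perspective divide that turns clip space into 3D normalized device coordinates:

        typedef struct { float x, y, z, w; } vec4;
        typedef struct { float x, y, z; } vec3;

        /* perspective divide: 4D clip space -> 3D normalized device coords */
        vec3 clip_to_ndc(vec4 clip)
        {
            vec3 ndc = { clip.x / clip.w, clip.y / clip.w, clip.z / clip.w };
            return ndc;
        }

    With w = 1.0, as in the array above, the divide changes nothing; a perspective projection matrix instead writes the eye-space depth into w, which is exactly what makes distant points converge toward the centre of the screen.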

  • Can I change the order of these OpenGL / Win32 calls?

    - by Adam Naylor
    I've been adapting the NeHe OpenGL/Win32 code to be more object-oriented and I don't like the way some of the calls are structured. The example has the following pseudo structure: Register window class Change display settings with a DEVMODE Adjust window rect Create window Get DC Find closest matching pixel format Set the pixel format to closest match Create rendering context Make that context current Show the window Set it to foreground Set it to having focus Resize the GL scene Init GL The points in bold are what I want to move into a rendering class (the rest are what I see being pure Win32 calls) but I'm not sure if I can call them after the Win32 calls. Essentially what I'm aiming for is to encapsulate the Win32 calls into a Platform::Initiate() type method and the rest into a sort of Renderer::Initiate() method. So my question essentially boils down to: "Would OpenGL allow these methods to be called in this order?" Register window class Adjust window rect Create window Get DC Show the window Set it to foreground Set it to having focus Change display settings with a DEVMODE Find closest matching pixel format Set the pixel format to closest match Create rendering context Make that context current Resize the GL scene Init GL (obviously passing through the appropriate window handles and device contexts.) Thanks in advance.
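
    The hard constraints are the GL-facing ones; a sketch of the chain that cannot be reordered, with the window-manager calls (ShowWindow, SetForegroundWindow, ChangeDisplaySettings) free to float around it:

        PIXELFORMATDESCRIPTOR pfd = { sizeof(PIXELFORMATDESCRIPTOR) };
        HDC   hdc = GetDC(hwnd);                   /* needs the created window */
        int   pf  = ChoosePixelFormat(hdc, &pfd);  /* needs a valid DC         */
        SetPixelFormat(hdc, pf, &pfd);             /* must precede the context */
        HGLRC rc  = wglCreateContext(hdc);         /* needs the pixel format   */
        wglMakeCurrent(hdc, rc);                   /* GL calls legal from here */

    So the proposed Renderer::Initiate() split should work, as long as it receives the window's DC and runs before any GL state is touched.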

  • How can I render a semi transparent model with OpenGL correctly?

    - by phobitor
    I'm using OpenGL ES 2 and I want to render a simple model with some level of transparency. I'm just starting out with shaders, and I wrote a simple diffuse shader for the model without any issues but I don't know how to add transparency to it. I tried to set my fragment shader's output (gl_FragColor) to a non opaque alpha value but the results weren't too great. It sort of works, but it looks like certain model triangles are only rendered based on the camera position... It's really hard to describe what's wrong so please watch this short video I recorded: http://www.youtube.com/watch?v=s0JqA0rZabE I thought this was a depth testing issue so I tried playing around with enabling/disabling depth testing and back face culling. Enabling back face culling changes the output slightly but the problem in the video is still there. Enabling/disabling depth testing doesn't seem to do anything. Could anyone explain what I'm seeing and how I can add some simple transparency to my model with the shader? I'm not looking for advanced order independent transparency implementations. edit: Vertex Shader: // color varying for fragment shader varying mediump vec3 LightIntensity; varying highp vec3 VertexInModelSpace; void main() { // vec4 LightPosition = vec4(0.0, 0.0, 0.0, 1.0); vec3 LightColor = vec3(1.0, 1.0, 1.0); vec3 DiffuseColor = vec3(1.0, 0.25, 0.0); // find the vector from the given vertex to the light source vec4 vertexInWorldSpace = gl_ModelViewMatrix * vec4(gl_Vertex); vec3 normalInWorldSpace = normalize(gl_NormalMatrix * gl_Normal); vec3 lightDirn = normalize(vec3(LightPosition-vertexInWorldSpace)); // save vertexInWorldSpace VertexInModelSpace = vec3(gl_Vertex); // calculate light intensity LightIntensity = LightColor * DiffuseColor * max(dot(lightDirn,normalInWorldSpace),0.0); // calculate projected vertex position gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex; } Fragment Shader: // varying to define color varying vec3 LightIntensity; varying vec3 VertexInModelSpace; void main() { gl_FragColor = vec4(LightIntensity,0.5); }
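
    What the video shows is the classic sorting artifact: with depth writes enabled, whichever transparent triangle is drawn first blocks the ones behind it, and which one "wins" changes with the camera. A common minimal recipe, short of full order-independent transparency: draw opaque geometry first, then the transparent model with blending on and depth writes off (a sketch):

        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        glEnable(GL_DEPTH_TEST);
        glDepthMask(GL_FALSE);   /* test against opaque depth, don't write it */
        /* ... draw transparent meshes, roughly back to front ... */
        glDepthMask(GL_TRUE);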

  • Proper way to do texture mapping in modern OpenGL?

    - by RubyKing
    I'm trying to do texture mapping using OpenGL 3.3 and GLSL 150. The problem is the texture shows but has this weird flicker; I can show a video here. My texcoords are in a vertex array. I have my fragment color set to the texture values and texel values. I have my vertex shader sending the texture coords to texture coordinates to be used in the fragment shader. I have my ins and outs set up and I still don't know what I'm missing that could be causing that flicker. Here is my code: Fragment shader #version 150 uniform sampler2D texture; in vec2 texture_coord; varying vec3 texture_coordinate; void main(void) { gl_FragColor = texture(texture, texture_coord); } Vertex shader #version 150 in vec4 position; out vec2 texture_coordinate; out vec2 texture_coord; uniform vec3 translations; void main() { texture_coord = (texture_coordinate); gl_Position = vec4(position.xyz + translations.xyz, 1.0); } Last bit Here is my vertex array with texture coordinates: GLfloat vVerts[] = { 0.5f, 0.5f, 0.0f, 0.0f, 1.0f, 0.0f, 0.5f, 0.0f, 1.0f, 1.0f, 0.0f, 0.0f, 0.0f, 0.0f, 0.0f, 0.5f, 0.0f, 0.0f, 1.0f, 0.0f}; //tex x and y If you need to see all the code, here is a link to every file. Thank you for your help.
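
    Two things worth checking against the code above, sketched with assumed attribute locations 0 (position) and 1 (texture coordinate): the interleaved array packs 3 position floats plus 2 texcoord floats per vertex, so both attributes need a five-float stride, and the coordinates must reach the vertex shader as an "in" attribute rather than being copied from an uninitialized "out":

        GLsizei stride = 5 * sizeof(GLfloat);   /* 3 position + 2 texcoord */
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, stride, vVerts);
        glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, stride, vVerts + 3);
        glEnableVertexAttribArray(0);
        glEnableVertexAttribArray(1);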

  • "Unable to create context rendering" error when running an OpenGL application

    - by Rodnower
    Hello, I am trying to run the Mesa gears example and I get the following error: freeglut (./gears): Unable to create direct context rendering for window 'Gears' This may hurt performance. The application still runs successfully, but I expect that in the future I will have performance problems. I run Linux CentOS 5 on VMware 7. Mesa's version is 6.5. Relevant output of lspci -v gives: 00:0f.0 VGA compatible controller: VMware SVGA II Adapter (prog-if 00 [VGA controller]) Subsystem: VMware SVGA II Adapter Flags: bus master, medium devsel, latency 64, IRQ 9 I/O ports at 10d0 [size=16] Memory at d0000000 (32-bit, non-prefetchable) [size=128M] Memory at d8000000 (32-bit, non-prefetchable) [size=8M] [virtual] Expansion ROM at 30000000 [disabled] [size=32K] Capabilities: [40] Vendor Specific Information Does anyone have an idea? Is there a VMware driver for CentOS? Thanks in advance.

  • Oracle Database 12c is available for download

    - by user645740
    The NEW version of Oracle Database has been released: Oracle Database 12c, with numerous innovations and new features. One of the most important is the Multitenant feature, built on the container database and pluggable database architecture, which primarily supports database consolidation and database cloud deployments. Automatic Data Optimization, together with the Heat Map, enables automatic compression and tiered placement of data. Beyond that, there are new features in security, availability and many other areas. The new version can be downloaded for Linux x86-64, Solaris SPARC64 and Solaris (x86-64) from the Oracle Technology Network. You can register for the launch webcast here.

  • Getting started with OpenGL

    - by Bryan Denny
    As you can see here, I'm about to start work on a 3D project for class. Do you have any useful resources/websites/tips/etc. for someone getting started with OpenGL for the first time? The project will be in C++ and accessing OpenGL via GLUT. Thanks!
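
    For scale, a complete GLUT program is only a couple of dozen lines of C; the C++ version for the class project would look much the same (a sketch):

        #include <GL/glut.h>

        void display(void)
        {
            glClear(GL_COLOR_BUFFER_BIT);
            glBegin(GL_TRIANGLES);          /* the traditional first triangle */
            glVertex2f(-0.5f, -0.5f);
            glVertex2f( 0.5f, -0.5f);
            glVertex2f( 0.0f,  0.5f);
            glEnd();
            glutSwapBuffers();
        }

        int main(int argc, char **argv)
        {
            glutInit(&argc, argv);
            glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
            glutCreateWindow("Hello OpenGL");
            glutDisplayFunc(display);
            glutMainLoop();
            return 0;
        }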

  • Is there any decent OpenGL SceneGraph API/framework?

    - by JohnIdol
    I am new to OpenGL. Wondering if there is any good scene graph API/framework for OpenGL. At the moment I am using GLUT with a custom node-based solution: I am setting children and siblings for each node, then calling a traverse function. I'd like a more flexible solution when it comes to managing dynamic elements in the scene.

  • Unloading vertex buffers in OpenGL

    - by Jeremy Statz
    I have an Android live wallpaper that I suspect is leaking memory, probably either textures or vertex arrays. I'm calling glDeleteTextures on my texture IDs, but don't see any sort of equivalent for my vertex buffers. I'd like to be able to be sure that both my textures and buffers are getting unloaded by OpenGL; am I missing something? The documents I've found seem to suggest OpenGL just works it out on its own, but that's not giving me a lot of comfort.
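
    Vertex buffer objects do have a direct counterpart to glDeleteTextures: glDeleteBuffers. A sketch of the matching pair (the handle name is illustrative):

        GLuint vbo;
        glGenBuffers(1, &vbo);      /* mirror of glGenTextures    */
        /* ... glBindBuffer / glBufferData / draw ... */
        glDeleteBuffers(1, &vbo);   /* mirror of glDeleteTextures */

    Plain client-side vertex arrays handed to glVertexPointer, by contrast, are ordinary application memory; there is nothing GL-side to free for those.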

  • Recommended OpenGL / GLUT Reference

    - by TJB
    What OpenGL / GLUT reference is the best around? Ideally I'm looking for something with C++ sample code to help me learn OpenGL, as well as details about the APIs, similar to what MSDN provides for .NET programming. If there isn't a one-stop shop, then please list the set of references I should use and what the strengths of each one are. Thanks!
