Search Results

Search found 3627 results on 146 pages for 'opengl es'.

Page 15/146 | < Previous Page | 11 12 13 14 15 16 17 18 19 20 21 22  | Next Page >

  • What data type should I use for my texture coordinates in OpenGL ES?

    - by Matthew Chen
    I notice that the default data type for texture coordinates in the OpenGL docs is GLfloat, but much of the sample code I see written by experienced iPhone developers uses GLshort or GLbyte. Is this an optimization?

        GLfloat iconSTs[] = {
            // Upper left
            x1, y2,
            // Lower left
            x1, y1,
            // Lower right
            x2, y1,
            // Upper right
            x2, y2,
        };
        glTexCoordPointer(2, GL_FLOAT, 0, iconSTs);

    vs.

        GLbyte iconSTs[] = {
            // Upper left
            x1, y2,
            // Lower left
            x1, y1,
            // Lower right
            x2, y1,
            // Upper right
            x2, y2,
        };
        glTexCoordPointer(2, GL_BYTE, 0, iconSTs);
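
    For context: OpenGL ES 1.1 does not normalize integer texture coordinates, so code using GLbyte or GLshort usually stores texel positions and rescales them with the texture matrix. A minimal sketch of that trick (the 64x64 texture size here is an assumption for illustration, not from the question):

        /* Sketch: store texcoords as shorts (texel positions in a 64x64 texture)
           and map them back to [0, 1] via the texture matrix, since ES 1.1 does
           not normalize integer texture coordinates. */
        GLshort iconSTs[] = {
            0, 64,    /* upper left  */
            0, 0,     /* lower left  */
            64, 0,    /* lower right */
            64, 64,   /* upper right */
        };
        glMatrixMode(GL_TEXTURE);
        glLoadIdentity();
        glScalef(1.0f / 64.0f, 1.0f / 64.0f, 1.0f);
        glMatrixMode(GL_MODELVIEW);
        glTexCoordPointer(2, GL_SHORT, 0, iconSTs);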

    Read the article

  • What's the minimum iOS version that supports OpenGL ES 2.0?

    - by Shireesh Agrawal
    Hi, I am not sure if the question even makes sense. I am writing an iPhone game which uses OpenGL ES 2.0. I know that OpenGL ES 2.0 is supported on the 3GS and higher. Is there a minimum iOS version requirement too (for example, does the device need iOS 3.1.3 or higher), or does it depend solely on the hardware? Thanks! -shireesh P.S. I tried searching the net but haven't found much; perhaps I am not using the right keywords.

    Read the article

  • Why does OpenGL's glDrawArrays() fail with GL_INVALID_OPERATION under Core Profile 3.2, but not 3.3 or 4.2?

    - by metaleap
    I have OpenGL rendering code calling glDrawArrays that works flawlessly when the OpenGL context is (automatically / implicitly obtained) 4.2, but fails consistently (GL_INVALID_OPERATION) with an explicitly requested OpenGL core context 3.2. (Shaders are always set to #version 150 in both cases, but that's beside the point here, I suspect.) According to the spec, there are only two cases where glDrawArrays() fails with GL_INVALID_OPERATION:

    1. "if a non-zero buffer object name is bound to an enabled array and the buffer object's data store is currently mapped" -- I'm not doing any buffer mapping at this point.

    2. "if a geometry shader is active and mode is incompatible with [...]" -- nope, no geometry shaders as of now.

    Furthermore: I have verified and double-checked that it's only the glDrawArrays() calls failing. I have also double-checked that all arguments passed to glDrawArrays() are identical under both GL versions, buffer bindings too. This happens across 3 different NVIDIA GPUs and 2 different OSes (Win7 and OS X, both 64-bit; of course, on OS X we have only the 3.2 context, no 4.2 anyway). It does not happen with an integrated "Intel HD" GPU, but for that one I only get an automatic implicit 3.3 context (trying to explicitly force a 3.2 core profile with this GPU via GLFW fails the window creation, but that's an entirely different issue...).

    For what it's worth, here's the relevant routine excerpted from the render loop, in Go:

        func (me *TMesh) render() {
            curMesh = me
            curTechnique.OnRenderMesh()
            gl.BindBuffer(gl.ARRAY_BUFFER, me.glVertBuf)
            if me.glElemBuf > 0 {
                gl.BindBuffer(gl.ELEMENT_ARRAY_BUFFER, me.glElemBuf)
                gl.VertexAttribPointer(curProg.AttrLocs["aPos"], 3, gl.FLOAT, gl.FALSE, 0, gl.Pointer(nil))
                gl.DrawElements(me.glMode, me.glNumIndices, gl.UNSIGNED_INT, gl.Pointer(nil))
                gl.BindBuffer(gl.ELEMENT_ARRAY_BUFFER, 0)
            } else {
                gl.VertexAttribPointer(curProg.AttrLocs["aPos"], 3, gl.FLOAT, gl.FALSE, 0, gl.Pointer(nil))
                /* BOOM! */
                gl.DrawArrays(me.glMode, 0, me.glNumVerts)
            }
            gl.BindBuffer(gl.ARRAY_BUFFER, 0)
        }

    So of course this is part of a bigger render loop, though the whole "*TMesh" construction for now is just two instances, one a simple cube and the other a simple pyramid. What matters is that the entire drawing loop works flawlessly, with no errors reported when GL is queried for errors, under both 3.3 and 4.2, yet on 3 NVIDIA GPUs with an explicit 3.2 core profile it fails with an error code that, according to the spec, is only raised in two specific situations, neither of which applies here as far as I can tell. What could be wrong here? Have you ever run into this? Any ideas what I have been missing?
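
    One additional core-profile rule worth ruling out (an assumption about the cause, not something stated in the question): a strictly core context has no default vertex array object, so attribute setup and draw calls made while VAO 0 is bound raise GL_INVALID_OPERATION on conforming drivers, while compatibility or implicitly obtained contexts quietly provide a default VAO. A minimal sketch of that workaround in C:

        /* Sketch: in a core profile there is no default VAO, so create one up
           front and keep it bound; attribute pointers and draw calls are
           invalid while vertex array object 0 is bound. */
        GLuint vao;
        glGenVertexArrays(1, &vao);
        glBindVertexArray(vao);   /* once, right after context creation */
        /* ... the existing BindBuffer / VertexAttribPointer / DrawArrays
           code then records its state into this VAO ... */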

    Read the article

  • How Can I Compress Textures in OpenGL on iPhone/iPad?

    - by nonamelive
    Hi, I'm making an iPad app which needs OpenGL to do a flip animation. I have a front image texture and a back image texture; both textures are screenshots.

        // Capture an image of the screen
        UIGraphicsBeginImageContext(view.bounds.size);
        [view.layer renderInContext:UIGraphicsGetCurrentContext()];
        image = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();

        // Allocate some memory for the texture
        GLubyte *textureData = (GLubyte *)calloc(maxTextureSize * 4, maxTextureSize);

        // Create a drawing context to draw image into texture memory
        CGContextRef textureContext = CGBitmapContextCreate(textureData, maxTextureSize, maxTextureSize, 8, maxTextureSize * 4, CGImageGetColorSpace(image.CGImage), kCGImageAlphaPremultipliedLast);
        CGContextDrawImage(textureContext, CGRectMake(0, maxTextureSize - size.height, size.width, size.height), image.CGImage);
        CGContextRelease(textureContext);

        // ...done creating the texture data
        [EAGLContext setCurrentContext:context];
        glGenTextures(1, &textureToView);
        glBindTexture(GL_TEXTURE_2D, textureToView);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, maxTextureSize, maxTextureSize, 0, GL_RGBA, GL_UNSIGNED_BYTE, textureData);

        // Free the texture data, which has by now been copied into the GL context
        free(textureData);

    Each texture takes up about 8MB of memory, which is unacceptable for an iPhone/iPad app. Could anyone tell me how I can compress the textures to reduce memory use? I'm a complete newbie to OpenGL. Any help would be appreciated!
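
    The GPU's native compressed format on these devices (PVRTC) generally has to be generated offline, so it is a poor fit for screenshots captured at runtime; a cheaper runtime option is uploading 16-bit pixels instead of 32-bit RGBA, roughly halving the footprint. A minimal sketch, assuming the RGBA8888 buffer from above is repacked first (repack4444 is a hypothetical helper, not a system function):

        /* Sketch: upload the screenshot as 16-bit RGBA4444 instead of RGBA8888.
           repack4444() is a hypothetical helper that converts each 32-bit pixel
           to a packed 4/4/4/4 value; it is not part of any system framework. */
        GLushort *data16 = repack4444(textureData, maxTextureSize, maxTextureSize);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, maxTextureSize, maxTextureSize,
                     0, GL_RGBA, GL_UNSIGNED_SHORT_4_4_4_4, data16);
        free(data16);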

    Read the article

  • Porting a project to OpenGL3

    - by Decapsuleur
    Hi everyone, I'm working on a C++ cross-platform OpenGL application (Windows, Linux and Mac OS) and I am wondering if some of you could share some advice on porting a large application to OpenGL 3. The reason I am looking into OpenGL 3 is that I think we could benefit a lot from using the new sync objects. NVIDIA has supported a similar extension since the GeForce 256 days (GL_NV_fence), but there seems to be no equivalent functionality on ATI hardware before OpenGL 3.0+...

    Our code makes quite heavy use of GLUT/freeglut, GLU functions, OpenGL 2 extensions and CUDA (on supported hardware). The problem I am now facing is that "gl3.h" and "gl.h" are mutually incompatible (as stated in gl3.h). Do you know if there is a GL3 GLUT equivalent? Also, looking at the CUDA toolkit header files, it seems that GL-CUDA interoperability is only available when using older versions of OpenGL (cuda_gl_interop.h includes gl.h...). Am I missing something? Thanks a lot for your help.
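
    For reference, the sync-object API that motivates the port (ARB_sync, core since OpenGL 3.2) is small; a minimal usage sketch:

        /* Sketch: insert a fence after submitting work, then block (with a
           timeout) until the GPU has passed it. */
        GLsync fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
        GLenum status = glClientWaitSync(fence, GL_SYNC_FLUSH_COMMANDS_BIT,
                                         1000000000ull /* 1 s timeout, in ns */);
        if (status == GL_TIMEOUT_EXPIRED) {
            /* GPU still busy: decide whether to wait again or move on */
        }
        glDeleteSync(fence);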

    Read the article

  • How to detect OpenGL capabilities without creating a GLSurfaceView (Android)

    - by ADB
    I am trying to query the OpenGL capabilities of the phone before deciding whether to use OpenGL or Canvas for graphics purposes. However, all the functions I can find documentation on require you to already have a valid OpenGL context (namely, create a GLSurfaceView, assign it a renderer, then check the OpenGL parameters in onSurfaceCreated). So, is there a way to check the extensions, renderer name and maximum texture size of the phone BEFORE having to create any OpenGL views?
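
    One possible approach, sketched here under the assumption that a throwaway offscreen context is acceptable (shown as C/NDK code; the Java EGL10 API mirrors these calls): create a tiny pbuffer context purely to query the strings, then tear it down, without ever creating a view.

        #include <EGL/egl.h>
        #include <GLES/gl.h>

        /* Sketch: create a throwaway 1x1 pbuffer EGL context, query the
           capabilities, then destroy everything. No view is ever created. */
        static void queryGlCaps(void) {
            EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
            eglInitialize(dpy, NULL, NULL);

            const EGLint cfgAttrs[] = { EGL_SURFACE_TYPE, EGL_PBUFFER_BIT, EGL_NONE };
            EGLConfig cfg;
            EGLint n;
            eglChooseConfig(dpy, cfgAttrs, &cfg, 1, &n);

            const EGLint pbAttrs[] = { EGL_WIDTH, 1, EGL_HEIGHT, 1, EGL_NONE };
            EGLSurface surf = eglCreatePbufferSurface(dpy, cfg, pbAttrs);
            EGLContext ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, NULL);
            eglMakeCurrent(dpy, surf, surf, ctx);

            const GLubyte *renderer = glGetString(GL_RENDERER);   /* renderer name  */
            const GLubyte *exts = glGetString(GL_EXTENSIONS);     /* extension list */
            GLint maxTex = 0;
            glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTex);          /* max texture size */

            eglMakeCurrent(dpy, EGL_NO_SURFACE, EGL_NO_SURFACE, EGL_NO_CONTEXT);
            eglDestroyContext(dpy, ctx);
            eglDestroySurface(dpy, surf);
            eglTerminate(dpy);
        }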

    Read the article

  • JOGL2 test compiles, but doesn't execute - help?

    - by Chuchinyi
    I have a problem with JOGL2. My JOGL2Template.java compiles fine, but executing it results in the following error:

        D:\java\java\jogl>javac JOGL2Template.java   <== compile ok
        D:\java\java\jogl>java JOGL2Template         <== execute error
        Exception in thread "main" java.lang.ExceptionInInitializerError
                at javax.media.opengl.GLProfile.<clinit>(GLProfile.java:1176)
                at JOGL2Template.<init>(JOGL2Template.java:24)
                at JOGL2Template.main(JOGL2Template.java:57)
        Caused by: java.lang.SecurityException: no certificate for gluegen-rt.dll in D:\java\lib\gluegen-rt-natives-windows-i586.jar
                at com.jogamp.common.util.JarUtil.validateCertificate(JarUtil.java:350)
                at com.jogamp.common.util.JarUtil.validateCertificates(JarUtil.java:324)
                at com.jogamp.common.util.cache.TempJarCache.validateCertificates(TempJarCache.java:328)
                at com.jogamp.common.util.cache.TempJarCache.bootstrapNativeLib(TempJarCache.java:283)
                at com.jogamp.common.os.Platform$3.run(Platform.java:308)
                at java.security.AccessController.doPrivileged(Native Method)
                at com.jogamp.common.os.Platform.loadGlueGenRTImpl(Platform.java:298)
                at com.jogamp.common.os.Platform.<clinit>(Platform.java:207)
                ... 3 more

    Here is the JOGL2Template.java source code:

        import java.awt.Dimension;
        import java.awt.event.WindowAdapter;
        import java.awt.event.WindowEvent;
        import javax.media.opengl.GLAutoDrawable;
        import javax.media.opengl.GLCapabilities;
        import javax.media.opengl.GLEventListener;
        import javax.media.opengl.GLProfile;
        import javax.media.opengl.awt.GLCanvas;
        import com.jogamp.opengl.util.FPSAnimator;
        import javax.swing.JFrame;

        /*
         * JOGL 2.0 program template for AWT applications.
         */
        public class JOGL2Template extends JFrame implements GLEventListener {
            private static final int CANVAS_WIDTH = 640;   // Width of the drawable
            private static final int CANVAS_HEIGHT = 480;  // Height of the drawable
            private static final int FPS = 60;             // Animator's target frames per second

            // Constructor to create profile, caps, drawable, animator, and initialize the frame.
            public JOGL2Template() {
                // Get the default OpenGL profile that best reflects your running platform.
                GLProfile glp = GLProfile.getDefault();

                // Specifies a set of OpenGL capabilities, based on your profile.
                GLCapabilities caps = new GLCapabilities(glp);

                // Allocate a GLDrawable, based on your OpenGL capabilities.
                GLCanvas canvas = new GLCanvas(caps);
                canvas.setPreferredSize(new Dimension(CANVAS_WIDTH, CANVAS_HEIGHT));
                canvas.addGLEventListener(this);

                // Create an animator that drives the canvas' display() at 60 fps.
                final FPSAnimator animator = new FPSAnimator(canvas, FPS);

                addWindowListener(new WindowAdapter() {  // For the close button
                    @Override
                    public void windowClosing(WindowEvent e) {
                        // Use a dedicated thread to run stop() to ensure that the
                        // animator stops before the program exits.
                        new Thread() {
                            @Override
                            public void run() {
                                animator.stop();
                                System.exit(0);
                            }
                        }.start();
                    }
                });

                add(canvas);
                pack();
                setTitle("OpenGL 2 Test");
                setVisible(true);
                animator.start();  // Start the animator
            }

            public static void main(String[] args) {
                new JOGL2Template();
            }

            @Override
            public void init(GLAutoDrawable drawable) {
                // OpenGL code for one-time initialization tasks,
                // such as setting up lights and display lists.
            }

            @Override
            public void display(GLAutoDrawable drawable) {
                // OpenGL rendering code for each refresh.
            }

            @Override
            public void reshape(GLAutoDrawable drawable, int x, int y, int w, int h) {
                // OpenGL code to set up the viewport, projection mode and view volume.
            }

            @Override
            public void dispose(GLAutoDrawable drawable) {
                // Hardly used.
            }
        }

    Any ideas what might be the cause of these errors?

    Read the article

  • C++ most used libraries [on hold]

    - by Basaa
    I'm trying to find out whether or not I want to switch from Java to C++ for my OpenGL game programming. I have now set up a test project in VS 11 Professional with GLUT. I create my windows with GLUT, and I can render OpenGL primitives without any problems. Now my question: which library (or libraries) is used most in the indie/semi-professional industry for using OpenGL in C++? By 'using OpenGL' I mean:

    • Creating and managing an OpenGL window
    • Actually using the OpenGL API
    • Handling user input (keyboard/mouse)
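
    For comparison, GLFW is one widely used C library covering exactly these three jobs (window, context, input); a minimal sketch using the GLFW 3 API, as an illustration rather than a recommendation from the question itself:

        /* Minimal GLFW 3 sketch: window + context + input in one small C API. */
        #include <GLFW/glfw3.h>

        int main(void) {
            if (!glfwInit())
                return -1;
            GLFWwindow *window = glfwCreateWindow(640, 480, "Test", NULL, NULL);
            if (!window) {
                glfwTerminate();
                return -1;
            }
            glfwMakeContextCurrent(window);
            while (!glfwWindowShouldClose(window)) {
                if (glfwGetKey(window, GLFW_KEY_ESCAPE) == GLFW_PRESS)
                    glfwSetWindowShouldClose(window, 1);
                /* ... OpenGL drawing here ... */
                glfwSwapBuffers(window);
                glfwPollEvents();
            }
            glfwTerminate();
            return 0;
        }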

    Read the article

  • What is a better abstraction layer for D3D9 and OpenGL vertex data management?

    - by Sam Hocevar
    My rendering code has always been OpenGL. I now need to support a platform that does not have OpenGL, so I have to add an abstraction layer that wraps OpenGL and Direct3D 9. I will support Direct3D 11 later. TL;DR: the differences between OpenGL and Direct3D cause redundancy for the programmer, and the data layout feels flaky.

    For now, my API works a bit like this. This is how a shader is created:

        Shader *shader = Shader::Create(
            " ... GLSL vertex shader ... ",
            " ... GLSL pixel shader ... ",
            " ... HLSL vertex shader ... ",
            " ... HLSL pixel shader ... ");
        ShaderAttrib a1 = shader->GetAttribLocation("Point", VertexUsage::Position, 0);
        ShaderAttrib a2 = shader->GetAttribLocation("TexCoord", VertexUsage::TexCoord, 0);
        ShaderAttrib a3 = shader->GetAttribLocation("Data", VertexUsage::TexCoord, 1);
        ShaderUniform u1 = shader->GetUniformLocation("WorldMatrix");
        ShaderUniform u2 = shader->GetUniformLocation("Zoom");

    There is already a problem here: once a Direct3D shader is compiled, there is no way to query an input attribute by its name; apparently only the semantics stay meaningful. This is why GetAttribLocation has these extra arguments, which get hidden in ShaderAttrib.

    This is how I create a vertex declaration and two vertex buffers:

        VertexDeclaration *decl = VertexDeclaration::Create(
            VertexStream<vec3,vec2>(VertexUsage::Position, 0,
                                    VertexUsage::TexCoord, 0),
            VertexStream<vec4>(VertexUsage::TexCoord, 1));
        VertexBuffer *vb1 = new VertexBuffer(NUM * (sizeof(vec3) + sizeof(vec2)));
        VertexBuffer *vb2 = new VertexBuffer(NUM * sizeof(vec4));

    Another problem: the information VertexUsage::Position, 0 is totally useless to the OpenGL/GLSL backend because it does not care about semantics.

    Once the vertex buffers have been filled with or pointed at data, this is the rendering code:

        shader->Bind();
        shader->SetUniform(u1, GetWorldMatrix());
        shader->SetUniform(u2, blah);
        decl->Bind();
        decl->SetStream(vb1, a1, a2);
        decl->SetStream(vb2, a3);
        decl->DrawPrimitives(VertexPrimitive::Triangle, NUM / 3);
        decl->Unbind();
        shader->Unbind();

    You see that decl is a bit more than just a D3D-like vertex declaration; it kind of takes care of rendering as well. Does this make sense at all? What would be a cleaner design? Or a good source of inspiration?

    Read the article

  • What is the recommended way to output values to FBO targets? (OpenGL 3.3 + GLSL 330)

    - by datSilencer
    I'll begin by apologizing for any dumb assumptions you might find in the code below, since I'm still pretty much green when it comes to OpenGL programming. I'm currently trying to implement deferred shading by using FBOs and their associated targets (textures in my case). I have a simple (I think :P) geometry+fragment shader program and I'd like to write its fragment shader stage output to three different render targets (previously bound by a call to glDrawBuffers()), like so:

        #version 330

        in vec3 WorldPos0;
        in vec2 TexCoord0;
        in vec3 Normal0;
        in vec3 Tangent0;

        layout(location = 0) out vec3 WorldPos;
        layout(location = 1) out vec3 Diffuse;
        layout(location = 2) out vec3 Normal;

        uniform sampler2D gColorMap;
        uniform sampler2D gNormalMap;

        vec3 CalcBumpedNormal() {
            vec3 Normal = normalize(Normal0);
            vec3 Tangent = normalize(Tangent0);
            Tangent = normalize(Tangent - dot(Tangent, Normal) * Normal);
            vec3 Bitangent = cross(Tangent, Normal);
            vec3 BumpMapNormal = texture(gNormalMap, TexCoord0).xyz;
            BumpMapNormal = 2 * BumpMapNormal - vec3(1.0, 1.0, -1.0);
            vec3 NewNormal;
            mat3 TBN = mat3(Tangent, Bitangent, Normal);
            NewNormal = TBN * BumpMapNormal;
            NewNormal = normalize(NewNormal);
            return NewNormal;
        }

        void main() {
            WorldPos = WorldPos0;
            Diffuse = texture(gColorMap, TexCoord0).xyz;
            Normal = CalcBumpedNormal();
        }

    My render target textures are configured as:

    • RT1: (GL_RGB32F, GL_RGB, GL_FLOAT, GL_TEXTURE0, GL_COLOR_ATTACHMENT0)
    • RT2: (GL_RGB32F, GL_RGB, GL_FLOAT, GL_TEXTURE1, GL_COLOR_ATTACHMENT1)
    • RT3: (GL_RGB32F, GL_RGB, GL_FLOAT, GL_TEXTURE2, GL_COLOR_ATTACHMENT2)

    Assuming that each texture has an internal format capable of containing the incoming data, will the fragment shader write the corresponding values to the expected texture targets? On a related note, do the textures need to be bound to the OpenGL context when they are multiple render targets?

    From some Googling, I think there are two other ways to output to MRTs:

    1. Output each component to gl_FragData[n]. Some forum posts say this method is deprecated. However, looking at the latest OpenGL 3.3 and 4.0 specifications at opengl.org, the core profiles still mention this approach.

    2. Use a typed output array variable of the expected type. In this case, I think it would be something like this:

        out vec3 output[3];

        void main() {
            output[0] = WorldPos0;
            output[1] = texture(gColorMap, TexCoord0).xyz;
            output[2] = CalcBumpedNormal();
        }

    So which is the recommended approach? Is there a recommended approach at all if I plan to code on top of OpenGL 3.3? Thanks for your time and help!
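
    For completeness, a minimal sketch of the FBO plumbing those layout locations map onto (assuming texture objects tex[0..2] have already been created with GL_RGB32F storage; attachment i receives the output at location i via the glDrawBuffers list):

        /* Sketch: bind three textures as color attachments and route fragment
           outputs 0..2 to them. The textures do NOT need to be bound to
           texture units while being rendered into. */
        GLuint fbo;
        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);
        for (int i = 0; i < 3; ++i)
            glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i,
                                   GL_TEXTURE_2D, tex[i], 0);
        const GLenum bufs[3] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1,
                                 GL_COLOR_ATTACHMENT2 };
        glDrawBuffers(3, bufs);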

    Read the article

  • OpenGL ES clarifying question regarding FBOs -- sorry can't find this info anywhere else?

    - by DevDevDev
    If I instantiate an FBO without binding a renderbuffer or a texture to it, what happens when I draw to it -- nothing? Do I need to associate a render target (renderbuffer or texture) for an FBO to do anything? What I'm trying to do is precache some buffers and then merge them later, but that doesn't seem to work at all. Ideally I'd like to do something like:

        glBindFramebufferOES(GL_FRAMEBUFFER_OES, fbo1);
        // Draw some stuff to fbo1
        glBindFramebufferOES(GL_FRAMEBUFFER_OES, fbo2);
        // Draw some stuff to fbo2
        // ...
        // glRenderFbo(fbo1);  -- not a real function
        // Set blending, etc. etc.
        // glRenderFbo(fbo2);  -- not a real function
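
    A sketch of the shape this usually takes (assuming the OES framebuffer extension, as in the question): each FBO gets a texture as its color attachment, and that texture is what gets drawn when the buffers are merged later. tex1 here is an already-created texture, an assumption not in the question:

        /* Sketch: a framebuffer is only a container; it needs a color
           attachment (texture or renderbuffer) before drawing has any effect. */
        glBindFramebufferOES(GL_FRAMEBUFFER_OES, fbo1);
        glFramebufferTexture2DOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES,
                                  GL_TEXTURE_2D, tex1, 0);
        // ... draw some stuff into fbo1 (i.e. into tex1) ...

        /* Later, "rendering the FBO" means drawing a quad textured with tex1
           into the destination framebuffer, with blending set as desired. */
        glBindTexture(GL_TEXTURE_2D, tex1);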

    Read the article

  • Unable to get Stencil Buffer to work in iOS 4+ (5.0 works fine). [OpenGL ES 2.0]

    - by MurderDev
    So I am trying to use a stencil buffer in iOS for masking/clipping purposes. Do you guys have any idea why this code may not work? This is everything I have associated with stencils. On iOS 4 I get a black screen; on iOS 5 I get exactly what I expect: the transparent areas of the image I drew into the stencil are the only areas being drawn later. The code is below.

    This is where I set up the framebuffer, depth and stencil. On iOS the depth and stencil are combined:

        - (void)setupDepthBuffer {
            glGenRenderbuffers(1, &depthRenderBuffer);
            glBindRenderbuffer(GL_RENDERBUFFER, depthRenderBuffer);
            glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8_OES,
                                  self.frame.size.width * [[UIScreen mainScreen] scale],
                                  self.frame.size.height * [[UIScreen mainScreen] scale]);
        }

        - (void)setupFrameBuffer {
            glGenFramebuffers(1, &frameBuffer);
            glBindFramebuffer(GL_FRAMEBUFFER, frameBuffer);
            glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colorRenderBuffer);
            glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRenderBuffer);
            glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_STENCIL_ATTACHMENT, GL_RENDERBUFFER, depthRenderBuffer);

            // Check the FBO.
            if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
                NSLog(@"Failure with framebuffer generation: %d", glCheckFramebufferStatus(GL_FRAMEBUFFER));
            }
        }

    This is how I am setting up and drawing the stencil (shader code below):

        glEnable(GL_STENCIL_TEST);
        glDisable(GL_DEPTH_TEST);
        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
        glDepthMask(GL_FALSE);
        glStencilFunc(GL_ALWAYS, 1, -1);
        glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
        glColorMask(0, 0, 0, 0);
        glClear(GL_STENCIL_BUFFER_BIT);

        machineForeground.shader = [StencilEffect sharedInstance];
        [machineForeground draw];
        machineForeground.shader = [BasicEffect sharedInstance];

        glDisable(GL_STENCIL_TEST);
        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
        glDepthMask(GL_TRUE);

    Here is where I am using the stencil:

        glEnable(GL_STENCIL_TEST);
        glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
        glStencilFunc(GL_EQUAL, 1, -1);
        // ...draw stuff here
        glDisable(GL_STENCIL_TEST);

    Finally, here is my fragment shader:

        varying lowp vec2 TexCoordOut;
        uniform sampler2D Texture;

        void main(void) {
            lowp vec4 color = texture2D(Texture, TexCoordOut);
            if (color.a < 0.1)
                gl_FragColor = color;
            else
                discard;
        }

    Read the article

  • Differences in Cg shader code for OpenGL vs. for DirectX?

    - by Cray
    I have been trying to use an existing library that automatically generates shaders (the Hydrax plugin for Ogre3D). These shaders are used to render water and are somewhat involved, but not extremely complicated. However, there seem to be some differences in how Cg shaders are handled by OpenGL and DirectX; more specifically, I am pretty sure that the author of the library has only debugged all the shaders for DirectX, where they work flawlessly, but not so in OpenGL. There are no compiler errors, but the result just doesn't look the same. (And I have to run the library in OpenGL.) Isn't Cg supposed to be a language that can freely use the exact same code for both platforms? Are there any specific known caveats one should know about when using the same code for both? Are there any fast ways to find which parts of the code work differently? (I am pretty sure that the shaders are the problem. Otherwise Ogre3D has great support for both platforms, and everything is abstracted away nicely. Other shaders work in OpenGL, etc...)

    Read the article

  • OpenGL - Stack overflow if I do, Stack underflow if I don't!

    - by Wayne Werner
    Hi, I'm in a multimedia class in college, and we're "learning" OpenGL as part of the class. I'm trying to figure out how the OpenGL camera vs. modelview works, and so I found this example. I'm trying to port the example to Python using the OpenGL bindings -- it starts up OpenGL much faster, so for testing purposes it's a lot nicer -- but I keep running into a stack overflow error with the glPushMatrix in this code:

        def cube():
            for x in xrange(10):
                glPushMatrix()
                glTranslated(-positionx[x + 1] * 10, 0, -positionz[x + 1] * 10)  # translate the cube
                glutSolidCube(2)  # draw the cube
            glPopMatrix()

    According to this reference, that happens when the matrix stack is full. So I thought, "well, if it's full, let me just pop the matrix off the top of the stack, and there will be room". I modified the code to:

        def cube():
            glPopMatrix()
            for x in xrange(10):
                glPushMatrix()
                glTranslated(-positionx[x + 1] * 10, 0, -positionz[x + 1] * 10)  # translate the cube
                glutSolidCube(2)  # draw the cube
            glPopMatrix()

    And now I get a buffer underflow error -- which apparently happens when the stack has only one matrix. So am I just waaay off base in my understanding? Or is there some way to increase the matrix stack size? Also, if anyone has some good (online) references (examples, etc.) for understanding how the camera/model matrices work together, I would sincerely appreciate them! Thanks!
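
    For reference, the symptom fits an unbalanced push/pop: each glPushMatrix needs a matching glPopMatrix in the same iteration, rather than one pop outside the loop. A minimal balanced sketch in C (an illustration using the poster's array names, not their final code):

        /* Balanced matrix-stack usage: push and pop once per object, so the
           stack depth is the same before and after each iteration. */
        for (int x = 0; x < 10; ++x) {
            glPushMatrix();
            glTranslated(-positionx[x + 1] * 10, 0, -positionz[x + 1] * 10);
            glutSolidCube(2);
            glPopMatrix();
        }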

    Read the article

  • How to tell if OpenGL is really working in Ubuntu 10.04

    - by Jonathan
    I have a Lenovo S9e running Intel integrated graphics. Here is my lspci output related to the graphics:

        00:02.1 Display controller: Intel Corporation Mobile 945GM/GMS/GME, 943/940GML Express Integrated Graphics Controller (rev 03)
                Subsystem: Lenovo Device 3870
                Flags: bus master, fast devsel, latency 0
                Memory at f0580000 (32-bit, non-prefetchable) [size=512K]
                Capabilities: [d0] Power Management version 2

    I want to know how I can make sure OpenGL support is running in full on an Ubuntu 10.04 installation. I have a few hints that make me think it is not:

    • The "Desktop Effects" will not load
    • Apps such as Stardock, when attempting to use OpenGL rendering, will display black boxes instead of transparency
    • In the game Pioneers, the number-tile icons are suspiciously just black circles
    • Windows games running with Wine will only support software rendering, not hardware rendering

    When I boot into a Knoppix live CD, the desktop effects do work, splendidly, meaning Compiz detects my computer as capable. My problem with troubleshooting is that Canonical has basically eliminated the conf-file-based mechanism of X11 as far as I can tell, thus making it even harder to ensure graphics modules are loading properly. How do I debug and test OpenGL on my Ubuntu 10.04 installation?
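
    A common first check (using the standard Mesa utilities, typically packaged as mesa-utils; an assumption, since the poster doesn't mention having them installed) is to ask GLX directly whether direct rendering is active and which renderer is in use:

        $ glxinfo | grep -i "direct rendering"    # expect "direct rendering: Yes"
        $ glxinfo | grep -i "opengl renderer"     # shows which driver is actually in use
        $ glxgears                                # should spin smoothly when accelerated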

    Read the article

  • Background problem with an OpenGL 3D object over the iPhone camera view

    - by user292127
    Hi, I'm loading OpenGL 3D objects over the iPhone camera view. When the OpenGL view is loaded, it loads a 3D object on a black background, and the black background blocks the camera view. I just want to clear the background color of the OpenGL view so that only the 3D object is drawn over the camera view. I have tried glClearColor(1.0, 1.0, 1.0, 0.0); but there was no change to the background color. I have also tried clearing the background color of the OpenGL view using [glView setBackgroundColor:[UIColor clearColor]];, with no change either. Can anyone help me with this? I'm new to OpenGL. Thanks in advance.

    Read the article

  • OpenGL ES 2.0 coordinate systems

    - by Chris
    Hi, I want to use OpenGL ES 2.0 for a new game, but I have two questions.

    Q: How do I set up perspective views in OpenGL ES 2.0? Do I need to include OpenGL ES 1.0 and use glOrtho, or is there a new way?

    Q: I want to use the 4th quadrant of a Cartesian coordinate system for my game and not use -0.5 to +0.5 for values on screen. Once the first question is answered, how can I achieve this?

    Other resources: http://iphonedevelopment.blogspot.com/2009/04/opengl-es-from-ground-up-part-3.html

    Thanks, Chris
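
    For context on the first question: OpenGL ES 2.0 has no glOrtho or glFrustum at all; you build the projection matrix yourself (or with a math library) and upload it as a uniform. A minimal sketch of a column-major orthographic matrix; the program handle and the "uProjection" uniform name are illustrative assumptions:

        /* Build a column-major orthographic projection matrix, then hand it
           to the vertex shader as a uniform (ES 2.0 has no matrix stack). */
        void ortho(float m[16], float l, float r, float b, float t, float n, float f) {
            for (int i = 0; i < 16; ++i) m[i] = 0.0f;
            m[0]  = 2.0f / (r - l);
            m[5]  = 2.0f / (t - b);
            m[10] = -2.0f / (f - n);
            m[12] = -(r + l) / (r - l);
            m[13] = -(t + b) / (t - b);
            m[14] = -(f + n) / (f - n);
            m[15] = 1.0f;
        }

        float proj[16];
        ortho(proj, 0.0f, 320.0f, -480.0f, 0.0f, -1.0f, 1.0f);  /* x in [0,320], y in [-480,0]: quadrant IV */
        glUniformMatrix4fv(glGetUniformLocation(program, "uProjection"), 1, GL_FALSE, proj);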

    Read the article

  • Fullscreen texture iPhone OpenGL ES

    - by Ben Reeves
    I'm aware that OpenGL textures on the iPhone are required to have power-of-two dimensions. Is this true of OpenGL ES 2.0 as well? If I have an image that is 320 x 480 in size and want to draw it fullscreen, is there any possible way to do this with OpenGL? Thanks
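
    One standard workaround, sketched here with the padding sizes that follow from 320x480 (note that ES 2.0 itself does allow non-power-of-two textures as long as they use no mipmaps and CLAMP_TO_EDGE wrapping): allocate the next power-of-two texture, upload the image into its corner, and draw with texture coordinates clipped to the used region. tex and pixels are assumed to exist already:

        /* Pad a 320x480 image into a 512x512 power-of-two texture. */
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 512, 512, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL);           /* allocate only   */
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 320, 480,
                        GL_RGBA, GL_UNSIGNED_BYTE, pixels);      /* fill used part  */

        /* Sample only the filled region when drawing the fullscreen quad. */
        const GLfloat texCoords[] = {
            0.0f,            0.0f,
            320.0f / 512.0f, 0.0f,
            0.0f,            480.0f / 512.0f,
            320.0f / 512.0f, 480.0f / 512.0f,
        };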

    Read the article

  • OpenGL programming vs Blender Software, which is better for custom video creation?

    - by iammilind
    I am learning the OpenGL API bit by bit and also developing my own C++ framework library to use it effectively. Recently I came across the Blender software, which is used for graphics creation and is in turn written with OpenGL itself. For my part-time hobby of graphics learning, I want to create short movie or video segments, e.g. related to construction engineering, epic stories and so on. There may be very minimal to no mouse/keyboard interaction in those videos, unlike video games, which are highly interactive. I was wondering if learning OpenGL from scratch is worth it for this, or should I invest my time in learning the Blender software? There are quite a few good examples of movies created using Blender shown on its website. Other such open-source cross-platform alternatives are also welcome, if they can serve my aforementioned purpose.

    Read the article

  • How to fix an OpenGL error for VMware Workstation 9?

    - by Xxx Xxx
    I recently installed VMware Workstation 9 on Ubuntu 12.04 and migrated my VMs from Windows to Ubuntu 12.04. Now I am getting an OpenGL error saying there is no 3D acceleration, as shown in the picture below. How do I fix it?

    UPDATE: Before, I had Windows 7 as the host OS and it was working fine, with 3D acceleration and no OpenGL error. Now my host OS is Ubuntu 12.04 and I get the OpenGL error. The laptop is a Dell Inspiron N5110.
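
    One setting often suggested for this (an assumption about the poster's setup; it only helps if the host's own OpenGL drivers are working first, which the glxinfo checks in the earlier question can confirm) is to explicitly enable 3D in the virtual machine's .vmx configuration file:

        # In the VM's .vmx file, with the VM powered off:
        mks.enable3d = "TRUE"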

    Read the article

  • What should I worry about when changing OpenGL origin to upper left of screen?

    - by derivative
    For self-education, I'm writing a 2D platformer engine in C++ using SDL / OpenGL. I initially began with pure SDL using the tutorials on sdltutorials.com and lazyfoo.net, but I'm now rendering in an OpenGL context (specifically immediate mode, but I'm learning about VAOs/VBOs) and using SDL for interface, audio, etc. SDL uses a coordinate system with the origin in the upper left of the screen and the positive y-axis pointing down. It's easy to set up my orthographic projection in OpenGL to mirror this. I know that texture coordinates form a right-handed system with values from 0 to 1 -- flipping the texture vertically before rendering (well, flipping the file before loading) yields textures that render correctly... which is fine if I'm drawing the entire texture, but ultimately I'll be using tilesets and can imagine problems. What should I be concerned about in terms of rendering when I do this? If anybody has any advice, or has done this themselves and can point out future pitfalls, that would be great, but really any thoughts would be appreciated.
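
    For concreteness, the mirrored projection amounts to swapping the bottom/top arguments of the orthographic call (legacy immediate-mode style, matching the engine described; the window size names are illustrative). Tileset sub-rectangles then only need their v coordinates expressed consistently with however the image rows were loaded:

        /* Top-left origin, y growing downward, matching SDL's convention. */
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrtho(0.0, windowWidth, windowHeight, 0.0, -1.0, 1.0);  /* note: bottom=height, top=0 */
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();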

    Read the article

  • Are there any OpenGL ES 2.0 examples for JOGL?

    - by fjdutoit
    I've scoured the internet for the last few hours looking for an example of how to run even the most basic OpenGL ES 2 example using JOGL, but "by Jupiter!" it has been a total fail. I tried converting the Android example from the OpenGL ES 2.0 Programming Guide examples (while also looking at the WebGL example, which worked fine), yet without any success. Are there any examples out there? If anyone else wants extra help regarding this question, see this thread on the official JogAmp forum.

    Read the article

< Previous Page | 11 12 13 14 15 16 17 18 19 20 21 22  | Next Page >