Search Results

Search found 3627 results on 146 pages for 'opengl es 1 1'.


  • Passing Boost uBLAS matrices to OpenGL shader

    - by AJM
    I'm writing an OpenGL program where I compute my own matrices and pass them to shaders. I want to use Boost's uBLAS library for the matrices, but I have little idea how to get a uBLAS matrix into OpenGL's shader uniform functions.

        matrix<GLfloat, column_major> projection(4, 4);
        // Fill matrix ...
        GLuint projectionU = glGetUniformLocation(shaderProgram, "projection");
        glUniformMatrix4fv(projectionU, 1, 0, (GLfloat *)... Um ...);

    Trying to cast the matrix to a GLfloat pointer causes an invalid cast error on compile.
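
    One approach that should work when the matrix uses uBLAS's default dense storage (a sketch, untested against this exact setup): go through data(), which exposes the underlying contiguous array, instead of casting the matrix object itself.

        #include <boost/numeric/ublas/matrix.hpp>
        using boost::numeric::ublas::matrix;
        using boost::numeric::ublas::column_major;

        matrix<GLfloat, column_major> projection(4, 4);
        // ... fill matrix ...
        GLuint projectionU = glGetUniformLocation(shaderProgram, "projection");
        // data() returns the dense backing store (unbounded_array by default),
        // so &data()[0] is a contiguous GLfloat pointer. The layout is already
        // column-major, hence transpose = GL_FALSE.
        glUniformMatrix4fv(projectionU, 1, GL_FALSE, &projection.data()[0]);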

    Read the article

  • iPhone status bar orientation with OpenGL

    - by sfider
    I have an OpenGL-only view that displays in portrait and landscape mode using the projection matrix (the view's transformation is the identity all the time). I need to show the status bar with the proper orientation. I do this by setting the status bar orientation property on UIApplication and changing the frame of the OpenGL view so the view won't go under the status bar. When I change from landscape to portrait (landscape is the initial state), the view's frame is set to (0, 20, 320, 460) and stays like this; however, the view appears translated by (-10, -10). It seems that I did change the size of the view but couldn't move it. The weird things are: initially the view is full screen; I change it to (0, 0, 300, 480) (landscape with status bar) and it works; then it doesn't work when I try to change it for the second time (portrait with status bar); and the frame property of the view shows that the view is placed correctly. Any thoughts on what the problem could be?

    Read the article

  • Include Problem with Objective-C++ and OpenGL

    - by Stephen Furlani
    Hello, I feel silly asking this, but I've searched for 'include problems' and have only come up with basic stuff. I'm working with an API that does its includes/imports in its header files (ARGH! HATE ANGER DESTRUCTION). One of these Obj-C headers does #import "OpenGL/CGLMacros.h", which #define's things like glMatrixMode(...). In my code I need the glMatrixMode(...) from #include "OpenGL/gl.h", but the macro hides it! I can't edit the headers of the (poorly) coded API to move the includes into their definition files. What can I do? CGLMacros.h starts out like this:

        /* Copyright: (c) 1999 by Apple Computer, Inc., all rights reserved. */
        #ifndef _CGLMACRO_H
        #define _CGLMACRO_H

    Can I put a #define _CGLMACRO_H before I include the offending API header file? -Stephen
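
    The guard trick the poster proposes should work in principle, since the #ifndef then skips the macro definitions for that translation unit; an alternative is to let the macros in and strip the colliding ones afterwards. A sketch of both (the API header name is made up):

        // Option 1: pre-define the include guard so CGLMacros.h expands to nothing.
        #define _CGLMACRO_H
        #import "BadlyBehavedAPI.h"   // hypothetical API header
        #include <OpenGL/gl.h>        // plain gl.h entry points, no CGL macros

        // Option 2: include as usual, then undo the macros that collide.
        #import "BadlyBehavedAPI.h"
        #undef glMatrixMode           // back to the gl.h function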

    Read the article

  • Texture allocations being doubled in iPhone OpenGL ES

    - by Kyle
    The couple of lines below are called 15 times during initialization. tx_size is reported as 512 every time, so this allocates a 1 MB image in memory 15 times, for a total of 15 MB used. However, I noticed Instruments is reporting a total of 31 allocations: (15*2)+1.

        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, tx_size, tx_size, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, spriteData);
        free(spriteData);

    Likewise, in another area of my program that allocates six 256x256x4 (256 kB) textures, I see 13 sitting there: (6*2)+1. Anyone know what's going on here? It seems like awful memory management, and I really hope it's my fault. Just to let everyone know, I'm on the simulator.

    Read the article

  • iPhone+Quartz+OpenGL. What is the correct way for Quartz and OpenGL to play nice together regarding premultiplied alpha?

    - by dugla
    So we know the CoreGraphics/Quartz imaging model is based on premultiplied alpha. We also know that OpenGL blending is based on un-premultiplied alpha. What is the best practice to avoid head explosion when blending with textures derived from premultiplied-alpha imagery (PNG files generated in Photoshop with premultiplied alpha)? Given the apples/oranges mishmash of Quartz and OpenGL, what is the correct glBlendFunc for the fundamental Porter/Duff "over" operation? Typical example: a simple paint program. Brush shapes are texture-map patterns created from premultiplied-alpha RGBA images. Paint color is specified via glColor4(...), with the alpha channel used to control paint transparency. GL_MODULATE is used so the brush texture multiplies the (translucent) paint color to blend the color into the canvas. Problem: the texture is premultiplied, the color is not. What is the correct way to handle this fundamental inconsistency? Thanks, Doug
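
    For premultiplied sources, the Porter/Duff "over" blend is (GL_ONE, GL_ONE_MINUS_SRC_ALPHA). One consistent way to resolve the texture/color mismatch (a sketch; r, g, b, a stand for the intended paint color) is to premultiply the color by hand, so everything entering the blender uses the same convention:

        // Source factor is GL_ONE because the source RGB already carries its alpha.
        glEnable(GL_BLEND);
        glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);

        // Premultiply the paint color before GL_MODULATE combines it with the
        // (already premultiplied) brush texture; the componentwise product of
        // two premultiplied colors is itself premultiplied.
        glColor4f(r * a, g * a, b * a, a);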

    Read the article

  • OpenGL extensions available on different Android devices

    - by MH114
    I'm in the process of writing an OpenGL ES powered framework for my next Android game(s). Currently I'm supporting three different techniques for drawing sprites:

        1. the basic way: using vertex arrays (slow)
        2. using vertex buffer objects (VBOs) (faster)
        3. using the draw_texture extension (fastest, but only for basic sprites, i.e. no transforming)

    Vertex arrays are supported in OpenGL ES 1.0 and thus on every Android device. I'm guessing most (if not all) of the current devices also support VBOs and draw_texture. Instead of guessing, I'd like to know which extensions different devices actually support. If the majority of devices support VBOs, I could simplify my code and focus only on VBOs + draw_texture. It'd be helpful to know what different devices support, so if you have an Android device, please report its extensions list. :)

        String extensions = gl.glGetString(GL10.GL_EXTENSIONS);

    I've got an HTC Hero, so I can share those extensions next.
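
    For what it's worth, the same query from native code is just a substring test on the extension string; a minimal sketch (and note that VBOs are core in OpenGL ES 1.1, so 1.1 devices need not advertise a separate VBO extension):

        #include <GLES/gl.h>
        #include <cstring>

        // True if the current context advertises the draw_texture extension.
        bool hasDrawTexture() {
            // The extension string is a space-separated list of tokens.
            const char* exts = (const char*) glGetString(GL_EXTENSIONS);
            return exts && std::strstr(exts, "GL_OES_draw_texture") != 0;
        }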

    Read the article

  • Problem loading texture with transparency with OpenGL ES and Android

    - by Evan Kimia
    I'm trying to load an image with background transparency that will be layered over another texture. When I try to load it, all I get is a white screen. The texture is 512 by 512, and it's saved in Photoshop as a 24-bit PNG (standard PNG specs in the Photoshop Save for Web and Devices config window). Any idea why it's not showing? The texture without transparency shows without a problem. Here is my loadTextures method:

        public void loadGLTexture(GL10 gl, Context context) {
            // Get the textures from the Android resource directory
            Bitmap bitmap = BitmapFactory.decodeResource(context.getResources(), R.drawable.m1);
            Bitmap normalScheduleLines = BitmapFactory.decodeResource(context.getResources(), R.drawable.m1n);

            // Generate texture pointers...
            gl.glGenTextures(3, textures, 0);

            // ...and bind the first one to our array
            gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[1]);
            // Create nearest-filtered texture
            gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR_MIPMAP_NEAREST);
            gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
            gl.glTexParameterf(GL11.GL_TEXTURE_2D, GL11.GL_GENERATE_MIPMAP, GL11.GL_TRUE);
            gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_CLAMP_TO_EDGE);
            gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_CLAMP_TO_EDGE);
            GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);
            bitmap.recycle();

            // Bind our normal-schedule bus map lines
            gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[0]);
            // Create nearest-filtered texture
            gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR_MIPMAP_NEAREST);
            gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
            gl.glTexParameterf(GL11.GL_TEXTURE_2D, GL11.GL_GENERATE_MIPMAP, GL11.GL_TRUE);
            gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_CLAMP_TO_EDGE);
            gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_CLAMP_TO_EDGE);
            GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, GL10.GL_RGBA, normalScheduleLines, 0);
            normalScheduleLines.recycle();
        }
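
    Two routine first checks when an alpha texture comes out blank (hedged guesses, not necessarily this poster's bug): confirm the PNG actually kept its alpha channel on export, and confirm blending is enabled when the textured quad is drawn. The GL-side check, shown as the underlying GL ES calls (gl.glEnable(GL10.GL_BLEND) etc. in the Java binding):

        // Without blending enabled, an RGBA texture is drawn fully opaque;
        // with a broken alpha channel it can come out as a solid block.
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);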

    Read the article

  • Android: Deciding between SurfaceView and OpenGL (GLSurfaceView)

    - by Rich
    Is there a way to decide up front based on the expected complexity of a game/app in the planning phase whether to use regular Canvas drawing in a SurfaceView or to go with OpenGL? I've been playing around with a Canvas and only need 2D movement, and on a fairly new phone I'm getting pretty decent performance with a bunch of primitive objects and a few bitmaps running around the screen on a solid background. Is it fair to say that if I'm going to be drawing background images and increasing the number of objects being moved and drawn on top of them that I should go straight to OpenGL?

    Read the article

  • Screen capture of MDI app with OpenGL graphics using MFC

    - by NPVN
    In our MDI application, which is written in MFC, we have a function to save a screenshot of the MDI client area to file. We are currently doing a BitBlt from the screen into a bitmap, which is then saved. The problem is that some of the MDI child windows have their content rendered by OpenGL, and in the destination bitmap these areas show up as blank or garbled. I have considered some alternatives:

        - Extract the OpenGL content directly (using glReadPixels), and draw this to the relevant portions of the screen bitmap.
        - Simulate an ALT+PrtScr, since doing this manually seems to get the content just fine. This will trash the clipboard content, though.
        - Try working with the DWM. Apart from Vista and Win7, this also needs to work on Win2000 and XP, so this probably isn't the way to go.

    Any input will be appreciated!
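
    The first alternative can be fairly compact; a sketch of reading back a GL child view's client area (context selection and the DIB plumbing omitted; width and height are the view's client size):

        #include <vector>
        // Read back as BGRA, which matches GDI's 32-bit DIB layout
        // (GL_BGRA_EXT is from EXT_bgra, ubiquitous on desktop GL since 1.2).
        std::vector<unsigned char> pixels(4 * width * height);
        glPixelStorei(GL_PACK_ALIGNMENT, 1);
        glReadPixels(0, 0, width, height, GL_BGRA_EXT, GL_UNSIGNED_BYTE, pixels.data());
        // Rows arrive bottom-up; a bottom-up DIB (positive biHeight) takes
        // them as-is when copying into the screen bitmap.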

    Read the article

  • Create a bubble effect on a grid in OpenGL ES

    - by Charles Michael
    Hi there. I have created a grid of 40 x 40 Vertex3D (small but useful). I can pick a single vertex out of that grid simply by calling a function with the position array[X][Y], and therefore its neighbors too. How can I raise the neighbors' Z values so they look like a bubble or sphere? My first thought was to use:

        Neighbor_vertex.Z = sin(PI/4 * 1 - (1 / distance_between_Neighbor_and_Pivot)) * desired_Max_Height

    But all I got is something like a wave, and I would like a bubble- or sphere-like shape. THX dudes and dudettes
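
    A hemisphere profile gives the sphere-like bump directly (a sketch; the radius and height parameters are made up): for a neighbor at planar distance d from the picked vertex, take z from the circle equation instead of a sine.

        #include <cmath>

        // Height of a sphere cap: z^2 + d^2 = r^2 along the bump's cross-section,
        // scaled so the peak reaches maxHeight at the picked vertex (d = 0).
        float bubbleHeight(float d, float r, float maxHeight) {
            if (d >= r) return 0.0f;    // outside the bubble: stay flat
            return maxHeight * std::sqrt(1.0f - (d / r) * (d / r));
        }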

    Read the article

  • Frame skipping with OpenGL and WinAPI?

    - by user146780
    Here is my situation. I'm creating a drawing application using OpenGL and WinAPI. My OpenGL frame has scrollbars; it re-renders the scene and modifies the glTranslatef offset whenever it gets a scroll message. The problem is that when there are too many shapes, the scrollbar becomes less responsive, since the scene can't be re-rendered for each and every scroll message. How could I give the scrollbar priority? I want to skip drawing whenever it would compromise the smoothness of the scrolling. I thought of doing the rendering on a separate thread, but I was told all UI things should stay on the same thread. Thanks
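
    One common single-threaded pattern (a sketch; the helper names are illustrative): never render inside the scroll handler, just record the new offset and invalidate, so Windows coalesces a burst of scroll messages into a single WM_PAINT.

        // Inside the view's window procedure:
        case WM_VSCROLL:
            UpdateScrollOffset(hWnd, wParam);   // hypothetical helper: bookkeeping only
            InvalidateRect(hWnd, NULL, FALSE);  // many scrolls fold into one paint
            return 0;

        case WM_PAINT: {
            PAINTSTRUCT ps;
            BeginPaint(hWnd, &ps);
            RenderScene();                      // hypothetical: the expensive OpenGL redraw
            EndPaint(hWnd, &ps);
            return 0;
        }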

    Read the article

  • Rendering a splash screen on the iPhone using OpenGL ES

    - by Rich
    Hi, I want to render a splash screen on the iPhone whilst using an OpenGL view. The iPhone screen, as we know, is 320x480, which is not a power of 2. Before I enter the world of chopping the texture up and rendering sub-parts, or embedding the screen image on a larger texture page, I was wondering if there is another way. Is it possible to overlay another view that I could render to using CoreGraphics functions? Or is it possible to render to an OpenGL surface using Core Graphics functions? What would you recommend? Cheers, Rich
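
    The texture-page route is usually less painful than it sounds; a sketch, assuming the 320x480 image sits in one corner of a 512x512 texture (which corner, and the v direction, depend on how the bitmap was uploaded):

        // Texture coordinates select just the 320x480 region of the 512x512
        // page; the quad itself is drawn full screen.
        static const GLfloat splashTexCoords[] = {
            0.0f,            480.0f / 512.0f,   // bottom left
            320.0f / 512.0f, 480.0f / 512.0f,   // bottom right
            0.0f,            0.0f,              // top left
            320.0f / 512.0f, 0.0f,              // top right
        };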

    Read the article

  • Xcode screensaver with OpenGL

    - by moka
    Hi, I am currently trying to build a simple screen saver in Xcode 3.2 on OS X 10.6.3 using an OpenGL view, as described in this article: http://cocoadevcentral.com/articles/000089.php Anyway, even if I use the exact same code from the example, all I see when testing the screen saver is a black screen. I looked in the OS X Console to see if it tells me anything useful; the only thing I get is something like this:

        [0x0-0x1e01e].com.apple.systempreferences[629] System Preferences(629,0x7fff71071be0) malloc: reference count underflow for 0x20057be80, break on auto_refcount_underflow_error to debug.
        System Preferences[629] invalid context

    I have no idea what is wrong, so I would be glad if someone could tell me how to use OpenGL together with the screensaver template in Xcode 3.2. Also, is there a way to make another target so I can preview the screensaver from within Xcode? Thanks!

    Read the article

  • OpenGL ES: render but don't show on display

    - by Sponge
    I have written an object selection algorithm which picks objects by their color: I give every object a unique color and then just use glReadPixels to check which object was selected. This works fine and is really fast, but the problem is that the frame with all the picking colors is displayed on the screen, so the screen flashes every time you select something. So my question is: how do I render everything into the correct buffer without showing it on the display, to avoid these flashes?
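
    The classic fix (a sketch; the two draw helpers and the EGL handles are illustrative names): do the color-coded pass into the back buffer, read the pixel, then clear and draw the real frame before swapping, so the picking colors are never presented.

        drawSceneWithPickingColors();          // hypothetical: unique color per object
        GLubyte picked[4];
        glReadPixels(x, y, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, picked);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        drawSceneNormally();                   // hypothetical: the visible frame
        eglSwapBuffers(display, surface);      // only the real frame reaches the screen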

    Read the article

  • OpenGL - lighting of vertices outside clip range

    - by hmp
    I have a problem with lighting in my OpenGL application. When one of the vertices of a drawn polygon goes outside the front clip plane (or gets z < 0, I'm not sure which), the polygon stops being lit properly. However, this happens on only one machine I tested, one with an Intel GMA950 card; on nVidia and ATI cards everything looks fine. I guess I am breaking some OpenGL rule here? How should I deal with it? I'd try dividing the scene into smaller polygons, but I'm not sure that guarantees the case is eliminated (unless every polygon stepping outside the clipping range is entirely offscreen).

    Read the article

  • Modifying an image with OpenGL?

    - by chmike
    I have a device that acquires X-ray images. Due to some technical constraints, the detector is made of tiles with heterogeneous pixel sizes, multiple of them tilted and partially overlapping, so the image is distorted. The detector geometry is known precisely. I need a function converting these distorted images into a flat image with a homogeneous pixel size. I have already done this on the CPU, but I would like to give OpenGL a try, to use the GPU in a portable way. I have no experience with OpenGL programming, and most of the information I could find on the web was useless for this use case. How should I proceed? How do I do this? Images are 560x860 pixels, and we have batches of 720 images to process. I'm on Ubuntu.
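
    The usual GPU formulation of this kind of resampling (a sketch of the idea, not this detector's actual geometry): upload the raw frame as a texture, then draw a mesh over the output image whose texture coordinates encode the known distortion; the rasterizer's interpolation does the per-pixel lookup.

        // One vertex per calibration point: positions are corrected (output)
        // coordinates, texcoords point back into the distorted (input) frame.
        struct WarpVertex {
            GLfloat x, y;   // where this sample lands in the flat image
            GLfloat u, v;   // where it comes from on the raw detector
        };
        // Fill one WarpVertex per tile corner (or denser) from the calibration
        // data, then draw the mesh with the raw frame bound as a texture.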

    Read the article

  • Drawing with element array in OpenGL ES

    - by FatalMojo
    Hello! I am trying to use OpenGL ES to draw an x by y matrix of squares about an arbitrary point. I have an array sideVertice[] that holds a series of vertex structs defined as such:

        typedef struct {
            GLfloat x;
            GLfloat y;
            GLfloat z;
        } Vertex3D;

    and an element array defined as such:

        GLubyte elementArray[];

    My draw loop is as such:

        glLoadIdentity();
        glVertexPointer(3, GL_FLOAT, 0, cube.sideVertice);
        for (int i = 0; i < ((cube.cubeSize + 1) * (cube.cubeSize + 1)); i++) {
            for (int j = 0; j <= 3; j++) {
                elementArray[j] = j + i * 4;
                glDrawElements(GL_TRIANGLE_STRIP, 4, GL_UNSIGNED_BYTE, elementArray);
            }
        }
        for (int i = 0; i <= 3; i++)
            elementArray[i] = i;

    However, the visual output is corrupted and I cannot figure out what the problem is. Here is an output of the vertices held in the array:

        2010-04-15 23:44:48.816 RubixGL[4203:20b] vertex[0][0] x:-26.000000 y:1.000000
        2010-04-15 23:44:48.817 RubixGL[4203:20b] vertex[1][1] x:-26.000000 y:26.000000
        2010-04-15 23:44:48.826 RubixGL[4203:20b] vertex[2][2] x:-1.000000 y:1.000000
        2010-04-15 23:44:48.829 RubixGL[4203:20b] vertex[3][3] x:-1.000000 y:26.000000
        2010-04-15 23:44:48.830 RubixGL[4203:20b] Next Face
        2010-04-15 23:44:48.830 RubixGL[4203:20b] vertex[0][4] x:1.000000 y:1.000000
        2010-04-15 23:44:48.832 RubixGL[4203:20b] vertex[1][5] x:1.000000 y:26.000000
        2010-04-15 23:44:48.837 RubixGL[4203:20b] vertex[2][6] x:26.000000 y:1.000000
        2010-04-15 23:44:48.838 RubixGL[4203:20b] vertex[3][7] x:26.000000 y:26.000000
        2010-04-15 23:44:48.848 RubixGL[4203:20b] Next Face
        2010-04-15 23:44:48.849 RubixGL[4203:20b] vertex[0][8] x:-26.000000 y:-26.000000
        2010-04-15 23:44:48.850 RubixGL[4203:20b] vertex[1][9] x:-26.000000 y:-1.000000
        2010-04-15 23:44:48.851 RubixGL[4203:20b] vertex[2][10] x:-1.000000 y:-26.000000
        2010-04-15 23:44:48.852 RubixGL[4203:20b] vertex[3][11] x:-1.000000 y:-1.000000
        2010-04-15 23:44:48.853 RubixGL[4203:20b] Next Face
        2010-04-15 23:44:48.853 RubixGL[4203:20b] vertex[0][12] x:1.000000 y:-26.000000
        2010-04-15 23:44:48.854 RubixGL[4203:20b] vertex[1][13] x:1.000000 y:-1.000000
        2010-04-15 23:44:48.854 RubixGL[4203:20b] vertex[2][14] x:26.000000 y:-26.000000
        2010-04-15 23:44:48.855 RubixGL[4203:20b] vertex[3][15] x:26.000000 y:-1.000000

    Any ideas?
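
    One likely culprit is visible in the pasted loop (hedged, since the rest of the code isn't shown): glDrawElements sits inside the inner j loop, so each quad is drawn four times, three of them with a partially filled index array. A sketch with the indices completed first and one draw per quad:

        for (int i = 0; i < quadCount; i++) {      // quadCount = (cubeSize+1)^2
            GLubyte idx[4];
            for (int j = 0; j < 4; j++)
                idx[j] = (GLubyte)(i * 4 + j);     // four strip indices per quad
            glDrawElements(GL_TRIANGLE_STRIP, 4, GL_UNSIGNED_BYTE, idx);
        }

    Note also that GL_UNSIGNED_BYTE caps indices at 255, so anything past 64 quads needs GL_UNSIGNED_SHORT indices.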

    Read the article

  • iPhone OpenGL Template is cheating?

    - by carrots
    Xcode's OpenGL template seems to be cheating to solve the "stretched" viewport problem I've been trying to understand for the last 3 hours. In the iPhone "OpenGL ES Application" template, the colorful square that bounces up and down on the screen is not really a square at all! From ES1Renderer.m (the ES2 file as well):

        static const GLfloat squareVertices[] = {
            -0.5f, -0.33f,
             0.5f, -0.33f,
            -0.5f,  0.33f,
             0.5f,  0.33f,
        };

    But it comes out looking square on the device/simulator due to the stretching/squashing effect of a non-square viewport. I tried to fix it by fiddling with glFrustumf(), but that doesn't seem to change the aspect ratio. I was able to get things looking good (not stretched) when I fed glViewport() a 1:1 width:height, but that doesn't seem like the answer, because it offsets the viewport placement. What's the right way to correct for this stretching, and why doesn't Xcode do it that way?
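
    The template bakes the screen's 2:3 ratio into the vertex data (0.33 is roughly 0.5 * 320/480); the usual correction (a sketch for ES 1.x) puts the aspect ratio into the projection instead, leaving glViewport at the full screen:

        // Portrait iPhone: width/height = 320/480 = 2/3.
        const GLfloat aspect = 320.0f / 480.0f;
        glViewport(0, 0, 320, 480);
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        // x spans [-1, 1]; y spans a proportionally larger range, so a unit
        // square in model space stays square on screen.
        glOrthof(-1.0f, 1.0f, -1.0f / aspect, 1.0f / aspect, -1.0f, 1.0f);
        glMatrixMode(GL_MODELVIEW);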

    Read the article

  • Python OpenGL Can't Redraw Scene

    - by RobbR
    I'm getting started with OpenGL and shaders using GLUT and PyOpenGL. I can draw a basic scene, but for some reason I can't get it to update: any changes I make during idle(), display(), or reshape() are not reflected. Here are the methods:

        def display(self):
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
            glMatrixMode(GL_MODELVIEW)
            glLoadIdentity()
            glUseProgram(self.shader_program)
            self.m_vbo.bind()
            glEnableClientState(GL_VERTEX_ARRAY)
            glVertexPointerf(self.m_vbo)
            glDrawArrays(GL_TRIANGLES, 0, len(self.m_vbo))
            glutSwapBuffers()
            glutReportErrors()

        def idle(self):
            test_change += .1
            self.m_vbo = vbo.VBO(
                array([
                    [ test_change, 1, 0 ],  # triangle
                    [ -1, -1, 0 ],
                    [  1, -1, 0 ],
                    [  2, -1, 0 ],          # square
                    [  4, -1, 0 ],
                    [  4,  1, 0 ],
                    [  2, -1, 0 ],
                    [  4,  1, 0 ],
                    [  2,  1, 0 ],
                ], 'f')
            )
            glutPostRedisplay()

        def begin(self):
            glutInit()
            glutInitWindowSize(400, 400)
            glutCreateWindow("Simple OpenGL")
            glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB)
            glutDisplayFunc(self.display)
            glutReshapeFunc(self.reshape)
            glutMouseFunc(self.mouse)
            glutMotionFunc(self.motion)
            glutIdleFunc(self.idle)
            self.define_shaders()
            glutMainLoop()

    I'd like to implement a time step in idle(), but even basic changes to the vertices or translations and rotations on the MODELVIEW matrix don't display. It just puts up the initial state and does not update. Am I missing something?

    Read the article

  • Troubleshooting OpenGL text textures not showing up correctly cross-platform

    - by Michael Minerva
    I have been tasked with solving a problem that is outside of my domain of knowledge, and I was hoping to get some troubleshooting advice from someone more experienced with OpenGL (I have very little experience with it). We are working on a cross-platform application implemented in a Common Lisp implementation called CCL. In this application we need to display some 3D objects that display text. On the Mac, all of the text displays fine, but on the PC, instead of the text it displays some other texture. At first I thought the wrong texture was just being referenced, so I tried changing the texture number, but none of the textures in the list appeared to be the text (or if one was, the texture was distorted and did not look like text). I know this problem is very vague and I am not looking for someone to post a solution, but I was wondering if people could suggest places I might look to get a handle on this issue.

    Read the article

  • OpenGL game written in C with a Cocoa front-end I want to port to Windows

    - by Philip
    Hello, I'm wondering if someone could offer me some tips on how to go about this. I have a Mac OS X OpenGL game that is written in very portable C, with the exception of the non-gameplay GUI. So in Cocoa I set up the window and OpenGL context, manage preferences, registration, listen for keystrokes, etc., but all of the drawing and processing of input is handled in nice portable C. So I want to port to Windows. I figured the obvious way to go about it was to use the Win32 API. Then I started to read a primer on Win32 and began to wonder if maybe life isn't too short. Can I do this in C# (without converting the backend to C#)? I'd rather devote the time to learning C# than Win32. Any suggestions would be most welcome. I really don't know a lick about Windows; the last version I regularly used was 3.1...

    Read the article

  • I just don't get why there is a glMatrixMode in OpenGL

    - by René Nyffenegger
    I just don't understand what OpenGL's glMatrixMode is for. As far as I can see, when glMatrixMode(GL_MODELVIEW) is called, it is followed by glVertex, glTranslate, glRotate and the like, that is, OpenGL commands that place objects somewhere in space. On the other hand, if glOrtho, glFrustum or gluPerspective is called (i.e., commands that determine how the placed objects are rendered), it is preceded by a call to glMatrixMode(GL_PROJECTION). I guess what I have written so far is an assumption someone will prove wrong, but isn't the point of using different matrix modes exactly that there are different kinds of gl functions: those concerned with placing objects, and those concerned with how the objects are rendered? If someone could shed some light on this issue, I'd certainly appreciate it.
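
    For reference, the idiom the question describes (a sketch; the frustum values are illustrative): each mode selects which of the two matrix stacks the subsequent matrix calls modify.

        // Projection stack: how eye-space geometry maps to the screen.
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glFrustum(-1.0, 1.0, -0.75, 0.75, 1.0, 100.0);  // illustrative values

        // Modelview stack: where objects (and, effectively, the camera) sit.
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        glTranslatef(0.0f, 0.0f, -5.0f);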

    Read the article

  • How to properly use glDiscardFramebufferEXT

    - by Rafael Spring
    This question relates to the OpenGL ES 2.0 extension EXT_discard_framebuffer. It is unclear to me which cases justify the use of this extension. If I call glDiscardFramebufferEXT() and it puts the specified attachable images in an undefined state, this means that either:

        - I don't care about the content anymore, since it has already been used with glReadPixels(), or
        - I don't care about the content anymore, since it has already been used with glCopyTexSubImage(), or
        - I shouldn't have made the render in the first place.

    Clearly, only the first two cases make sense. Or are there other cases in which glDiscardFramebufferEXT() is useful? If yes, which are they?
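
    There is a third case the extension was largely written for on tile-based GPUs: an attachment whose contents never need to survive the frame at all, the typical example being the depth buffer once drawing is done. Discarding it lets the driver skip writing that tile memory back out. A sketch of the pattern:

        // After the frame's last draw call: color will still be presented,
        // but depth is throwaway, so don't pay to store it.
        const GLenum discards[] = { GL_DEPTH_ATTACHMENT };
        glDiscardFramebufferEXT(GL_FRAMEBUFFER, 1, discards);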

    Read the article

  • glFramebufferTexture2D performance

    - by nornagon
    I'm doing heavy computation using the GPU, which involves a lot of render-to-texture operations. It's an iterative computation, so there's a lot of rendering to a texture, then rendering that texture to another texture, then rendering the second texture back to the first texture and so on, passing the texture through a shader each time. My question is: is it better to have a separate FBO for each texture I want to render into, or should I rather have one FBO and bind the target texture using glFramebufferTexture2D each time I want to change render target? My platform is OpenGL ES 2.0 on the iPhone.
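
    Whichever option wins on a given driver, the ping-pong loop itself looks the same; a sketch of the two-FBO variant, with each texture attached once at init so the per-pass cost is a single bind (drawFullScreenQuad and numPasses are illustrative names):

        GLuint fbo[2], tex[2];
        // init (once): glGenFramebuffers / glGenTextures, then attach
        // tex[i] to fbo[i] via glFramebufferTexture2D.
        int src = 0, dst = 1;
        for (int pass = 0; pass < numPasses; ++pass) {
            glBindFramebuffer(GL_FRAMEBUFFER, fbo[dst]);   // render target
            glBindTexture(GL_TEXTURE_2D, tex[src]);        // input of this pass
            drawFullScreenQuad();                          // hypothetical helper
            int tmp = src; src = dst; dst = tmp;           // swap roles
        }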

    Read the article
