Search Results

Search found 3627 results on 146 pages for 'opengl es 2 0'.

Page 30/146 | < Previous Page | 26 27 28 29 30 31 32 33 34 35 36 37  | Next Page >

  • OpenGL: Problem with masking

    - by Shaza
    Hey all, I'm working on creating a hole in a wall using masking in OpenGL. My code is quite simple, like this:

        // Draw the mask
        glEnable(GL_BLEND);
        glBlendFunc(GL_DST_COLOR, GL_ZERO);
        glBindTexture(GL_TEXTURE_2D, texture[3]);
        glBegin(GL_QUADS);
        glTexCoord2d(0, 0); glVertex3f(-20, 40, -20);
        glTexCoord2d(0, 1); glVertex3f(-20, 40, 40);
        glTexCoord2d(1, 1); glVertex3f(20, 40, 40);
        glTexCoord2d(1, 0); glVertex3f(20, 40, -20);
        glEnd();

        // Draw the texture
        glBlendFunc(GL_ONE, GL_ONE);
        glBindTexture(GL_TEXTURE_2D, texture[2]);
        glBegin(GL_QUADS);
        glTexCoord2d(0, 0); glVertex3f(-20, 40, -20);
        glTexCoord2d(0, 1); glVertex3f(-20, 40, 40);
        glTexCoord2d(1, 1); glVertex3f(20, 40, 40);
        glTexCoord2d(1, 0); glVertex3f(20, 40, -20);
        glEnd();

    The problem is that I get the hole in the wall correctly, but it's semi-transparent: there is a black shade over it, and I can also see through it. Any suggestions?
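
    One alternative worth sketching here is to cut the hole with the stencil buffer instead of colour-mask blending; the stencil test rejects fragments outright, so nothing semi-transparent is left behind. This is only a minimal sketch: it assumes the context was created with stencil bits, and drawHoleQuad/drawWallQuad are hypothetical helpers standing in for the quad-drawing code above.

        glClear(GL_STENCIL_BUFFER_BIT);
        glEnable(GL_STENCIL_TEST);

        /* Pass 1: mark the hole region in the stencil buffer, without touching colour or depth. */
        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
        glDepthMask(GL_FALSE);
        glStencilFunc(GL_ALWAYS, 1, 0xFF);
        glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
        drawHoleQuad();                              /* hypothetical: draws the hole geometry */

        /* Pass 2: draw the wall only where the stencil was NOT marked. */
        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
        glDepthMask(GL_TRUE);
        glStencilFunc(GL_EQUAL, 0, 0xFF);
        glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
        drawWallQuad();                              /* hypothetical: draws the textured wall */

        glDisable(GL_STENCIL_TEST);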

    Read the article

  • How to run OpenGL code without compiling?

    - by Ole Jak
    So I have some OpenGL code (code such as this, for example):

        /*
         * FUNCTION:    YCamera::CalculateWorldCoordinates
         * ARGUMENTS:   x    mouse x coordinate
         *              y    mouse y coordinate
         *              vec  where to store coordinates
         * RETURN:      n/a
         * DESCRIPTION: Convert mouse coordinates into world coordinates
         */
        void YCamera::CalculateWorldCoordinates(float x, float y, YVector3 *vec)
        {
            // START
            GLint viewport[4];
            GLdouble mvmatrix[16], projmatrix[16];
            GLint real_y;
            GLdouble mx, my, mz;

            glGetIntegerv(GL_VIEWPORT, viewport);
            glGetDoublev(GL_MODELVIEW_MATRIX, mvmatrix);
            glGetDoublev(GL_PROJECTION_MATRIX, projmatrix);
            real_y = viewport[3] - (GLint) y - 1;  // viewport[3] is the height of the window in pixels
            gluUnProject((GLdouble) x, (GLdouble) real_y, 1.0, mvmatrix, projmatrix, viewport, &mx, &my, &mz);

            /* 'mouse' is the point where the mouse projection reaches FAR_PLANE.
               World coordinates are the intersection of line(camera->mouse) with plane(z=0) (see LaMothe 306).
               Equation of a line in 3D: (x-x0)/a = (y-y0)/b = (z-z0)/c
               Intersection of the line with the plane z = 0:
               x - x0 = a(z-z0)/c  <=>  x = x0 + a(0-z0)/c  <=>  x = x0 - a*z0/c
               y = y0 - b*z0/c */
            double lx = fPosition.x - mx;
            double ly = fPosition.y - my;
            double lz = fPosition.z - mz;
            double sum = lx*lx + ly*ly + lz*lz;
            double normal = sqrt(sum);
            double z0_c = fPosition.z / (lz/normal);

            vec->x = (float) (fPosition.x - (lx/normal)*z0_c);
            vec->y = (float) (fPosition.y - (ly/normal)*z0_c);
            vec->z = 0.0f;
        }

    I want to run it without precompiling it. Is there any way to do such a thing?

    Read the article

  • Glitch when moving camera in OpenGL

    - by CG
    I am writing a tile-based game engine for the iPhone and it works in general, apart from the following glitch. The camera always keeps the player in the centre of the screen, and it moves to follow the player correctly and draws everything correctly while stationary. However, while the player is moving, the tiles of the surface the player is walking on glitch: black gaps appear between them, whereas the stationary view is correct. Does anyone have any idea why this could be? Thanks for the responses so far. Floating-point error was my first thought too, and I tried slightly increasing the size of the tiles, but this did not help. Changing glClearColor to red still leaves black gaps, so maybe it isn't floating-point error. Since the tiles in general will use different textures, I don't know if vertex arrays can be used (I always thought that the same texture had to be applied to everything in the array; correct me if I'm wrong), and I don't think VBOs are available in OpenGL ES. Setting the filtering to nearest neighbour improved things, but the glitch still happens every ten frames or so, and the pixelly result means that this solution is not viable anyway. The main difference between what I'm doing now and what I've done in the past is that this time I am moving the camera rather than the stationary objects in the world (i.e. the tiles; the player is still being moved). The code I'm using to move the camera is:

        void Camera::CentreAtPoint(GLfloat x, GLfloat y)
        {
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            glOrthof(x - size.x / 2.0f, x + size.x / 2.0f,
                     y + size.y / 2.0f, y - size.y / 2.0f, 0.01f, 5.0f);
            glMatrixMode(GL_MODELVIEW);
        }

    Is there a problem with doing things this way, and if so, is there a solution?
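
    One mitigation often suggested for seams like this (a sketch only, not verified against this particular engine) is to snap the camera centre to whole units before building the projection, so tile edges always land on the same pixel boundaries while the camera moves:

        void Camera::CentreAtPoint(GLfloat x, GLfloat y)
        {
            // Snap the centre to whole world units (one unit per pixel assumed here)
            // so tile edges stay aligned with the pixel grid during camera movement.
            x = floorf(x + 0.5f);
            y = floorf(y + 0.5f);

            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            glOrthof(x - size.x / 2.0f, x + size.x / 2.0f,
                     y + size.y / 2.0f, y - size.y / 2.0f, 0.01f, 5.0f);
            glMatrixMode(GL_MODELVIEW);
        }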

    Read the article

  • Returning an OpenGL display callback in D

    - by Max
    I've written a simple hello world OpenGL program in D, using the converted gl headers here. My code so far:

        import std.string;
        import c.gl.glut;

        Display_callback display() {
            return Display_callback // line 7
            {
                return; // just display a blank window
            };
        } // line 10

        void main(string[] args) {
            glutInit(args.length, args);
            glutInitDisplayMode(GLUT_RGB | GLUT_DEPTH | GLUT_DOUBLE);
            glutInitWindowSize(800, 600);
            glutCreateWindow("Hello World");
            glutDisplayFunc(display);
            glutMainLoop();
        }

    My problem is with the display() function. glutDisplayFunc() expects a function that returns a Display_callback, which is typedef'd as typedef GLvoid function() Display_callback;. When I try to compile, dmd says:

        line 7: found '{' when expecting ';' following return statement
        line 10: unrecognized declaration

    How do I properly return the Display_callback here? Also, how do I change D strings and string literals into char*? My calls to glutInit and glutCreateWindow don't like the D strings they're getting. Thanks for your help.

    Read the article

  • Cleaning up when exiting an OpenGL app

    - by Daniel
    This might be a dumb question, but I've spent some time asking Google and haven't been able to find anything. I have an OS X OpenGL app I'm trying to modify. When I create the app, a whole bunch of initialisation functions are called -- including methods where I can specify my own mouse and keyboard handlers etc. For example:

        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_DEPTH | GLUT_DOUBLE | GLUT_RGBA);
        glutInitWindowPosition(100, 100);
        glutInitWindowSize(700, 700);
        glutCreateWindow("Map Abstraction");
        glutReshapeFunc(resizeWindow);
        glutDisplayFunc(renderScene);
        glutIdleFunc(renderScene);
        glutMouseFunc(mousePressedButton);
        glutMotionFunc(mouseMovedButton);
        glutKeyboardFunc(keyPressed);

    At some point I pass control to glutMainLoop and my application runs. In the process of running I create a whole bunch of objects. I'd like to clean these up. Is there any way I can tell GLUT to call a cleanup method before it quits?
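
    Classic GLUT's glutMainLoop never returns and the process leaves via exit(), so one common workaround (sketched below under that assumption; cleanupScene is a hypothetical routine) is to register the cleanup with the C standard library's atexit before entering the main loop. freeglut additionally offers glutLeaveMainLoop as an extension, which lets control return to main normally.

        #include <stdlib.h>

        /* Hypothetical cleanup routine: delete scene objects, free textures, etc. */
        static void cleanupScene(void)
        {
            /* ... release everything created after glutCreateWindow ... */
        }

        /* ... after the glut* setup calls shown above ... */
        atexit(cleanupScene);   /* runs when GLUT terminates the process via exit() */
        glutMainLoop();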

    Read the article

  • Android OpenGL: releasing textures

    - by user1642418
    I have a bit of a problem. I am developing a game (plus engine) for Android and I have got stuck. I am getting an OpenGL out-of-memory error, and either the app crashes or the phone hangs after loading a scene multiple times. For example: the app launches, shows the main menu, and the first level/scene is loaded. Then I go back to the main menu and repeat. It doesn't matter which scene I load; after 4-6 times the error occurs. Some background: each time a scene is loaded, all the resources are released, and upon the first frame render the needed resources get loaded. The performance is more or less OK. Note that I am calling glDeleteTextures, but I think it's not doing its job and releasing the memory. The thing is, when I minimize the app and open it again, the problem doesn't occur, even though almost the same things are executed - this way Android releases the memory. How do I release/get rid of unused textures properly? This happens on an HTC Desire HD (Ice Cream Sandwich, 4.0.4). Other games work fine, so I bet the problem is not in the ROM.
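
    A frequent cause of textures that refuse to die on Android is issuing the deletes on a thread where no EGL context is current (for example from the UI thread during scene teardown): the glDeleteTextures calls then fail silently and the old textures stay resident. A minimal sketch of a delete path, assuming it runs on the renderer thread and that textureIds/textureCount are the engine's own bookkeeping (hypothetical names):

        #include <GLES/gl.h>
        #include <string.h>

        /* Must run on the GL thread while the EGL context is current;
           without a current context the call is silently dropped. */
        static void releaseSceneTextures(GLuint *textureIds, GLsizei textureCount)
        {
            glDeleteTextures(textureCount, textureIds);                    /* free the GL texture objects */
            memset(textureIds, 0, (size_t)textureCount * sizeof(GLuint));  /* forget the stale ids        */
        }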

    Read the article

  • OpenGL Pixel Format Attributes (NSOpenGLPixelFormatAttributes) explanation?

    - by nacho4d
    Hi, I am not new to OpenGL, but not an expert either. Many tutorials teach how to draw (3D, 2D, projections, orthographic, etc.), but what about setting up the view itself (NSOpenGLView in Cocoa, on the Mac)? For example I have this:

        - (id)initWithFrame:(NSRect)frame
        {
            GLuint attribs[] = {   // PF: pixel format attributes
                NSOpenGLPFANoRecovery,
                NSOpenGLPFAWindow,
                NSOpenGLPFAAccelerated,
                NSOpenGLPFADoubleBuffer,
                NSOpenGLPFAColorSize, 24,
                NSOpenGLPFAAlphaSize, 8,
                NSOpenGLPFADepthSize, 24,
                NSOpenGLPFAStencilSize, 8,
                NSOpenGLPFAAccumSize, 0,
                0
            };
            NSOpenGLPixelFormat* fmt = [[NSOpenGLPixelFormat alloc]
                initWithAttributes:(NSOpenGLPixelFormatAttribute*) attribs];
            return self = [super initWithFrame:frame pixelFormat:[fmt autorelease]];
        }

    And I don't understand their usage very well, especially when combining them. For example: if I want my view to be capable of full screen, should I specify NSOpenGLPFAFullScreen only, or both? (By "capable" I mean not always in full screen.) Regarding double buffering, what is this exactly? (Below is Apple's definition.)

        If present, this attribute indicates that only double-buffered pixel formats are considered.
        Otherwise, only single-buffered pixel formats are considered.

    Regarding colour: if NSOpenGLPFAColorSize is 24 and NSOpenGLPFAAlphaSize is 8, does that mean the alpha and RGB components are treated separately? What happens if I set the former to 32 and the latter to 0? Etc., etc. In general, how do I learn to set up my view from scratch? Thanks in advance. Ignacio.

    Read the article

  • Build OpenGL model in parallel?

    - by Brendan Long
    I have a program which draws some terrain and simulates water flowing over it (in a cheap and easy way). Updating the water was easy to parallelize using OpenMP, so I can do ~50 updates per second. The problem is that even with a small amount of water, my draws per second are very, very low (starting at 5 and dropping to around 2 once there's a significant amount of water). It's not a problem with the video card, because the terrain is more complicated and gets drawn so quickly that boost::timer tells me I get "infinity" draws per second if I turn the water off. It may be related to memory bandwidth, though (since I assume the model stays on the card and doesn't have to be transferred every time). What I'm concerned about is that on every draw I'm calling glVertex3f() about a million times (max size is 450*600, 4 vertices each), and it's done entirely sequentially because GLUT won't let me call anything in parallel. So... is there some way of building the list in parallel and then passing it to OpenGL all at once? Or some other way of making it draw faster? Am I using the wrong method (besides the obvious "use fewer vertices")?
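
    One way to get the per-vertex calls out of the loop, sketched below, is to fill a plain vertex array in parallel (the glVertex3f calls themselves cannot be made from multiple threads, but writing into memory can) and then hand the whole array to OpenGL in a single glDrawArrays call. This is only a sketch: heightAt is a hypothetical stand-in for the water/terrain height lookup, and the grid dimensions are assumptions.

        #include <GL/gl.h>

        extern GLfloat heightAt(int x, int y);   /* hypothetical height lookup */

        /* Build one quad (4 vertices * 3 floats) per cell in parallel, then draw once. */
        void drawWater(int width, int height, GLfloat *verts /* width*height*12 floats */)
        {
            #pragma omp parallel for
            for (int i = 0; i < width * height; ++i) {
                int x = i % width, y = i / width;
                GLfloat *v = verts + i * 12;
                v[0] = (GLfloat)x;       v[1]  = (GLfloat)y;       v[2]  = heightAt(x, y);
                v[3] = (GLfloat)(x + 1); v[4]  = (GLfloat)y;       v[5]  = heightAt(x + 1, y);
                v[6] = (GLfloat)(x + 1); v[7]  = (GLfloat)(y + 1); v[8]  = heightAt(x + 1, y + 1);
                v[9] = (GLfloat)x;       v[10] = (GLfloat)(y + 1); v[11] = heightAt(x, y + 1);
            }

            glEnableClientState(GL_VERTEX_ARRAY);
            glVertexPointer(3, GL_FLOAT, 0, verts);
            glDrawArrays(GL_QUADS, 0, width * height * 4);   /* one submission, no per-vertex calls */
            glDisableClientState(GL_VERTEX_ARRAY);
        }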

    Read the article

  • OpenGL GL_LINES endpoints not joining

    - by old-school rules
    I'm having problems with the GL_LINES block... the lines in the sample below do not connect at the ends (although sometimes it randomly decides to connect a corner or two). Instead, the endpoints come within 1 pixel of one another (leaving a corner that is not fully squared, if that makes sense). It is a simple block to draw a solid 1-pixel rectangle.

        glBegin(GL_LINES);
        glColor3b(cr, cg, cb);
        glVertex3i(pRect->left,  pRect->top,    0); glVertex3i(pRect->right, pRect->top,    0);
        glVertex3i(pRect->right, pRect->top,    0); glVertex3i(pRect->right, pRect->bottom, 0);
        glVertex3i(pRect->right, pRect->bottom, 0); glVertex3i(pRect->left,  pRect->bottom, 0);
        glVertex3i(pRect->left,  pRect->bottom, 0); glVertex3i(pRect->left,  pRect->top,    0);
        glEnd();

    The sample below seems to correct the problem, giving me sharp, square corners; but I can't accept it because I don't know why it's acting this way...

        glBegin(GL_LINES);
        glColor3b(cr, cg, cb);
        glVertex3i(pRect->left,  pRect->top,    0); glVertex3i(pRect->right + 1, pRect->top,        0);
        glVertex3i(pRect->right, pRect->top,    0); glVertex3i(pRect->right,     pRect->bottom + 1, 0);
        glVertex3i(pRect->right, pRect->bottom, 0); glVertex3i(pRect->left - 1,  pRect->bottom,     0);
        glVertex3i(pRect->left,  pRect->bottom, 0); glVertex3i(pRect->left,      pRect->top - 1,    0);
        glEnd();

    Any OpenGL programmers out there who can help, I would appreciate it :)

    Read the article

  • State in OpenGL

    - by newprogrammer
    This is some simple code that draws to the screen.

        GLuint vbo;
        glGenBuffers(1, &vbo);
        glUseProgram(myProgram);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glEnableVertexAttribArray(0);
        glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, 0);
        // Fill up my VBO with vertex data
        glBufferData(GL_ARRAY_BUFFER, sizeof(vertexes), &vertexes, GL_STATIC_DRAW);
        /* Draw to the screen */

    This works fine. However, I tried changing the order of some GL calls like so:

        GLuint vbo;
        glGenBuffers(1, &vbo);
        glUseProgram(myProgram);
        glEnableVertexAttribArray(0);
        glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, 0);
        // Now comes after the setting of the vertex attributes.
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        // Fill up my VBO with vertex data
        glBufferData(GL_ARRAY_BUFFER, sizeof(vertexes), &vertexes, GL_STATIC_DRAW);
        /* Draw to the screen */

    This crashes my program. Why does there need to be a VBO bound to GL_ARRAY_BUFFER while I'm just setting up vertex attributes? To me, what glVertexAttribPointer does is just set up the format of vertexes that OpenGL will eventually use to draw things. It is not specific to any VBO. Thus, if multiple VBOs wanted to use the same vertex format, you would not need to format the vertexes in the VBO again.
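
    For what it's worth, glVertexAttribPointer records the buffer that is bound to GL_ARRAY_BUFFER at the moment of the call; with no buffer bound, the last argument is instead treated as a raw client-memory pointer (NULL here), which is the likely reason the reordered version crashes at draw time. So even when several VBOs share one vertex format, each still needs its own pointer call while it is bound - a rough sketch (vboA/vboB and the vertex counts are hypothetical):

        glBindBuffer(GL_ARRAY_BUFFER, vboA);
        glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, 0);   /* captures vboA, offset 0 */
        glDrawArrays(GL_TRIANGLES, 0, vertexCountA);

        glBindBuffer(GL_ARRAY_BUFFER, vboB);
        glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, 0);   /* captures vboB, offset 0 */
        glDrawArrays(GL_TRIANGLES, 0, vertexCountB);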

    Read the article

  • My favorite BI and DW blogs 7: Oracle Data Warehousing

    - by Fekete Zoltán
    I recommend the following substantial blog to the gentle reader's attention: The Data Warehouse Insider: http://blogs.oracle.com/datawarehousing/ It ranges from general data warehousing concepts to "best practice" lessons learned from implementations and design. Topics: star schemas, partitioning, OLAP, 3NF, parallel processing, data loading, ETL-ELT, data models, events, Exadata, Database Machine, compression, data mining, customer stories, ...

    Read the article

  • Constant opacity with glBlendFunc on iPhone

    - by Jeff Johnson
    What glBlendFunc should I use to ensure that the opacity of my drawing is always the same? When I use glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) and multiple images are drawn on top of each other, the result becomes more and more opaque until it's completely opaque after a certain number of images. The closest I have come is glBlendFunc(GL_ONE_MINUS_DST_ALPHA, GL_ONE_MINUS_SRC_ALPHA), which maintains a constant opacity no matter how many images are on top of each other, although there is a slight variation in opacity where the images overlap. Any other render states I should consider trying? Any other ideas? I am making a drawing app for my kid and I don't want the images (brush strokes) they draw to cover up the background. I want the overlapping part of the circles to be the same colour and opacity as the centre part of each circle. I am using cocos2d-iphone v0.99.

    Read the article

  • Setting ModelView matrix using rotate, translate, etc.. vs setting manual matrix

    - by guymic
    When setting the ModelView matrix you normally go through several transformations from the identity matrix. For example:

        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        glRotatef(270.0f, 0.0f, 0.0f, 1.0f);
        glTranslatef(-rect.size.height / 2, -rect.size.width / 2, 0.0f);

    Instead of doing those operations one after the other (assume there are more than two), wouldn't it be more efficient to simply pre-calculate the resulting matrix and set the ModelView matrix to this manual matrix?
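
    For reference, loading a precomputed matrix in one call might look like the sketch below. buildModelView is a hypothetical helper that fills the 16 floats (column-major, as OpenGL expects) with the combined rotation and translation; whether this is actually faster than a couple of glRotatef/glTranslatef calls is worth measuring, since the driver performs the same small matrix multiplies either way.

        GLfloat m[16];
        buildModelView(m);        /* hypothetical: fills m with R(270 deg about Z) * T(-h/2, -w/2, 0) */

        glMatrixMode(GL_MODELVIEW);
        glLoadMatrixf(m);         /* replaces glLoadIdentity + glRotatef + glTranslatef */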

    Read the article

  • Getting the MODELVIEW matrix...

    - by james.ingham
    Hi, I've been pulling my hair out trying to get some matrix calculations working properly, and started to wonder. If I have the following:

        glPushMatrix();
        float m[16];
        glGetFloatv(GL_MODELVIEW_MATRIX, m);
        glPopMatrix();

    What should I expect the values of m to be? Currently I'm getting these values and I'm confused as to where they're coming from:

        -1,  0,           0,          0,
         0, -0.6139,      0.7893522,  0,
         0,  0.789352238, 0.61394,    0,
         0,  0.0955992,  -1.344529,   1

    I'm assuming there is something which affects this, but I'm not sure what. Could anyone help? I've tried changing pretty much everything, but every time I push the matrix stack I always get this matrix straight away! I don't think this makes a difference, but I'm using OpenGL ES. Thanks

    Read the article

  • 3D Texture mapping

    - by Joe Cannatti
    In an .obj file, it is possible to specify three values on a vt line:

        vt 0.769645 0.729072 0.00000000

    The .obj spec says it's for "depth". What does this actually do, and when is it useful?

    Read the article

  • EAGLView orientation changes and strange buffering

    - by Drew
    I'm writing an app that offloads some heavy drawing into an EAGLView, and it does some lightweight stuff in UIKit on top. It seems that the GL render during an orientation change is cached somewhere, and I can't figure out how to get access to it. Specifically:

    1. After an orientation change, calling glClear(GL_COLOR_BUFFER_BIT) isn't enough to clear the GL display (is the drawing cached somewhere?). How can I clear this cache?
    2. After an orientation change, glReadPixels() can no longer access pixels drawn before the orientation change. How can I get access to where this is stored?

    Read the article

  • Corrupted image if variable is not static

    - by Jaka Jancar
    I'm doing the following:

        static GLfloat vertices[3][3] = {
            { 0.0,  1.0, 0.0},
            { 1.0,  0.0, 0.0},
            {-1.0,  0.0, 0.0}
        };

        glColor4ub(255, 0, 0, 255);
        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(3, GL_FLOAT, 0, vertices);
        glDrawArrays(GL_TRIANGLES, 0, 9);
        glDisableClientState(GL_VERTEX_ARRAY);

    This works OK. However, if I remove static from vertices and therefore re-create the data on the stack on each rendering, I get a corrupted image. This happens both on the simulator and on the device. Should I be keeping the variables around after I call glDrawArrays?
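
    One detail worth flagging (a likely culprit, though not verified against the full app): the last argument of glDrawArrays counts vertices, not floats, so for a 3-vertex array it should be 3. Passing 9 makes GL read six vertices past the end of the array; with a static array that out-of-bounds memory happens to be stable, while on the stack it is garbage that changes every frame. A corrected sketch:

        static const GLfloat vertices[3][3] = {
            { 0.0f, 1.0f, 0.0f },
            { 1.0f, 0.0f, 0.0f },
            {-1.0f, 0.0f, 0.0f }
        };

        glColor4ub(255, 0, 0, 255);
        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(3, GL_FLOAT, 0, vertices);
        glDrawArrays(GL_TRIANGLES, 0, 3);   /* 3 vertices, not 9 floats */
        glDisableClientState(GL_VERTEX_ARRAY);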

    Read the article

  • OpenGLES - Rendering a background image only once and not wiping it

    - by chaosbeaker
    Hello, first time asking a question here, but I've been watching others' answers for a while. My question is about improving the performance of my program. Currently I'm wiping the viewFramebuffer on each pass through my program and then rendering the background image first, followed by the rest of my scene. I was wondering how I go about rendering the background image once, and only wiping the rest of the scene for updating/re-rendering. I tried using a separate buffer, but I'm not sure how to present this new buffer to the render buffer.

        // Set the current EAGLContext and bind to the framebuffer. This will direct all OGL commands to the
        // framebuffer and the associated renderbuffer attachment which is where our scene will be rendered
        [EAGLContext setCurrentContext:context];
        glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);

        // Define the viewport. Changing the settings for the viewport can allow you to scale the viewport
        // as well as the dimensions etc, so I'm setting it for each frame in case we want to change it
        glViewport(0, 0, screenBounds.size.width, screenBounds.size.height);

        // Clear the screen. If we are going to draw a background image then this clear is not necessary
        // as drawing the background image will destroy the previous image
        glClearColor(0.0f, 1.0f, 0.0f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);

        // Setup how the images are to be blended when rendered. This could be changed at different points
        // during the render process if you wanted to apply different effects
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

        switch (currentViewInt) {
            case 1: {
                [background render:CGPointMake(240, 0) fromTopLeftBottomRightCenter:@"Bottom"];
                // Other rendering code
            }
        }

        // Bind to the renderbuffer and then present this image to the current context
        glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
        [context presentRenderbuffer:GL_RENDERBUFFER_OES];

    Hopefully by solving this I'll also be able to implement another buffer just for rendering particles, as I can set them to always use a black background as their alpha source. Any help is greatly appreciated.

    Read the article

  • What does setting the GL color before doing a texture mapping operation do?

    - by quixoto
    I am looking at some sample code in a book that creates a jittered antialiasing effect by repeatedly rendering a scene (at different offsets) onto an offscreen texture, then using that texture to repeatedly draw a quad in the main view with some blend stuff set up. To accumulate the color "correctly", the code sets the color like so:

        glColor4f(f, f, f, 1);

    where f is 1.0/number_of_samples, and then binds the offscreen texture and renders it. Since textures come with their own color and alpha data, what is the effect (mathematically and intuitively) that setting the overall "color" in advance achieves? Thanks.
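
    A possible reading, assuming the default GL_MODULATE texture environment is in effect (it is the default unless the sample changes it) and the passes are blended additively: the incoming fragment colour is the texel colour multiplied component-wise by the current colour, so each pass contributes one N-th of its sample and the N passes sum to the average of the jittered renders:

        pass_i = texel_i * (1/N)                  /* GL_MODULATE: texture colour x current colour */
        result = pass_1 + pass_2 + ... + pass_N
               = (texel_1 + ... + texel_N) / N    /* the average of the N jittered renders */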

    Read the article

  • Set brightness, contrast on selected image

    - by Viral
    Hi, I want to set brightness and contrast on an image selected from the iPhone photo library. Is there any way to do this, with or without OpenGL ES? I've got some code from developer.apple.com, but that is for a single image only, and I can't use images from the photo library. If anyone knows how to do this, please let me know. Regards, Viral.

    Read the article

  • How to set an image on an EAGLView

    - by Viral
    Hi, I want to put images from my photo library into the EAGLView for some further processing. Images that are already in our resources folder load fine by themselves, but multiple images, or images from the photo library, don't. So, does anyone know how to put an image onto an EAGLView in OpenGL ES? Regards, Viral.

    Read the article

  • Texture repeats even with GL_CLAMP_TO_EDGE set

    - by Lliane
    Hi, I'm trying to put a translucent texture on a face which uses points 1 to 4 (don't mind the numbers) in my scene. Sadly, the texture repeats itself in both dimensions. I tried switching TEXTURE_WRAP_S from REPEAT to CLAMP_TO_EDGE, but it doesn't change anything. The texture loading code is here:

        gl.glBindTexture(gl.GL_TEXTURE_2D, mTexture.get(4));
        gl.glActiveTexture(4);
        gl.glTexParameterf(gl.GL_TEXTURE_2D, gl.GL_TEXTURE_MIN_FILTER, gl.GL_LINEAR);
        gl.glTexParameterf(gl.GL_TEXTURE_2D, gl.GL_TEXTURE_MAG_FILTER, gl.GL_LINEAR);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_CLAMP_TO_EDGE);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_CLAMP_TO_EDGE);
        gl.glTexImage2D(gl.GL_TEXTURE_2D, 0, gl.GL_RGBA, shadowbmp.width, shadowbmp.height, 0,
                        gl.GL_RGBA, gl.GL_UNSIGNED_SHORT_4_4_4_4, shadowbmp.buffer);

    The texture coordinates are the following:

        float shadow_bot_text[] = {
            0.0f, 0.0f,
            0.0f, 1.0f,
            1.0f, 0.0f,
            1.0f, 1.0f
        };

    Thanks

    Read the article

  • How to get X,Y,Z rotations of vertices on a sphere at the origin?

    - by Stoff81
    Hey, I have a sphere in my game world and I would like to place a plane at each vertex on this sphere for debugging purposes. The planes should be oriented so that they lie flat against the sphere (perpendicular to the normals). The sphere is located at the origin, so all the vertices are relative to that. If my thinking is correct, I should be able to do this using the positions of the vertices and some simple trigonometry. I have tried a few combinations but have had no joy yet. I would greatly appreciate some help on this. Thanks. Here is my code:

        float xRot = RADIANS_TO_DEGREES(sinf(vertex.x / PLANET_RADIUS));
        float yRot = RADIANS_TO_DEGREES(cosf(vertex.y / PLANET_RADIUS));
        glRotatef(xRot, 1.0, 0, 0);
        glRotatef(yRot, 0, 1.0, 0);
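
    One approach that sidesteps separate X/Y Euler angles is a single angle-axis rotation that takes the plane's facing direction onto the vertex normal. The sketch below assumes the plane is modelled facing +Z and that the sphere is centred at the origin, so the unit normal is just the vertex divided by the radius (orientPlaneAt is a hypothetical helper, and the degenerate case where the normal is parallel to +Z is left unhandled):

        #include <math.h>

        void orientPlaneAt(float vx, float vy, float vz, float planetRadius)
        {
            /* Unit normal at the vertex: points from the sphere centre through the vertex. */
            float nx = vx / planetRadius, ny = vy / planetRadius, nz = vz / planetRadius;

            /* Rotate +Z onto n: axis = cross((0,0,1), n) = (-ny, nx, 0), angle = acos(dot((0,0,1), n)). */
            float ax = -ny, ay = nx, az = 0.0f;
            float angleDeg = acosf(nz) * 180.0f / 3.14159265f;

            glTranslatef(vx, vy, vz);          /* move the plane onto the vertex        */
            glRotatef(angleDeg, ax, ay, az);   /* tilt it so it lies flat on the sphere */
        }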

    Read the article
