Search Results

Search found 3875 results on 155 pages for 'opengl es lighting'.


  • For normal mapping, why can we not simply add the tangent normal to the surface normal?

    - by sebf
    I am looking at implementing bump mapping (which, in all the implementations I have seen, is really normal mapping), and so far everything I have read says that we create a matrix to convert from world space to tangent space, in order to transform the light and eye direction vectors into tangent space, so that the vectors from the normal map may be used directly in place of those passed through from the vertex shader. What I do not understand, though, is why we cannot just use the normalised sum of the sampled normal vector and the surface normal (assuming we already transform and pass through the surface normal for the existing lighting functions). Take the diagram below; the normal is simply the deviation from the 'reference normal' for any given coordinate system, correct? And transforming the surface normal of a mapped surface from world space to tangent space makes it equivalent to the tangent-space 'reference normal', no? If so, why do we transform all lighting vectors into tangent space, instead of simply transforming the sampled normal once in the pixel shader?
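    To make the distinction concrete, here is a minimal standalone sketch (hypothetical Vec3 helpers, not code from the question): the sampled normal's components are coordinates in the tangent frame, so they act as weights on the T, B and N axes; simply adding the sample to the world-space normal mixes two different coordinate systems, and only coincides with the correct result when the tangent frame happens to line up with the world axes.

        #include <cmath>

        // A sketch, not a drop-in implementation: compare the TBN transform
        // with the naive sum the question proposes.
        struct Vec3 { float x, y, z; };

        Vec3 add(Vec3 a, Vec3 b)    { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
        Vec3 scale(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

        Vec3 normalize(Vec3 a)
        {
            float len = std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z);
            return scale(a, 1.0f / len);
        }

        // Correct: the sampled value (already remapped from [0,1] to [-1,1])
        // weights the tangent, bitangent and normal axes.
        Vec3 tangentToWorld(Vec3 s, Vec3 T, Vec3 B, Vec3 N)
        {
            return normalize(add(add(scale(T, s.x), scale(B, s.y)), scale(N, s.z)));
        }

        // Naive: only agrees with the above when T=(1,0,0), B=(0,1,0), N=(0,0,1).
        Vec3 naiveSum(Vec3 s, Vec3 N)
        {
            return normalize(add(s, N));
        }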

    Read the article

  • Another marriage plan on the database market: Sybase and SAP

    - by Fekete Zoltán
    In the past year, Oracle has acquired more than 50 companies on the technology and application side, most recently Sun, a company innovative in hardware, operating systems, Java, IDM, virtualization and many other areas. Moreover, Oracle strengthens its portfolio with best-of-breed, that is, industry-leading, companies and solutions. According to Gartner, Oracle has also been in the leaders' Magic Quadrant for data warehousing for many years. The leading solution in this area is running Oracle Database on optimized hardware: Exadata / Database Machine. Oracle Database is optimized for transaction processing, for data warehouse workloads, and for running both in a single environment. SAP previously rather condemned Oracle's best-of-breed acquisition strategy, saying it would lead nowhere. :) Now, among the remaining independent companies, it has set its sights on Sybase. The BBC report. Does 5.8 billion dollars seem a bit much? Interestingly, according to the article, SAP's share price fell 40 cents on the news of the acquisition plan.

    Read the article

  • OpenGL alpha blending issue with back face visible.

    - by Max
    I'm trying to display "transparent" surfaces (not closed volumes) with both the front face and the back face visible (not culled), for example a cone or cylinder where the transparency is applied on both sides. There are some visible artifacts where part of the surface does not seem to handle the alpha values correctly. The issue, it seems, arises when OpenGL applies the alpha from the front side of the surface to the back side of the surface (when both the inside and the outside of the surface are visible).

        void init()
        {
            glMatrixMode(GL_PROJECTION);
            gluPerspective(/* field of view in degrees */ 40.0,
                           /* aspect ratio */ 1.0,
                           /* Z near */ 1.0,
                           /* Z far */ 10.0);
            glMatrixMode(GL_MODELVIEW);
            gluLookAt(0.0, 0.0, 5.0,   /* eye is at (0,0,5) */
                      0.0, 0.0, 0.0,   /* center is at (0,0,0) */
                      0.0, 1.0, 0.);   /* up is in positive Y direction */
            glTranslatef(0.0, 0.6, -1.0);

            glEnable(GL_LIGHTING);
            glEnable(GL_LIGHT0);
            glLightfv(GL_LIGHT0, GL_AMBIENT, light0_ambient);
            glLightfv(GL_LIGHT0, GL_DIFFUSE, light0_diffuse);
            glLightfv(GL_LIGHT1, GL_DIFFUSE, light1_diffuse);
            glLightfv(GL_LIGHT1, GL_POSITION, light1_position);
            glLightfv(GL_LIGHT2, GL_DIFFUSE, light2_diffuse);
            glLightfv(GL_LIGHT2, GL_POSITION, light2_position);

            glClearDepth(1.0f);
            glEnable(GL_DEPTH_TEST);
            //glEnable(GL_CULL_FACE);
            glFrontFace(GL_CW);
            glShadeModel(GL_SMOOTH);
            glEnable(GL_BLEND);
            glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        }

        void draw()
        {
            static GLfloat amb[] = {0.4f, 0.4f, 0.4f, 0.0f};
            static GLfloat dif[] = {1.0f, 1.0f, 1.0f, 0.0f};
            static GLfloat back_amb[] = {0.4f, 0.4f, 0.4f, 1.0f};
            static GLfloat back_dif[] = {1.0f, 1.0f, 1.0f, 1.0f};

            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            glEnable(GL_LIGHT1);
            glDisable(GL_LIGHT2);
            amb[3] = dif[3] = 0.5f; // cos(s) / 2.0f + 0.5f;
            glMaterialfv(GL_FRONT, GL_AMBIENT, amb);
            glMaterialfv(GL_FRONT, GL_DIFFUSE, dif);
            glMaterialfv(GL_BACK, GL_AMBIENT, back_amb);
            glMaterialfv(GL_BACK, GL_DIFFUSE, back_dif);

            glPushMatrix();
            glTranslatef(-0.3f, -0.3f, 0.0f);
            glRotatef(angle1, 1.0f, 5.0f, 0.0f);
            glutSolidCone(1.0, 1.0, 50, 2);
            glPopMatrix();

            ///...
            SwapBuffers(wglGetCurrentDC()); // glutSwapBuffers();
        }

    The code is based on http://www.sgi.com/products/software/opengl/examples/glut/examples/source/blender.c. tinyurled links to two images on flickr showing the issue (from our production code, not the code above, but both show the same kind of problems): http://flic.kr/p/99soxy and http://flic.kr/p/99pg18. Thanks. Max.
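    A common way to reduce this kind of artifact (a sketch of the usual two-pass technique, not the asker's code) is to draw the transparent object twice with face culling, so the far side is blended into the framebuffer before the near side:

        // Sketch: render back faces first, then front faces, so blending
        // composites the two sides in a consistent back-to-front order.
        void drawTransparentCone()
        {
            glEnable(GL_CULL_FACE);

            glCullFace(GL_FRONT);            // pass 1: keep only back faces
            glutSolidCone(1.0, 1.0, 50, 2);

            glCullFace(GL_BACK);             // pass 2: keep only front faces
            glutSolidCone(1.0, 1.0, 50, 2);

            glDisable(GL_CULL_FACE);
        }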

    Read the article

  • OpenGL texture misaligned on quad

    - by user308226
    I've been having trouble with this for a while now, and I haven't found a solution that works yet. Here is the problem, and the specifics: I am loading a 256x256 uncompressed TGA into a simple OpenGL program that draws a quad on the screen, but when it shows up, it is shifted about two pixels to the left, with the cropped part appearing on the right side. It has been baffling me for the longest time; people have suggested clamping and such, but I suspect my problem is something really simple that I just can't spot. Here is a screenshot comparing the TGA (left) and how it appears in the program (right) for clarity. Also note that there's a tiny black pixel in the upper right corner; I'm hoping that's related to the same problem. Here's the code for the loader; I'm convinced that my problem lies in the way I'm loading the texture. Thanks in advance to anyone who can fix my problem.

        bool TGA::LoadUncompressedTGA(char *filename, ifstream &texturestream)
        {
            cout << "G position status:" << texturestream.tellg() << endl;
            texturestream.read((char*)header, sizeof(header)); // read 6 bytes into the file to get the tga header
            width  = (GLuint)header[1] * 256 + (GLuint)header[0]; // read and calculate width and save
            height = (GLuint)header[3] * 256 + (GLuint)header[2]; // read and calculate height and save
            bpp    = (GLuint)header[4]; // read bpp and save
            cout << bpp << endl;

            if((width <= 0) || (height <= 0) || ((bpp != 24) && (bpp != 32))) // check that width, height and bpp are valid
            {
                return false;
            }

            if(bpp == 24)
            {
                type = GL_RGB;
            }
            else
            {
                type = GL_RGBA;
            }

            imagesize = ((bpp/8) * width * height); // determine size in bytes of the image
            cout << imagesize << endl;
            imagedata = new GLubyte[imagesize]; // allocate memory for our imagedata variable
            texturestream.read((char*)imagedata, imagesize); // read according to the size of the image and save into imagedata

            for(GLuint cswap = 0; cswap < (GLuint)imagesize; cswap += (bpp/8)) // loop through and reverse the tga's BGR format to RGB
            {
                imagedata[cswap] ^= imagedata[cswap+2] ^=   // 1st Byte XOR 3rd Byte XOR 1st Byte XOR 3rd Byte
                imagedata[cswap] ^= imagedata[cswap+2];
            }

            texturestream.close(); // close ifstream because we're done with it
            cout << "image loaded" << endl;

            glGenTextures(1, &texID); // Generate OpenGL texture IDs
            glBindTexture(GL_TEXTURE_2D, texID);
            glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
            glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
            glTexImage2D(GL_TEXTURE_2D, 0, type, width, height, 0, type, GL_UNSIGNED_BYTE, imagedata);
            delete imagedata;
            return true;
        }

        // Public loading function for TGA images. Opens TGA file and determines
        // its type, if any, then loads it and calls the appropriate function.
        // Returns: TRUE on success, FALSE on failure
        bool TGA::loadTGA(char *filename)
        {
            cout << width << endl;
            ifstream texturestream;
            texturestream.open(filename, ios::binary);
            texturestream.read((char*)header, sizeof(header)); // read 6 bytes into the file, its the header.
            // if it matches the uncompressed header's first 6 bytes, load it as uncompressed
            LoadUncompressedTGA(filename, texturestream);
            return true;
        }
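    One thing worth checking (a guess based on the numbers, since header's declaration isn't shown): an uncompressed TGA has exactly an 18-byte header, and the two sizeof(header) reads above, one in loadTGA and one in LoadUncompressedTGA, must consume exactly those 18 bytes before the pixel read. A 6-byte mismatch in either direction is exactly two pixels at 24 bpp, matching the observed shift, and the bytes missing at the far end of the file would leave the last pixels of one corner row black, matching the stray black pixel. A sketch of parsing the full header in one read:

        #include <cstdint>
        #include <fstream>

        // Sketch (assumes a little-endian platform): the 18-byte header of an
        // uncompressed true-color TGA, read in a single pass.
        #pragma pack(push, 1)
        struct TgaHeader {
            uint8_t  idLength;        // bytes of image-ID field that follow the header
            uint8_t  colorMapType;    // 0 = no palette
            uint8_t  imageType;       // 2 = uncompressed true-color
            uint8_t  colorMapSpec[5]; // unused when colorMapType == 0
            uint16_t xOrigin, yOrigin;
            uint16_t width, height;
            uint8_t  bitsPerPixel;    // 24 or 32
            uint8_t  descriptor;      // alpha depth and origin flags
        };
        #pragma pack(pop)

        bool readTgaHeader(std::ifstream& in, TgaHeader& h)
        {
            in.read(reinterpret_cast<char*>(&h), sizeof(h)); // all 18 bytes at once
            in.seekg(h.idLength, std::ios::cur);             // skip optional image ID
            return in.good() && h.imageType == 2;
        }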

    Read the article

  • OpenGL Colorspace Conversion

    - by Steven Behnke
    Does anyone know how to create a texture with a YUV colorspace so that we can get hardware-based YUV-to-RGB colorspace conversion without having to use a fragment shader? I'm using an NVIDIA 9400 and I don't see an obvious GL extension that seems to do the trick. I've found examples of how to use a fragment shader, but the project I'm working on currently only supports OpenGL 1.1, and I don't have time to convert it to 2.0 and perform all the regression testing necessary. This is also targeting Linux. On other platforms I've been using a MESA extension, but it doesn't function on the NVIDIA card.

    Read the article

  • Help solving an error in OpenGL

    - by peiska
    Hi, I am working on a group project in OpenGL, and when I try to build the file that my partner gave me I get this error:

        -------------- Build: Debug in CG ---------------
        Linking console executable: bin/Debug/CG
        ld: library not found for -lGL
        collect2: ld returned 1 exit status
        Process terminated with status 1 (0 minutes, 0 seconds)
        0 errors, 0 warnings

    I've seen the same code working on his computer. Is it because he is working on Windows and I am working on Mac OS X? I am using the Code::Blocks IDE. Can anyone help me solve this?

    Read the article

  • [OpenGL] Loading new texture into already defined texture name

    - by Dan Vogel
    I have an OpenGL program in which I am displaying an image using textures. I want to be able to load a new image to be displayed. In my Init function I call:

        Gl.glGenTextures(1, mTextures);

    Since only one image will be displayed at a time, I am using the same texture name for each image. Each time a new image is loaded, I call the following:

        Gl.glBindTexture(Gl.GL_TEXTURE_2D, mTexture[0]);
        Gl.glTexImage2D(Gl.GL_TEXTURE_2D, 0, Gl.GL_LUMINANCE, mTexSizeX, mTexSizeY, 0,
                        Gl.GL_LUMINANCE, Gl.GL_UNSIGNED_SHORT, mTexBuffer);
        Gl.glTexParameteri(Gl.GL_TEXTURE_2D, Gl.GL_TEXTURE_MIN_FILTER, Gl.GL_LINEAR);
        Gl.glTexParameteri(Gl.GL_TEXTURE_2D, Gl.GL_TEXTURE_MAG_FILTER, Gl.GL_LINEAR);

    The first image displays as expected. However, all images loaded after the first display as all black. What am I doing wrong?
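    If the image dimensions never change between loads, one common pattern (sketched below in C-style OpenGL as an assumption, rather than the question's Tao/C# bindings) is to allocate the texture storage once and then overwrite the pixels with glTexSubImage2D, which also avoids reallocating driver storage on every load:

        // Sketch: allocate once, then update contents per image.
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, texSizeX, texSizeY, 0,
                     GL_LUMINANCE, GL_UNSIGNED_SHORT, NULL);   // storage only

        // ... later, for each new image of the same size (texSizeX, texSizeY
        // and texBuffer are placeholder names):
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, texSizeX, texSizeY,
                        GL_LUMINANCE, GL_UNSIGNED_SHORT, texBuffer);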

    Read the article

  • OpenGL basics: calling glDrawElements once per object

    - by Bethor
    Hi all, continuing on from my explorations of the basics of OpenGL (see this question), I'm trying to figure out the basic principles of drawing a scene with OpenGL. I am trying to render a simple cube repeated n times in every direction. My method appears to yield terrible performance: 1000 cubes brings performance below 50fps (on a QuadroFX 1800, roughly a GeForce 9600GT). My method for drawing these cubes is as follows:

    done once:
    - set up a vertex buffer and array buffer containing my cube vertices in model space
    - set up an array buffer indexing the cube for drawing as 12 triangles

    done for each frame:
    - update uniform values used by the vertex shader to move all cubes at once

    done for each cube, for each frame:
    - update uniform values used by the vertex shader to move each cube to its position
    - call glDrawElements to draw the positioned cube

    Is this a sane method? If not, how does one go about something like this? I'm guessing I need to minimize calls to glUniform, glDrawElements, or both, but I'm not sure how to do that. Full code for my little test (depends on gletools and pyglet) is below. I'm aware that my init code (at least) is really ugly; I'm concerned with the rendering code for each frame right now, and I'll move to something a little less insane for the creation of the vertex buffers and such later on.

        import pyglet
        from pyglet.gl import *
        from pyglet.window import key
        from numpy import deg2rad, tan
        from gletools import ShaderProgram, FragmentShader, VertexShader, GeometryShader

        vertexData = [-0.5, -0.5, -0.5, 1.0,
                      -0.5,  0.5, -0.5, 1.0,
                       0.5, -0.5, -0.5, 1.0,
                       0.5,  0.5, -0.5, 1.0,
                      -0.5, -0.5,  0.5, 1.0,
                      -0.5,  0.5,  0.5, 1.0,
                       0.5, -0.5,  0.5, 1.0,
                       0.5,  0.5,  0.5, 1.0]

        elementArray = [2, 1, 0, 1, 2, 3,  ## back face
                        4, 7, 6, 4, 5, 7,  ## front face
                        1, 3, 5, 3, 7, 5,  ## top face
                        2, 0, 4, 2, 4, 6,  ## bottom face
                        1, 5, 4, 0, 1, 4,  ## left face
                        6, 7, 3, 6, 3, 2]  ## right face

        def toGLArray(input):
            return (GLfloat*len(input))(*input)

        def toGLushortArray(input):
            return (GLushort*len(input))(*input)

        def initPerspectiveMatrix(aspectRatio = 1.0, fov = 45):
            frustumScale = 1.0 / tan(deg2rad(fov) / 2.0)
            fzNear = 0.5
            fzFar = 300.0
            perspectiveMatrix = [frustumScale*aspectRatio, 0.0, 0.0, 0.0,
                                 0.0, frustumScale, 0.0, 0.0,
                                 0.0, 0.0, (fzFar+fzNear)/(fzNear-fzFar), -1.0,
                                 0.0, 0.0, (2*fzFar*fzNear)/(fzNear-fzFar), 0.0]
            return perspectiveMatrix

        class ModelObject(object):
            vbo = GLuint()
            vao = GLuint()
            eao = GLuint()
            initDone = False
            verticesPool = []
            indexPool = []

            def __init__(self, vertices, indexing):
                super(ModelObject, self).__init__()
                if not ModelObject.initDone:
                    glGenVertexArrays(1, ModelObject.vao)
                    glGenBuffers(1, ModelObject.vbo)
                    glGenBuffers(1, ModelObject.eao)
                    glBindVertexArray(ModelObject.vao)
                    initDone = True
                self.numIndices = len(indexing)
                self.offsetIntoVerticesPool = len(ModelObject.verticesPool)
                ModelObject.verticesPool.extend(vertices)
                self.offsetIntoElementArray = len(ModelObject.indexPool)
                ModelObject.indexPool.extend(indexing)
                glBindBuffer(GL_ARRAY_BUFFER, ModelObject.vbo)
                glEnableVertexAttribArray(0) #position
                glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, 0)
                glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ModelObject.eao)
                glBufferData(GL_ARRAY_BUFFER, len(ModelObject.verticesPool)*4, toGLArray(ModelObject.verticesPool), GL_STREAM_DRAW)
                glBufferData(GL_ELEMENT_ARRAY_BUFFER, len(ModelObject.indexPool)*2, toGLushortArray(ModelObject.indexPool), GL_STREAM_DRAW)

            def draw(self):
                glDrawElements(GL_TRIANGLES, self.numIndices, GL_UNSIGNED_SHORT, self.offsetIntoElementArray)

        class PositionedObject(object):
            def __init__(self, mesh, pos, objOffsetUf):
                super(PositionedObject, self).__init__()
                self.mesh = mesh
                self.pos = pos
                self.objOffsetUf = objOffsetUf

            def draw(self):
                glUniform3f(self.objOffsetUf, self.pos[0], self.pos[1], self.pos[2])
                self.mesh.draw()

        w = 800
        h = 600
        AR = float(h)/float(w)
        window = pyglet.window.Window(width=w, height=h, vsync=False)
        window.set_exclusive_mouse(True)
        pyglet.clock.set_fps_limit(None)

        ## input
        forward = [False]
        left = [False]
        back = [False]
        right = [False]
        up = [False]
        down = [False]
        inputs = {key.Z: forward, key.Q: left, key.S: back, key.D: right,
                  key.UP: forward, key.LEFT: left, key.DOWN: back, key.RIGHT: right,
                  key.PAGEUP: up, key.PAGEDOWN: down}

        ## camera
        camX = 0.0
        camY = 0.0
        camZ = -1.0

        def simulate(delta):
            global camZ, camX, camY
            scale = 10.0
            move = scale*delta
            if forward[0]: camZ += move
            if back[0]: camZ += -move
            if left[0]: camX += move
            if right[0]: camX += -move
            if up[0]: camY += move
            if down[0]: camY += -move

        pyglet.clock.schedule(simulate)

        @window.event
        def on_key_press(symbol, modifiers):
            global forward, back, left, right, up, down
            if symbol in inputs.keys():
                inputs[symbol][0] = True

        @window.event
        def on_key_release(symbol, modifiers):
            global forward, back, left, right, up, down
            if symbol in inputs.keys():
                inputs[symbol][0] = False

        ## uniforms for shaders
        camOffsetUf = GLuint()
        objOffsetUf = GLuint()
        perspectiveMatrixUf = GLuint()
        camRotationUf = GLuint()

        program = ShaderProgram(
            VertexShader('''
            #version 330
            layout(location = 0) in vec4 objCoord;
            uniform vec3 objOffset;
            uniform vec3 cameraOffset;
            uniform mat4 perspMx;
            void main()
            {
                mat4 translateCamera = mat4(1.0f, 0.0f, 0.0f, 0.0f,
                                            0.0f, 1.0f, 0.0f, 0.0f,
                                            0.0f, 0.0f, 1.0f, 0.0f,
                                            cameraOffset.x, cameraOffset.y, cameraOffset.z, 1.0f);
                mat4 translateObject = mat4(1.0f, 0.0f, 0.0f, 0.0f,
                                            0.0f, 1.0f, 0.0f, 0.0f,
                                            0.0f, 0.0f, 1.0f, 0.0f,
                                            objOffset.x, objOffset.y, objOffset.z, 1.0f);
                vec4 modelCoord = objCoord;
                vec4 positionedModel = translateObject*modelCoord;
                vec4 cameraPos = translateCamera*positionedModel;
                gl_Position = perspMx * cameraPos;
            }'''),
            FragmentShader('''
            #version 330
            out vec4 outputColor;
            const vec4 fillColor = vec4(1.0f, 1.0f, 1.0f, 1.0f);
            void main()
            {
                outputColor = fillColor;
            }''')
        )

        shapes = []

        def init():
            global camOffsetUf, objOffsetUf
            with program:
                camOffsetUf = glGetUniformLocation(program.id, "cameraOffset")
                objOffsetUf = glGetUniformLocation(program.id, "objOffset")
                perspectiveMatrixUf = glGetUniformLocation(program.id, "perspMx")
                glUniformMatrix4fv(perspectiveMatrixUf, 1, GL_FALSE, toGLArray(initPerspectiveMatrix(AR)))
                obj = ModelObject(vertexData, elementArray)
                nb = 20
                for i in range(nb):
                    for j in range(nb):
                        for k in range(nb):
                            shapes.append(PositionedObject(obj, (float(i*2), float(j*2), float(k*2)), objOffsetUf))
            glEnable(GL_CULL_FACE)
            glCullFace(GL_BACK)
            glFrontFace(GL_CW)
            glEnable(GL_DEPTH_TEST)
            glDepthMask(GL_TRUE)
            glDepthFunc(GL_LEQUAL)
            glDepthRange(0.0, 1.0)
            glClearDepth(1.0)

        def update(dt):
            print pyglet.clock.get_fps()

        pyglet.clock.schedule_interval(update, 1.0)

        @window.event
        def on_draw():
            with program:
                pyglet.clock.tick()
                glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT)
                glUniform3f(camOffsetUf, camX, camY, camZ)
                for shape in shapes:
                    shape.draw()

        init()
        pyglet.app.run()
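    For reference, the standard way to collapse the per-cube glUniform3f + glDrawElements pair into a single call is instanced drawing (a C-style sketch assuming GL 3.1+ or the ARB equivalents; pyglet exposes the same entry points). The per-cube offsets move into a per-instance vertex attribute:

        // Sketch: draw numCubes cubes in one call. Assumes the cube VAO is
        // bound and the vertex shader declares
        //     layout(location = 1) in vec3 instanceOffset;
        // and adds instanceOffset to the vertex position.
        GLuint offsetVbo;
        glGenBuffers(1, &offsetVbo);
        glBindBuffer(GL_ARRAY_BUFFER, offsetVbo);
        glBufferData(GL_ARRAY_BUFFER, numCubes * 3 * sizeof(GLfloat),
                     offsets, GL_STATIC_DRAW);   // offsets: x,y,z per cube
        glEnableVertexAttribArray(1);
        glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, 0);
        glVertexAttribDivisor(1, 1);             // advance per instance, not per vertex

        // One draw call replaces the whole per-cube loop.
        glDrawElementsInstanced(GL_TRIANGLES, numIndices, GL_UNSIGNED_SHORT,
                                0, numCubes);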

    Read the article

  • OpenGL Motion blur with the accumulation buffer in wxWidgets

    - by Klaus
    Hello, I'm trying to achieve a motion blur effect in my OpenGL application. I read somewhere this solution, using the accumulation buffer at the end of the render loop:

        glAccum(GL_MULT, 0.90);
        glAccum(GL_ACCUM, 0.10);
        glAccum(GL_RETURN, 1.0);
        glFlush();

    But nothing happens... What am I missing? Additions after genpfault's answer: indeed, I did not ask for an accumulation buffer when I initialized my context. So I tried to pass an array of attributes to the constructor of my wxGLCanvas, as described at http://docs.wxwidgets.org/2.6/wx_wxglcanvas.html:

        int attribList[] = { WX_GL_RGBA, WX_GL_DOUBLEBUFFER,
                             WX_GL_MIN_ACCUM_RED, WX_GL_MIN_ACCUM_GREEN,
                             WX_GL_MIN_ACCUM_BLUE, 0 };

    But all I get is a friendly seg fault. Does someone understand how to use this? (There are no problems with int attribList[] = { WX_GL_RGBA, WX_GL_DOUBLEBUFFER, 0 }.)
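    One likely culprit (an assumption based on the wxGLCanvas documentation rather than a tested fix): the WX_GL_MIN_ACCUM_* entries are key/value pairs, so each key must be followed by a bit depth; listing the keys back to back, as above, makes the parser treat the next key as a value and can run off the end of the list. A sketch:

        // Sketch: each WX_GL_MIN_ACCUM_* key expects a value after it.
        int attribList[] = {
            WX_GL_RGBA,
            WX_GL_DOUBLEBUFFER,
            WX_GL_MIN_ACCUM_RED,   8,   // at least 8 accumulation bits per channel
            WX_GL_MIN_ACCUM_GREEN, 8,
            WX_GL_MIN_ACCUM_BLUE,  8,
            0                           // terminator
        };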

    Read the article

  • OpenGL billboard interpolation issue

    - by PeanutPower
    I have a billboard quad with a texture mapped onto it. This is basically some text with transparency. The billboard floats forwards and backwards from the camera's perspective. As the billboard moves away (and appears smaller) there is a flickering effect around the edges of the text, where there is a stroke border on the actual texture. I think this is because interpolation is needed: the image, which is normally X pixels wide, is now shown at only a fraction of X, so several texels have to be merged into each pixel. I guess it's doing nearest-neighbour or something? Can anyone point me in the right direction for the OpenGL settings that control this? I'm guessing there is some way of preventing this effect by adjusting how the texture is sampled.
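    The shimmering described is typical of minifying a texture without mipmaps: with plain GL_LINEAR, only four texels are averaged no matter how small the billboard gets. A sketch of the usual remedy (glGenerateMipmap assumes GL 3.0+ or ARB_framebuffer_object; older code paths use gluBuild2DMipmaps instead):

        // Sketch: enable trilinear, mipmapped minification for the billboard
        // texture (billboardTex is a placeholder for the real texture id).
        glBindTexture(GL_TEXTURE_2D, billboardTex);
        glGenerateMipmap(GL_TEXTURE_2D);   // build the mip chain after uploading
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);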

    Read the article

  • Multilangual Unicode rendering in opengl

    - by sum1stolemyname
    Hi folks, I have to extend an OpenGL rendering system to support international characters (especially Hebrew, Arabic and Cyrillic). The development platform is Windows (XP|Vista|7), alas using Embarcadero Delphi 2010. I currently use wglOutLineFont(...) to build my font's display lists and glCallLists(length(m_Text), UNSIGNED_SHORT, PWchar(m_Text)) to render my strings. While this is feasible for Latin-1 characters, building the full Unicode character set in advance is pretty time-consuming (about 8.5 minutes on my machine), so I am looking for a more efficient solution. I thought about limiting the range to u+0020 - u+077f (Latin, Greek, Cyrillic, Arabic and Hebrew) to include just the glyphs I need, but that would only be a solution for my current needs and would become insufficient once other encodings are needed. On the upside, I do not have to worry about left-to-right or right-to-left direction, as our application can handle this already. I would expect this to be a well-known problem, so I would like to ask if there is any reference material on this on the web, or if you could share some insight?
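    One workable design (a sketch of on-demand glyph caching in C-style Win32/OpenGL, offered as an assumption rather than a Delphi-specific answer) is to build each glyph's display list the first time it is needed and cache it, so startup never pays for the whole Unicode range:

        #include <windows.h>
        #include <GL/gl.h>
        #include <unordered_map>

        // Sketch: lazily created, cached outline glyphs.
        std::unordered_map<wchar_t, GLuint> glyphCache;

        GLuint getGlyphList(HDC dc, wchar_t ch)
        {
            auto it = glyphCache.find(ch);
            if (it != glyphCache.end())
                return it->second;              // already built

            GLuint list = glGenLists(1);
            GLYPHMETRICSFLOAT gmf;              // receives advance/extents for ch
            // Build only this glyph's outline; wglUseFontOutlinesW takes a
            // starting code point and a count, so count = 1 here.
            wglUseFontOutlinesW(dc, ch, 1, list, 0.0f, 0.1f,
                                WGL_FONT_POLYGONS, &gmf);
            glyphCache[ch] = list;
            return list;
        }

        // Rendering a string then becomes a loop of glCallList(getGlyphList(dc, ch))
        // instead of one glCallLists over a prebuilt range.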

    Read the article

  • OpenGL coordinates question

    - by Chonch
    Hey, I have a simple OpenGL drawing. When the user changes the window's size, I want the drawing to maintain its aspect ratio. I accomplished that by setting the glViewport to the maximum rectangle with the appropriate aspect ratio whenever the reshape method is called. My problem is that I want to draw a square that always remains in the top right corner of the window, no matter what the size or shape of the window is. Right now, that square moves around the screen whenever the window is reshaped. Can anyone please explain how to do this? Thank you,
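    One standard approach (a sketch with an arbitrary 64-pixel size, not the asker's code): after drawing the scene, switch to a small viewport pinned to the window's top right corner and draw the square under a plain orthographic projection, so it is unaffected by the letterboxed scene viewport:

        void drawCornerSquare(int windowW, int windowH)
        {
            const int size = 64;  // square size in pixels, chosen arbitrarily
            glViewport(windowW - size, windowH - size, size, size);

            glMatrixMode(GL_PROJECTION);
            glPushMatrix();
            glLoadIdentity();
            glOrtho(0.0, 1.0, 0.0, 1.0, -1.0, 1.0);  // unit square fills viewport

            glMatrixMode(GL_MODELVIEW);
            glPushMatrix();
            glLoadIdentity();
            glBegin(GL_QUADS);
            glVertex2f(0.0f, 0.0f); glVertex2f(1.0f, 0.0f);
            glVertex2f(1.0f, 1.0f); glVertex2f(0.0f, 1.0f);
            glEnd();
            glPopMatrix();

            glMatrixMode(GL_PROJECTION);
            glPopMatrix();
            glMatrixMode(GL_MODELVIEW);
            // restore the scene's aspect-correct viewport before the next frame
        }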

    Read the article

  • My OpenGL game switches Aero DWM Glass off.

    - by marc40000
    Hi! I wrote a free game a few years ago: http://www.walkover.org. For the lobby and menus it uses normal win32 dialogs. When the actual game starts, it uses OpenGL. Now, on Windows 7, when the actual game starts, it switches Windows Aero Glass off, and switches it back on when the game is over. Is there something I can do to prevent this from happening? Some special flags that keep the glass on if it is on? (For newer projects I have been using DirectX, and this doesn't happen there.) Maybe some (new) flag I have to specify somewhere? Thanks, Marc

    Read the article

  • OpenGL Color Interpolation across vertices

    - by gutsblow
    Right now, I have more than 25 vertices that form a model. I want to interpolate color linearly between the first and last vertex. The problem is that when I write the following code:

        glColor3f(1.0, 0.0, 0.0);
        glVertex3f(1.0, 1.0, 1.0);
        glVertex3f(0.9, 1.0, 1.0);
        // ... <more vertices> ...
        glColor3f(0.0, 0.0, 1.0);
        glVertex3f(0.0, 0.0, 0.0);

    all the vertices except the last one are red. Now I am wondering if there is a way to interpolate color across these vertices without having to manually compute the interpolated color at each vertex (the way OpenGL does it automatically within a primitive), since I will have many more colors at various vertices. Any help would be extremely appreciated. Thank you!
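    Since OpenGL interpolates only across a single primitive, and each vertex is stamped with whatever the current color is, intermediate vertices have to be given explicitly interpolated colors; a small helper sketch (endpoint colors red and blue, matching the snippet above):

        // Sketch: manually blend from red to blue across n vertices.
        void setLerpedColor(int i, int n)   // i = vertex index, n = vertex count
        {
            float t = (n > 1) ? (float)i / (float)(n - 1) : 0.0f;
            glColor3f(1.0f - t, 0.0f, t);   // red at i == 0, blue at i == n-1
        }

        // usage while emitting the model:
        //   for (int i = 0; i < n; ++i) { setLerpedColor(i, n); glVertex3fv(v[i]); }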

    Read the article

  • Quartz 2D or OpenGL ES? Pros and cons in the long term, possibility of migration to other platforms.

    - by fspirit
    Hi all! I'm having a hard time deciding whether to go with Quartz 2D or OpenGL for an iPad game. It will be mostly 2D, but effect-intense (simultaneous lighting effects for 10-30 objects, 10-20 simultaneous animations on the screen). So far, assuming I'm equally dumb in both technologies and would have to learn them from the ground up, I have come to this list. (I've read several topics here on SO with names like "Quartz or OpenGL", but I'm still left with some questions.)

    Quartz: better time-to-market, because of ready-to-use abstractions like UIView, UIImageView, and the Core Animation abstractions.

    OpenGL ES: closer to the hardware, so performance is better; an app implemented with OpenGL ES can be migrated more easily to Android, MeeGo, Windows Phone, etc.

    My questions are:
    1. How much time would it take to rewrite a Quartz 2D app to use OpenGL? Let's say it took me two man-months to write the Quartz app; how much time would I need to rewrite it? (Please, just some subjective opinions; I'll try to summarize them somehow.)
    2. Regarding the ease of migration to other platforms when using OpenGL, is it really so? Or would the effort of migrating a Quartz app from iPhone OS to Android really be so much bigger than migrating an OpenGL app? (Ease of migration is quite an important criterion.)
    3. Regarding OpenGL, should I go with OpenGL ES 1.1 or 2.0, concerning migration? (Android supports 2.0 through the NDK, but I don't know whether using the NDK will increase or decrease the migration effort.)

    Read the article

  • Simple Android OpenGL App Lag

    - by Eugene
    Hi, I have an Android OpenGL application which simply draws 2D squares (using 2 triangles) and animates them moving down the screen. I essentially do this by running: glLoadIdentity(), then glTranslatef, and finally glDrawElements all in a for loop. (The for loop is to draw all 10 blocks on the screen for every frame). In every drawFrame, the y-position of the blocks increments for the animation. The problem I'm having is strange. I run the application and the animation is laggy and not smooth. Then I re-run the application and I get a smooth animation. If I run again, I may get a smooth animation, or possibly not. Is my method correct, or is there a better way of doing this animation? Thanks for the help!
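    Independent of the cause of the stutter, one common mitigation (a sketch in C-style GL ES 1.x calls with placeholder names such as nowMillis, blockX, blockY and indices, rather than the asker's Java code) is to advance the animation by elapsed time instead of a fixed step per frame, so an occasional slow frame does not desynchronize the motion:

        // Sketch: time-based movement so a slow frame produces a larger step
        // instead of a visible hitch.
        float speedPerSec = 120.0f;        // world units per second, arbitrary
        long  lastMs      = nowMillis();   // nowMillis(): placeholder clock helper

        void drawFrame()
        {
            long  ms = nowMillis();
            float dt = (ms - lastMs) / 1000.0f;   // seconds since last frame
            lastMs = ms;

            for (int i = 0; i < numBlocks; ++i) {
                blockY[i] -= speedPerSec * dt;    // move down by elapsed time
                glLoadIdentity();
                glTranslatef(blockX[i], blockY[i], 0.0f);
                glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, indices);
            }
        }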

    Read the article

  • Is there any graphics library in a higher level than OpenGL

    - by Turtle
    Hello, I am looking for a graphics library for 3D reconstruction research, to develop my own viewer on top of it. OpenGL seems too low-level, and I would have to reinvent the wheel everywhere. I also tried VTK (the Visualization Toolkit), but it seems too abstract: I would need to master many concepts before I could start. Is there any other graphics library? I prefer to program in Python, so I would like the library to have a Python wrapper. I think something like O3D would be better, but O3D is for JavaScript, and it seems that Google has already stopped its development.

    Read the article

  • OpenGL textures trigger error 1281 and strange background behavior

    - by user3714670
    I am using SOIL to apply textures to VBOs. Without textures I could change the background and display black (default color) VBOs easily, but now, with textures, OpenGL is giving error 1281, the background is black and some textures are not applied. There must be something I didn't understand about applying/loading the textures. But the texture IS applied (nothing else is working, though); the error arises when I try to use the shader program. I checked the compilation of the shaders, however, and no problems were reported. Here is the code I use to load textures; once loaded, a texture is kept in memory. It mostly comes from the SOIL example:

        texture = SOIL_load_OGL_single_cubemap(
            filename,
            SOIL_DDS_CUBEMAP_FACE_ORDER,
            SOIL_LOAD_AUTO,
            SOIL_CREATE_NEW_ID,
            SOIL_FLAG_POWER_OF_TWO | SOIL_FLAG_MIPMAPS | SOIL_FLAG_DDS_LOAD_DIRECT );
        if( texture > 0 )
        {
            glEnable( GL_TEXTURE_CUBE_MAP );
            glEnable( GL_TEXTURE_GEN_S );
            glEnable( GL_TEXTURE_GEN_T );
            glEnable( GL_TEXTURE_GEN_R );
            glTexGeni( GL_S, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP );
            glTexGeni( GL_T, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP );
            glTexGeni( GL_R, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP );
            glBindTexture( GL_TEXTURE_CUBE_MAP, texture );
            std::cout << "the loaded single cube map ID was " << texture << std::endl;
        }
        else
        {
            std::cout << "Attempting to load as a HDR texture" << std::endl;
            texture = SOIL_load_OGL_HDR_texture(
                filename,
                SOIL_HDR_RGBdivA2,
                0,
                SOIL_CREATE_NEW_ID,
                SOIL_FLAG_POWER_OF_TWO | SOIL_FLAG_MIPMAPS );
            if( texture < 1 )
            {
                std::cout << "Attempting to load as a simple 2D texture" << std::endl;
                texture = SOIL_load_OGL_texture(
                    filename,
                    SOIL_LOAD_AUTO,
                    SOIL_CREATE_NEW_ID,
                    SOIL_FLAG_POWER_OF_TWO | SOIL_FLAG_MIPMAPS | SOIL_FLAG_DDS_LOAD_DIRECT );
            }
            if( texture > 0 )
            {
                // enable texturing
                glEnable( GL_TEXTURE_2D );
                // bind an OpenGL texture ID
                glBindTexture( GL_TEXTURE_2D, texture );
                std::cout << "the loaded texture ID was " << texture << std::endl;
            }
            else
            {
                glDisable( GL_TEXTURE_2D );
                std::cout << "Texture loading failed: '" << SOIL_last_result() << "'" << std::endl;
            }
        }

    and how I apply it when drawing:

        GLuint TextureID = glGetUniformLocation(shaderProgram, "myTextureSampler");
        if(!TextureID)
            cout << "TextureID not found ..." << endl;
        // glEnableVertexAttribArray(TextureID);
        glActiveTexture(GL_TEXTURE0);
        if(SFML)
            sf::Texture::bind(sfml_texture);
        else
        {
            glBindTexture (GL_TEXTURE_2D, texture);
            // glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 1024, 768, 0, GL_RGB, GL_UNSIGNED_BYTE, &texture);
        }
        glUniform1i(TextureID, 0);

    I am not sure that SOIL is suited to my program, as I want something as simple as possible (I used SFML's texture object, which was the best, but I can't anymore), but if I can get it to work it would be great.

    EDIT: After narrowing down the code implied by the error, here is the code that provokes it; it is called between texture loading and VBO drawing:

        glEnableClientState(GL_VERTEX_ARRAY);
        // this gives the error:
        glUseProgram(this->shaderProgram);
        if (!shaderLoaded)
        {
            std::cout << "Loading default shaders" << std::endl;
            if(textured)
                loadShaderProgramm(texture_vertexSource, texture_fragmentSource);
            else
                loadShaderProgramm(default_vertexSource, default_fragmentSource);
        }
        glm::mat4 Projection = camera->getPerspective();
        glm::mat4 View = camera->getView();
        glm::mat4 Model = glm::mat4(1.0f);
        Model[0][0] *= scale_x;
        Model[1][1] *= scale_y;
        Model[2][2] *= scale_z;
        glm::vec3 translate_vec(this->x, this->y, this->z);
        glm::mat4 object_transform = glm::translate(glm::mat4(1.0f), translate_vec);
        glm::quat rotation = QAccumulative.getQuat();
        glm::mat4 matrix_rotation = glm::mat4_cast(rotation);
        object_transform *= matrix_rotation;
        Model *= object_transform;
        glm::mat4 MVP = Projection * View * Model;
        GLuint ModelID = glGetUniformLocation(this->shaderProgram, "M");
        if(ModelID == -1) cout << "ModelID not found ..." << endl;
        GLuint MatrixID = glGetUniformLocation(this->shaderProgram, "MVP");
        if(MatrixID == -1) cout << "MatrixID not found ..." << endl;
        GLuint ViewID = glGetUniformLocation(this->shaderProgram, "V");
        if(ViewID == -1) cout << "ViewID not found ..." << endl;
        glUniformMatrix4fv(MatrixID, 1, GL_FALSE, &MVP[0][0]);
        glUniformMatrix4fv(ModelID, 1, GL_FALSE, &Model[0][0]);
        glUniformMatrix4fv(ViewID, 1, GL_FALSE, &View[0][0]);
        drawObject();
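    For what it's worth, error 1281 is 0x0501, i.e. GL_INVALID_VALUE, and in the code above glUseProgram runs before the default shaders are loaded, so on the first frame it may be handed a program name that was never created; that is a plausible (though unconfirmed) source of exactly this error. A small helper sketch for pinning down which call raises it:

        #include <cstdio>

        // Sketch: call after each suspect GL call to localize the error.
        void checkGl(const char* where)
        {
            GLenum err = glGetError();
            if (err != GL_NO_ERROR)
                std::fprintf(stderr, "GL error 0x%04X after %s\n", err, where);
        }

        // usage:
        //   glUseProgram(shaderProgram); checkGl("glUseProgram");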

    Read the article

  • OpenGL + cgFX Alpha Blending failure

    - by dopplex
    I have a shader that needs to blend additively into its output render target. While it had been fully implemented and working, I recently refactored and have done something that is causing the alpha blending to not work anymore. I'm pretty sure that the problem is somewhere in my calls to either OpenGL or cgfx, but I'm currently at a loss for where exactly it is, as everything looks like it is set up properly for alpha blending to occur. No OpenGL or cg framework errors are showing up, either. For some context, what I'm doing here is taking a buffer which contains screen position and luminance values for each pixel, copying it to a PBO, and using it as the vertex buffer for drawing GL_POINTS. Everything except the alpha blending appears to be working as expected. I've confirmed both that the input vertex buffer has the correct values, and that my vertex and fragment shaders output the points to the correct locations and with the correct luminance values. The way I arrived at the conclusion that the alpha blending was broken is by making my vertex shader output every point to the same screen location and then setting the pixel shader to always output a value of float4(0.5) for that pixel. Invariably, the end color (dumped afterwards) ends up being float4(0.5). The confusing part is that, as far as I can tell, everything is properly set for alpha blending to occur. The cgfx pass has the two following state assignments (among others; the full listing is at the end):

        BlendEnable = true;
        BlendFunc = int2(One, One);

    This ought to be enough, since I am calling cgSetPassState(), and indeed, when I use glGet to check the values of GL_BLEND_SRC, GL_BLEND_DST, GL_BLEND, and GL_BLEND_EQUATION, they all look appropriate (GL_ONE, GL_ONE, GL_TRUE, and GL_FUNC_ADD). This check was done immediately after the draw call. I've been looking around to see if there's anything other than blending being enabled and the blend function being set correctly that would cause alpha blending not to occur, but without any luck. I considered that I could be doing something wrong with GL, but GL is telling me that blending is enabled. I doubt it's cgFX-related (as otherwise the GL state wouldn't even show it as enabled), but it still fails if I explicitly use GL calls to set the blend mode and enable it.
    Here's the trimmed down code for starting the cgfx pass and the draw call:

        CGtechnique renderTechnique = Filter->curTechnique;
        TEXUNITCHECK;
        CGpass pass = cgGetFirstPass(renderTechnique);
        TEXUNITCHECK;
        while (pass)
        {
            cgSetPassState(pass);
            cgUpdatePassParameters(pass);
            //drawFSPointQuadBuff((void*)PointQuad);
            drawFSPointQuadBuff((void*)LumPointBuffer);
            TEXUNITCHECK;
            cgResetPassState(pass);
            pass = cgGetNextPass(pass);
        };

    and the function with the draw call:

        void drawFSPointQuadBuff(void* args)
        {
            PointBuffer* pointBuffer = (PointBuffer*)args;
            FBOERRCHECK;
            glClear(GL_COLOR_BUFFER_BIT);
            GLERRCHECK;
            glPointSize(1.0);
            GLERRCHECK;
            glEnableClientState(GL_VERTEX_ARRAY);
            GLERRCHECK;
            glEnable(GL_POINT_SMOOTH);
            if (pointBuffer->BufferObject)
            {
                glBindBufferARB(GL_ARRAY_BUFFER_ARB, (unsigned int)pointBuffer->BufData);
                glVertexPointer(pointBuffer->numComp, GL_FLOAT, 0, 0);
            }
            else
            {
                glVertexPointer(pointBuffer->numComp, GL_FLOAT, 0, pointBuffer->BufData);
            };
            GLERRCHECK;
            glDrawArrays(GL_POINTS, 0, pointBuffer->numElem);

            GLboolean testBool;
            glGetBooleanv(GL_BLEND, &testBool);
            int iblendColor, iblendDest, iblendEquation, iblendSrc;
            glGetIntegerv(GL_BLEND_SRC, &iblendSrc);
            glGetIntegerv(GL_BLEND_DST, &iblendDest);
            glGetIntegerv(GL_BLEND_EQUATION, &iblendEquation);
            if (iblendEquation == GL_FUNC_ADD)
            {
                cerr << "Correct func" << endl;
            };
            GLERRCHECK;
            if (pointBuffer->BufferObject)
            {
                glBindBufferARB(GL_ARRAY_BUFFER_ARB, 0);
            }
            GLERRCHECK;
            glDisableClientState(GL_VERTEX_ARRAY);
            GLERRCHECK;
        };

    Finally, here is the full state setting of the shader:

        AlphaTestEnable = false;
        DepthTestEnable = false;
        DepthMask = false;
        ColorMask = true;
        CullFaceEnable = false;
        BlendEnable = true;
        BlendFunc = int2(One, One);
        FragmentProgram = compile glslf std_PS();
        VertexProgram = compile glslv bilatGridVS2();
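    When the queried state and the observed output disagree like this, a minimal readback test (a generic debugging sketch, not code from the question) can settle whether the bound framebuffer is actually blending:

        // Sketch: draw the same 0.5-gray point twice with additive blending,
        // then read the pixel back; ~1.0 means blending worked, 0.5 means the
        // second write simply replaced the first.
        glDisable(GL_LIGHTING);   // so glColor reaches the framebuffer untouched
        glEnable(GL_BLEND);
        glBlendFunc(GL_ONE, GL_ONE);
        glColor4f(0.5f, 0.5f, 0.5f, 0.5f);
        glBegin(GL_POINTS);
        glVertex2f(0.0f, 0.0f);
        glVertex2f(0.0f, 0.0f);
        glEnd();

        float px[4] = {0};
        glReadPixels(winW / 2, winH / 2, 1, 1, GL_RGBA, GL_FLOAT, px);  // winW/winH: placeholders
        printf("r=%f g=%f b=%f a=%f\n", px[0], px[1], px[2], px[3]);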

    Read the article

  • Drawing bezier curve with limited subdivisions in OpenGL

    - by xEnOn
    Is it possible to tell OpenGL to draw a degree-4 Bézier curve (5 control points) with 10 subdivisions? I was trying this:

        point ctrlpts[] = {..., ..., ..., ...};
        glMap1f(GL_MAP1_VERTEX_3, 0, 1, 3, 5, ctrlpts);
        glBegin(GL_LINE_STRIP);
        for (i = 0; i <= 30; i++)
            glEvalCoord1f((GLfloat) i/30.0);
        glEnd();

    But this just draws the curve nicely. I am thinking that I want the algorithm inside the Bézier curve to draw only 10 subdivisions and then stop. The line should look a little faceted.
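    The subdivision count is not a property of the map itself; it is set by however many times the evaluator is invoked. Looping to 10 instead of 30 already gives the faceted curve, and the evaluator-mesh helpers do the same in two calls (a sketch; ctrlpts is assumed to be a GLfloat[5][3] array of the 5 control points):

        // Sketch: evaluate the degree-4 map with exactly 10 line segments.
        glMap1f(GL_MAP1_VERTEX_3, 0.0f, 1.0f, 3, 5, &ctrlpts[0][0]);
        glEnable(GL_MAP1_VERTEX_3);

        glMapGrid1f(10, 0.0f, 1.0f);   // 10 equal steps across [0, 1]
        glEvalMesh1(GL_LINE, 0, 10);   // strip through the 11 evaluated points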

    Read the article

  • matrix image processing in OpenGL CE

    - by iHorse
    I'm trying to create an image filter in OpenGL CE. Currently I am trying to create a series of 4x4 matrices and multiply them together, then use glColorMask and glColor4f to adjust the image accordingly. I've been able to integrate hue rotation, saturation, and brightness, but I am having trouble adding contrast. Thus far Google hasn't been too helpful. I've found a few matrices, but they don't seem to work. Do you guys have any ideas?
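    For the contrast part, the usual formulation is a scale about mid-gray, x' = c*(x - 0.5) + 0.5, which fits a 4x4 color matrix with the re-centering offset in the translation column; a sketch (written column-major to match OpenGL's convention, with c = 1 leaving the image unchanged):

        // Sketch: build a 4x4 color matrix for contrast c.
        void contrastMatrix(float c, float m[16])
        {
            float t = 0.5f * (1.0f - c);   // offset that re-centers around mid-gray
            const float cm[16] = {
                c, 0, 0, 0,
                0, c, 0, 0,
                0, 0, c, 0,
                t, t, t, 1,                // translation column (column-major)
            };
            for (int i = 0; i < 16; ++i) m[i] = cm[i];
        }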

    Read the article

  • Is OpenGL required for my iPhone game?

    - by Ben X Tan
    On an iPhone: suppose I am writing a game that has multiple levels, with multiple animations (image sequences), JPG and PNG (transparent) images, some full screen and some not, some looped and some played only once. What is the best way of doing it? Each level might have up to 10MB of images. Add to this music, and video (cut scenes). All 2D graphics, no 3D models. Is OpenGL required? Or can this be achieved with Quartz or Core Animation?

    Read the article

  • Optimizing an iphone app for 3G in landscape with opengl, camera, quartz

    - by Joey
    I have an iPhone app that basically uses the camera, an OpenGL layer, and UIViews (some drawing with Quartz). It runs OK on the 3GS, but on the 3G it is unusable. In particular, when I press a UIButton, it sometimes takes literally 10 seconds to register the press. Shark doesn't do me much good because it crashes when I try to profile even a tiny portion, and I've tried turning off some of the layers to see if they might be obvious contributors to the lag. I've noticed that turning off the camera really helps. I'm wondering if anyone has any familiarity with this and might suggest some likely causes. I had issues with extreme slowdown from running my app in landscape mode and using transforms, so I considered that those might be a cause, but I'm wondering if hoping for a 3G to run something with all of the above elements is just not realistic, considering the camera seems to cost a lot.

    Read the article

  • Easy framework for OpenGL Shaders in C/C++

    - by Nils
    I just wanted to try out some shaders on a flat image. It turns out that writing a C program which simply takes a picture as a texture and applies, say, a Gaussian blur to it as a fragment shader is not that easy: you have to initialize OpenGL, which is like 100 lines of code, then understand the GL buffers, etc. Also, to communicate with the windowing system one has to use GLUT, which is another framework. NVIDIA's FX Composer is nice for playing with shaders, but I would still like to have a simple C or C++ program which just applies a given fragment shader to an image and displays the result. Does anybody have an example, or is there a framework for this?
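    A minimal sketch of roughly what such a program needs (GLUT plus GLEW, error checking omitted; loadTexture and input.png are placeholders, and the fragment shader relies on the fixed-function vertex stage feeding gl_TexCoord[0]):

        #include <GL/glew.h>
        #include <GL/glut.h>

        GLuint prog, tex;

        void display()
        {
            glClear(GL_COLOR_BUFFER_BIT);
            glUseProgram(prog);
            glBindTexture(GL_TEXTURE_2D, tex);
            glBegin(GL_QUADS);                 // a fixed-function fullscreen quad
            glTexCoord2f(0, 0); glVertex2f(-1, -1);
            glTexCoord2f(1, 0); glVertex2f( 1, -1);
            glTexCoord2f(1, 1); glVertex2f( 1,  1);
            glTexCoord2f(0, 1); glVertex2f(-1,  1);
            glEnd();
            glutSwapBuffers();
        }

        int main(int argc, char** argv)
        {
            glutInit(&argc, argv);
            glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA);
            glutCreateWindow("shader test");
            glewInit();

            const char* fragSrc =
                "uniform sampler2D img;"       // sampler defaults to unit 0
                "void main() { gl_FragColor = texture2D(img, gl_TexCoord[0].st); }";
            GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
            glShaderSource(fs, 1, &fragSrc, 0);
            glCompileShader(fs);
            prog = glCreateProgram();
            glAttachShader(prog, fs);
            glLinkProgram(prog);

            tex = loadTexture("input.png");    // hypothetical helper, not shown
            glEnable(GL_TEXTURE_2D);

            glutDisplayFunc(display);
            glutMainLoop();
            return 0;
        }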

    Read the article

  • Having an issue with OpenGL 1.0 on the HP Slate 7

    - by Roy Coder
    I have an issue with the HP Slate 7 when trying to draw a line in my OpenGL draw method. It works on other devices, but on the HP Slate the green line is not drawn properly the way it is on the other devices. My code is:

        gl.glPushMatrix();
        gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
        gl.glVertexPointer(2, GL10.GL_FLOAT, 0, vertexFloatBuffer);
        gl.glColorMask(true, true, true, true);
        gl.glDepthMask(true);
        gl.glLineWidth(8.0f);
        setColor(gl);
        gl.glDrawArrays(GL10.GL_LINES, 0, fPoints.length / 2);
        gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
        gl.glPopMatrix();

    Can you suggest where I am going wrong or what I am missing?
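    Wide lines are an optional capability: drivers are only required to support width 1, so a width of 8.0 may be silently clamped on some GPUs, which would explain a device-specific difference. A sketch of checking the supported range (C-style GL ES 1.x, an assumption rather than the asker's Java bindings):

        // Sketch: query how wide this device will actually draw lines.
        GLfloat range[2] = {0.0f, 0.0f};
        glGetFloatv(GL_ALIASED_LINE_WIDTH_RANGE, range);
        printf("supported line width: %.1f .. %.1f\n", range[0], range[1]);
        // If range[1] < 8.0, an 8-pixel line has to be drawn as a thin
        // quad/triangle strip instead of a GL_LINES primitive.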

    Read the article
