Search Results

Search found 2515 results on 101 pages for 'opengl es2'.

  • Should iOS games use a Timer?

    - by ????
    No matter which framework we use to write a game (Core Graphics, Cocos2D, OpenGL ES), should a timer drive games that animate even when the user gives no input, such as after firing a missile and waiting to see whether the UFO is hit? I read that NSTimer might not fire until after the scheduled interval, and that CADisplayLink can also fire late; the difference is that CADisplayLink tells you how late it is, so you can move the object farther to compensate, which can make the object look like it skipped a frame. Must we use a timer? And if so, which is the best one to use?
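
    A minimal sketch of the compensation idea the question describes, in generic C++ (the timing source, NSTimer or CADisplayLink, is platform-specific and assumed): scale each movement by the measured frame time, so a late callback moves the object farther and the on-screen speed stays constant.

        #include <chrono>

        struct Missile {
            float x = 0.0f;
            float speed = 120.0f; // world units per second
        };

        // Assume some platform timer calls tick() roughly once per frame.
        void tick(Missile& m) {
            using clock = std::chrono::steady_clock;
            static clock::time_point last = clock::now();
            const clock::time_point now = clock::now();
            const float dt = std::chrono::duration<float>(now - last).count();
            last = now;
            m.x += m.speed * dt; // a late callback yields a larger dt, so the
                                 // missile still travels at 120 units/s
        }

    The visible downside is exactly the one the question mentions: compensating for a late frame looks like a skipped frame, but the alternative (fixed per-tick movement) makes the whole game slow down instead.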

  • Writing a basic shader for large input files

    - by Zoltan Varadi
    I started writing a shader for my iOS app, and instead of starting from scratch I used the tutorial here: http://www.raywenderlich.com/3664/opengl-es-2-0-for-iphone-tutorial I wrote an import function, first to import Wavefront .obj models. My problem is that I can't handle larger inputs (with a simple cube it was working). I realized that the indices array is an array of GLubyte values, which is an unsigned char, so as a result I can't have more than 256 indices. I modified it to GLuint, but then I only get a blank screen. What else needs to be modified? P.S.: the source can be downloaded from here: http://d1xzuxjlafny7l.cloudfront.net/downloads/HelloOpenGL.zip
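
    For what it's worth, a likely culprit for the blank screen, sketched under the assumption that the tutorial's glDrawElements call was left unchanged: the type argument must match the new index type. Note that OpenGL ES 2.0 only guarantees 8- and 16-bit indices (GL_UNSIGNED_INT needs the OES_element_index_uint extension), so GLushort is the portable upgrade from GLubyte:

        // GLushort allows up to 65536 distinct indices and works on all
        // ES 2.0 implementations, unlike GLuint/GL_UNSIGNED_INT.
        GLushort indices[] = { 0, 1, 2,  2, 3, 0 /* ... */ };

        // The type argument must change together with the array type;
        // leaving it as GL_UNSIGNED_BYTE makes GL read garbage indices.
        glDrawElements(GL_TRIANGLES,
                       sizeof(indices) / sizeof(indices[0]),
                       GL_UNSIGNED_SHORT,
                       indices); // or a byte offset if an element VBO is bound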

  • Using OpenCL to jiggle the pipe

    - by TOAOGG
    I've got the idea to use OpenCL to program a simple renderer. A clear point against it is that this approach won't benefit from the fixed functions of the hardware on the device (I think). Would it be useful to do this in OpenCL? Let's say we want to cull as early as possible, so we don't run many per-vertex operations. Is it correct that culling is done after the vertex shader? For static vertices that won't be affected by the shader, it could be interesting to cull them beforehand. Another idea would be a deferred renderer. So the main question is: would it make sense to program a renderer in OpenCL (the effort aside)? The resulting picture would be drawn in OpenGL.

  • TGA loader: reverse y-axis

    - by aVoX
    I've written a TGA image loader in Java which works perfectly for files created with GIMP, as long as they are saved with the origin option set to Top Left (note: TGA files are actually meant to be stored upside down, i.e. Bottom Left in GIMP). My problem is that I want my image loader to be capable of reading all the different kinds of TGA, so my question is: how do I flip the image upside down? Note that I store all image data in a one-dimensional byte array, because OpenGL (glTexImage2D, to be specific) requires it that way. Thanks in advance.
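
    A sketch of the in-place flip, in C++ rather than the asker's Java but with the same logic: swap row k with row (height - 1 - k), assuming tightly packed rows with no padding. The TGA image-descriptor byte records the origin (bit 5 set means top-left), so a loader can check it and flip only when needed.

        #include <algorithm>
        #include <vector>

        // Flip a flat, tightly packed image buffer vertically by swapping
        // whole rows; rowSize = width * bytesPerPixel.
        void flipVertically(unsigned char* pixels, int width, int height,
                            int bytesPerPixel) {
            const int rowSize = width * bytesPerPixel;
            std::vector<unsigned char> tmp(rowSize);
            for (int top = 0, bot = height - 1; top < bot; ++top, --bot) {
                unsigned char* a = pixels + top * rowSize;
                unsigned char* b = pixels + bot * rowSize;
                std::copy(a, a + rowSize, tmp.data()); // tmp   = row a
                std::copy(b, b + rowSize, a);          // row a = row b
                std::copy(tmp.begin(), tmp.end(), b);  // row b = tmp
            }
        }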

  • What are the semantics of glRotate and glTranslate's parameters?

    - by Zarkopafilis
    I have been trying to play with OpenGL after watching some tutorials, and I don't understand how the glTranslatef and glRotatef functions work. I believe a simple picture would help me. I understand that glTranslatef changes the position of the "camera" (but does it change the position at which the shapes are drawn?). However, I don't understand the rotation concept at all. If I do glRotatef(1, 0, 0, 1), it makes my quad spin around. If I just do glRotatef(1, 0, 0, 0), it makes the quad smaller (farther away), but if I try to rotate around the X or Y axis, I get a black screen. I don't understand the angle either. Help would be appreciated.
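
    For reference, glRotatef(angle, x, y, z) rotates everything drawn afterwards by angle degrees around the axis vector (x, y, z), and glTranslatef moves the drawing origin (which is equivalent to moving the camera the opposite way). An all-zero axis, as in glRotatef(1, 0, 0, 0), is degenerate, and rotating a quad drawn at z = 0 around X or Y can tip it edge-on or out of the view volume, which may explain the black screen. A minimal usage sketch:

        // Fixed-function transforms multiply onto the modelview matrix and
        // affect everything drawn after them.
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        glTranslatef(0.0f, 0.0f, -5.0f);    // push the scene 5 units away
        glRotatef(45.0f, 0.0f, 1.0f, 0.0f); // then 45 degrees around Y
        drawQuad();                         // hypothetical quad-drawing helper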

  • Sprite rotation

    - by Kipras
    I'm using OpenGL, and people suggest glRotate for sprite rotation, but I find that strange. My problem with it is that it rotates the whole matrix, which screws up my collision detection and so on. Imagine a sprite at position (100, 100) facing an obstacle at position (100, 200). I rotate the sprite away from the obstacle, and when I move it up the y axis, the sprite still intersects the obstacle, even though on screen it appears to be moving away from it. So I don't see a way of rotating a sprite without breaking all the collision detection, other than doing the mathematical operations on the image itself. Am I right, or am I missing something?
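
    One way out, sketched under the assumption that each sprite stores its own facing angle: glRotate only changes how the sprite is drawn, so movement (and therefore collision) has to use the same angle, e.g. by moving along a direction vector derived from it instead of straight up the y axis.

        #include <cmath>

        struct Sprite {
            float x, y;
            float angle; // radians; 0 = facing +y (assumed convention)
        };

        // Movement and collision use the angle...
        void moveForward(Sprite& s, float speed, float dt) {
            s.x += std::sin(s.angle) * speed * dt;
            s.y += std::cos(s.angle) * speed * dt;
        }

        // ...and rendering uses the same angle, scoped with push/pop so the
        // rotation doesn't leak into the rest of the scene.
        void render(const Sprite& s) {
            glPushMatrix();
            glTranslatef(s.x, s.y, 0.0f);
            glRotatef(s.angle * 180.0f / 3.14159265f, 0.0f, 0.0f, 1.0f);
            drawSpriteQuad(); // hypothetical helper drawing the textured quad
            glPopMatrix();
        }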

  • Vertex data: split into separate buffers or one structure?

    - by kiba2
    Is it better to have all vertex data in one structure, like this:

        class MyVertex {
            int x, y, z;
            int u, v;
            int normalx, normaly, normalz;
        };

    Or to have each component (position, normal, texture coordinates) in separate arrays/buffers? To me it always seemed logical to keep the data grouped together in one structure, because the components are the same for every instance of a shared vertex, and that seems to hold for things like character models (e.g. the normal should be an average of the adjacent face normals for smooth lighting). One case where this doesn't seem to work is other kinds of meshes, such as a cube, where the texture coordinates of each face may be the same but end up differing where the vertices are shared. Does everybody normally keep them separate? Won't this make them less space efficient if there needs to be an instance of texture coordinates and normals for each triangle vertex (they won't be indexed)? Can OpenGL even handle mixing indexed (for position) and non-indexed buffers in the same VBO?
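
    Both layouts work; the stride and offset arguments of glVertexAttribPointer are what express interleaving, as in this sketch (attribute locations 0-2 and the buffer handle vbo are assumptions). On the indexing question: core OpenGL has a single index buffer that indexes all attributes together, so per-attribute indices aren't available, and vertices whose texture coordinates differ must be duplicated.

        #include <cstddef> // offsetof

        struct MyVertex {
            float x, y, z;    // position
            float u, v;       // texture coordinates
            float nx, ny, nz; // normal
        };

        const GLsizei stride = sizeof(MyVertex); // bytes between vertices
        glBindBuffer(GL_ARRAY_BUFFER, vbo);      // vbo: previously filled buffer
        glEnableVertexAttribArray(0);
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, stride,
                              (void*)offsetof(MyVertex, x));
        glEnableVertexAttribArray(1);
        glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, stride,
                              (void*)offsetof(MyVertex, u));
        glEnableVertexAttribArray(2);
        glVertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, stride,
                              (void*)offsetof(MyVertex, nx));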

  • Do unused vertices in a 3D object affect performance?

    - by Gajet
    For my game I need to generate a mesh dynamically. Now I'm wondering: does it have a noticeable effect on FPS if I allocate more vertices than I'm actually using, or not? And does it matter whether I'm using DirectX or OpenGL? Edit: the final output will be a w*h cell grid, but for technical reasons it's much easier for me to allocate (w+1)*(h+1) vertices. Sure, I'll only use w*h vertices in the indexing, and I know there is some memory wasted there, but I want to know whether it also affects FPS. (Note that the mesh is only generated once each time you play the game.)

  • How many threads should an Android game use?

    - by kvance
    At minimum, an OpenGL Android game has a UI thread and a renderer thread created by GLSurfaceView. Renderer.onDrawFrame() should be doing a minimum of work to get the highest FPS. The physics, AI, etc. don't need to run every frame, so we can put those in another thread. Now we have:

        Renderer thread - update animations and draw polys
        Game thread     - logic, plus periodic physics, AI, etc. updates
        UI thread       - Android UI interaction only

    Since you don't ever want to block the UI thread, I run one more thread for the game logic. Maybe that's not necessary, though? Is there ever a reason to run game logic in the renderer thread?

  • Forward rendering and multiple shadow maps

    - by Irbis
    I have two light sources in my scene. I created two FBOs which store the depth textures for these lights. The render loop looks like this:

        bind fbo1, save depth values for the first light, unbind fbo1
        bind fbo2, save depth values for the second light, unbind fbo2
        enable additive blending
        bind the first depth texture, render the scene
        bind the second depth texture, render the scene
        disable additive blending

    For one light source the program works fine. For many light sources I use additive blending to accumulate the lighting results, but then some objects become transparent (for example, when an object which is farther from the camera is drawn before an object which is closer to it). How do I resolve that problem? How should I accumulate the lighting for many light sources (many shadow maps)? P.S. I use OpenGL/GLSL 3.3+.
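
    One standard fix, sketched assuming one full scene draw per light: let the first pass fill the depth buffer normally, then draw the extra light passes with depth writes off and glDepthFunc(GL_EQUAL), so only the frontmost surface accumulates light and farther objects can no longer blend over nearer ones.

        // Pass 1: first light, normal depth test, fills the depth buffer.
        glDisable(GL_BLEND);
        glDepthMask(GL_TRUE);
        glDepthFunc(GL_LESS);
        bindShadowMap(0);   // hypothetical helper binding light 0's depth texture
        renderScene();      // hypothetical full scene draw

        // Passes 2..n: additive blend, depth test restricted to equal depth.
        glEnable(GL_BLEND);
        glBlendFunc(GL_ONE, GL_ONE); // additive accumulation
        glDepthMask(GL_FALSE);       // keep pass-1 depth intact
        glDepthFunc(GL_EQUAL);       // only already-visible surfaces pass
        bindShadowMap(1);
        renderScene();

        // Restore default state for the rest of the frame.
        glDepthMask(GL_TRUE);
        glDepthFunc(GL_LESS);
        glDisable(GL_BLEND);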

  • How do you load in a .bmp file to use it as a height map in C++?

    - by user1324894
    I need to create terrain that can be used in C++ with OpenGL, and I am told that using a height map is the way to go. I have a .bmp file, a grayscale image, that I would like to use. However, I do not know how to load that grayscale image into the project so that it can be used as a height map, how to apply a texture to the result to create the terrain, or how to compute the normals that I'm told will be required. How would I go about achieving this?
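
    A sketch of the usual pipeline, assuming the single-header stb_image library (which can decode BMP) instead of a hand-written parser: load the grayscale pixels, then emit one vertex per pixel whose height scales with its brightness. Normals can afterwards be computed from the height differences of neighbouring cells.

        #define STB_IMAGE_IMPLEMENTATION
        #include "stb_image.h"
        #include <vector>

        struct Vertex3 { float x, y, z; };

        // Brightness 0..255 maps to height 0..maxHeight; grid spacing is 1.
        std::vector<Vertex3> loadHeightmap(const char* path, float maxHeight) {
            int w = 0, h = 0, channels = 0;
            unsigned char* px = stbi_load(path, &w, &h, &channels, 1); // force grayscale
            std::vector<Vertex3> verts;
            if (!px) return verts; // file missing or unreadable
            verts.reserve((size_t)w * h);
            for (int z = 0; z < h; ++z)
                for (int x = 0; x < w; ++x)
                    verts.push_back({ (float)x,
                                      px[z * w + x] / 255.0f * maxHeight,
                                      (float)z });
            stbi_image_free(px);
            return verts;
        }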

  • What Shading/Rendering techniques are being used in this image?

    - by Rhakiras
    My previous question wasn't clear enough. From a rendering point of view, what kinds of techniques are used in this image? I would like to apply a similar style (I'm using OpenGL, if that matters): http://alexcpeterson.com/ My specific questions are:

        How is that sun glare made?
        How does the planet get its "cartoon" look?
        How does the space around the planet look warped/misted?
        How does the water look that good?

    I'm a beginner, so any information/keywords on each question would be helpful so I can go off and learn more. Thanks.

  • Is this the correct way to use glTexCoordPointer?

    - by RubyKing
    Hey all, just trying to work out how to use the function glTexCoordPointer. Here is the man page: http://www.opengl.org/sdk/docs/man/xhtml/glTexCoordPointer.xml which states that I must set a pointer to the first element of the array that uses the texture coordinates. Here is my array:

        static const GLfloat GUIVertices[] = {
            // FIRST QUAD
            //  x      y      z     w     X     Y
             1.0f,  1.0f,  0.0f, 1.0f, 1.0f, 0.0f,
            -1.0f,  1.0f,  0.0f, 1.0f, 0.0f, 0.0f,
            -1.0f,  0.94f, 0.0f, 1.0f, 0.0f, 1.0f,
             1.0f,  0.94f, 0.0f, 1.0f, 1.0f, 1.0f,

            // 2ND QUAD
             1.0f, -1.0f,  0.0f, 1.0f, 1.0f, 0.0f,
            -1.0f, -1.0f,  0.0f, 1.0f, 0.0f, 0.0f,
            -1.0f, -0.94f, 0.0f, 1.0f, 0.0f, 1.0f,
             1.0f, -0.94f, 0.0f, 1.0f, 1.0f, 1.0f,
        };

    But how do I set the pointer correctly? Like this, for the fifth element of the 2nd quad's first row?

        glTexCoordPointer(1, GL_FLOAT, 6, reinterpret_cast<const GLvoid*>(29 * sizeof(float)));

    Any help is appreciated.
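
    For reference, a sketch of what the arguments usually need to be for this layout: the first argument is the number of components per texture coordinate (2 here, X and Y), the stride is in bytes, and the pointer targets the first texture coordinate, four floats into vertex 0 (when a VBO is bound, the same value is passed as a byte offset instead).

        // 6 floats per vertex: x, y, z, w, then texture X, Y.
        const GLsizei stride = 6 * sizeof(GLfloat); // stride is in BYTES

        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(4, GL_FLOAT, stride, GUIVertices);

        glEnableClientState(GL_TEXTURE_COORD_ARRAY);
        glTexCoordPointer(2, GL_FLOAT, stride,
                          GUIVertices + 4); // first X,Y pair of vertex 0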

  • Why does DirectX use a left-handed coordinate system?

    - by greyfade
    I considered posting on Stack Overflow, but the question strikes me as being far too subjective, since I can't think of a reasonable technical explanation for Microsoft's choice in this matter. But this question has bugged me for so long, and the issue keeps coming up in one of my projects, and I have never actually seen an attempt at explaining it: OpenGL uses a right-handed coordinate system, where the +Z part of the world coordinate system extends toward the viewer. DirectX uses a left-handed system, where the +Z part of the world coordinate system extends into the screen, away from the viewer. I never used the Glide API, so I don't know how it worked, but from what I can gather it uses a left-handed system as well. Is there a technical reason for this? And if not, is there some conceptual advantage to a particular handedness of a coordinate system? Why would one choose one over the other?

  • How to render a texture partly transparent?

    - by megamoustache
    Good morning Stack Overflow, I'm having a bit of a problem right now, as I can't seem to find a way to render part of a texture transparently with OpenGL. Here is my setup: I have a quad, representing a wall, covered with this texture (converted to PNG for uploading purposes). Obviously, I want the wall to be opaque, except for the panes of glass. There is another plane behind the wall which is supposed to show a landscape. I want to see the landscape from behind the window. Each texture is a TGA with an alpha channel. The "landscape" is rendered first, then the wall. I thought that would be sufficient to achieve the effect, but apparently it's not. The part of the window that is supposed to be transparent is black, and the landscape only appears when I move past the wall. I tried to fiddle with glBlendFunc() after enabling blending, but it doesn't seem to do the trick. Am I forgetting an important step? Thank you :)
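
    A sketch of the state that usually has to be in place for texture alpha to show through, with one common culprit for black windows called out: uploading the texture with an internal format of GL_RGB silently discards the alpha channel.

        // Blending must be on, with factors that actually use source alpha.
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

        // The upload must keep the alpha channel (GL_RGBA, not GL_RGB).
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, pixels);

        drawLandscape(); // hypothetical helpers: opaque background first,
        drawWall();      // then the blended wall, back to front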

  • Rotate an object given only by its points?

    - by d33tah
    I was recently writing a simple 3D maze FPP game. Once I was done fiddling with planes in OpenGL, I wanted to add support for importing Blender objects. The approach I used was triangulation of the object, then using Three.js to export the points to plain text, and then parsing the resulting JSON in my app. An example file can be seen here: https://github.com/d33tah/tinyfpp/blob/master/Data/Models/cross.txt The numbers represent the x, y, z, u, v of a single vertex; three of them combined make a triangle. I then rendered such an object triangle by triangle and played with it. I can move it back and forth and sideways, but I still have no idea how to rotate it around an axis. Let's say I'd like to rotate all the points by five degrees to the left; what would the code for that look like?
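
    A sketch of the usual math, assuming "to the left" means a rotation about the vertical (y) axis: run every vertex through a rotation matrix, written out here with sin/cos directly.

        #include <cmath>

        struct Vec3 { float x, y, z; };

        // Rotate a point around the y axis; flip the sign of `degrees` if it
        // turns the wrong way for your coordinate convention.
        Vec3 rotateY(Vec3 p, float degrees) {
            const float r = degrees * 3.14159265f / 180.0f;
            const float c = std::cos(r), s = std::sin(r);
            return { c * p.x + s * p.z, p.y, -s * p.x + c * p.z };
        }

        // Usage: rotate the whole model five degrees.
        // for (Vec3& v : vertices) v = rotateY(v, 5.0f);

    To rotate about the model's center rather than the world origin, translate the points so the center sits at the origin, rotate, then translate back.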

  • 2D game development dynamics in C++ [on hold]

    - by novice
    I am new to developing computer graphics applications in C++ using OpenGL. I want to develop games, but I am really facing problems when it comes to understanding concepts like trajectories, collisions, and gravity, as well as the use of the various physics engines that are available. When I search the internet, I get lost, because the material isn't for beginners like me: there is some hardcore mathematics, physics, and coding involved. I need to pick up the concepts that are most needed in game dev, like trajectory, collision, etc. Are there any good tutorials that can help me pick up these concepts from the start?

  • 3D primitive rendering library

    - by tomzx
    Hi, I am looking for a library which would easily allow me to render shapes (cubes, spheres, lines, circles, etc.) in D3D and, if possible, OpenGL. I want to be able to rapidly design visual debugging tools, and I am not proficient enough in graphics rendering to write the low-level parts myself. The library would have to be for C/C++. I've already taken a look at open-source 3D engines, but I feel those are too big for what I really need. Do any of you know if such a library exists? If so, links would be appreciated!

  • How to draw a 3D donut chart in OpenGL?

    - by paulbeesley
    I would like to draw a chart in OpenGL similar to the donut graph at the bottom right of this example. I have experience with drawing 2D charts, such as the main chart in the example, but what confuses me about the one I want to draw is the correct type of primitive to use for the 3D chart. I have considered GL_QUAD_STRIP and GL_POLYGON, but neither seems quite right for the task. Where should I begin? I will be using JOGL with Java to draw the chart, if that helps at all. Also, I don't necessarily need to extrude particular slices of the chart as shown in the example.
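
    A hedged sketch of one approach: GL_TRIANGLE_STRIP handles a donut slice naturally if the vertices alternate between the inner and outer radius along the arc. In JOGL the same calls are made on the GL2 object; plain C-style GL is shown here.

        #include <cmath>

        // One flat donut slice from startAngle to endAngle (radians),
        // alternating inner/outer rim vertices along the arc.
        void drawSlice(float inner, float outer,
                       float startAngle, float endAngle, int segments) {
            glBegin(GL_TRIANGLE_STRIP);
            for (int i = 0; i <= segments; ++i) {
                const float a = startAngle
                              + (endAngle - startAngle) * i / segments;
                const float c = std::cos(a), s = std::sin(a);
                glVertex2f(inner * c, inner * s);
                glVertex2f(outer * c, outer * s);
            }
            glEnd();
        }

    For the 3D look, the same strip can be emitted at two z values, with another strip closing the inner and outer walls.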

  • Android 2D: OpenGL or android.graphics?

    - by DroidIn.net
    I'm working with my friend on our first Android game. The basic idea is that every frame the whole surface is redrawn (one large bitmap), which is then sprinkled all over with a large number of particles to produce the effect of soapy bubbles; there's a pool of about 20 bitmaps which get picked randomly, to produce the illusion that all the bubbles (between 200 and 300) are different. The math engine is in C (JNI), and currently all drawing is done using the android.graphics package, very similar to Lunar Lander (since that was the example I was using). It works, but the animation is somewhat jerky, and I can feel by the temperature of my phone that it is very busy. Would we benefit from switching to OpenGL? And as a bonus question: what would be a good way to optimize the (Lunar Lander-like) drawing mechanism we have now?

  • iPhone - Bug using CADisplayLink and UIControls - bad to mix OpenGL and UIControls?

    - by Adam
    Having had problems using other methods, I've decided to stick with CADisplayLink to run my game loop. The animation is smooth now, but sometimes there's a problem where the buttons and other UI elements can't be used: they can't be accessed by touch or changed programmatically. This includes UIButtons and UILabels. Has anyone encountered this before? Is it in general not a good idea to use Interface Builder and UI controls on top of an OpenGL view? I've heard they don't play well together, but haven't heard the reasons. Thanks!

  • Is there a way to move two squares in OpenGL simultaneously?

    - by thyrgle
    Hi, so I have a function that handles key presses in a game I'm working on in OpenGL. But the thing is that even though I have made two squares, and both should move when the correct key is pressed, only one square moves. Is there a way I can make the two squares move? This is the glutKeyboardFunc callback I implemented:

        void handleKeypress(unsigned char key, int x, int y) {
            switch (key) {
                case 27:
                    exit(0);
                    break;
                case 'w':
                    glutTimerFunc(0.001, moveSquareUp, 0);
                    break;
                case 'd':
                    glutTimerFunc(0.001, moveSquareRight, 0);
                    break;
                case 's':
                    glutTimerFunc(0.001, moveSquareDown, 0);
                    break;
                case 'a':
                    glutTimerFunc(0.001, moveSquareLeft, 0);
                    break;
            }
        }

    If you need any more code, just ask.
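
    One hedged guess at the cause, with a sketch: if moveSquareUp and friends update a single shared pair of coordinates, only geometry drawn from those coordinates moves. Giving each square its own position and updating both in one callback makes them move together.

        struct Square { float x, y; };
        Square squares[2] = { { -0.5f, 0.0f }, { 0.5f, 0.0f } };

        // Timer callback moving BOTH squares. Note that glutTimerFunc's first
        // argument is milliseconds as an unsigned int, so 0.001 truncates to
        // 0; pass something like 16 for roughly 60 updates per second.
        void moveSquaresUp(int /*value*/) {
            for (Square& s : squares)
                s.y += 0.01f;
            glutPostRedisplay(); // redraw with the new positions
        }

        // in handleKeypress:
        //   case 'w': glutTimerFunc(16, moveSquaresUp, 0); break;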

  • Can a 3D OpenGL game written in Python look good and run fast?

    - by praavDa
    I am planning to write a simple 3D (isometric-view) game in Java using jMonkeyEngine; nothing too fancy, I just want to learn something about OpenGL and about writing efficient algorithms (random map-generating ones). While planning, I started wondering about switching to Python. I know that Python didn't come into existence as a tool for writing 3D games, but is it possible to write good-looking games in this language? I have in mind 3D graphics, nice effects, and enough free CPU time left over to power the rest of the game engine. I have seen good-looking Java games; to be honest, I was rather shocked when I saw the level of detail achieved in RuneScape HD. On the other hand, pygame.org has only 2D games, with a few 3D projects just starting out. Are there any efficient 3D game engines for Python? Is PyOpenGL the only alternative? Are good-looking games in Python simply unpopular, or not achievable? I would be grateful for any information/feedback.

  • OpenGL: How to draw a Bezier curve of degree higher than 8?

    - by maciekp
    I am trying to draw a high-order Bezier curve using OpenGL evaluators:

        glMap1f(GL_MAP1_VERTEX_3, 0.0, 1.0, 3, 30, &points[0][0]);
        glMapGrid1f(30, 0, 1);
        glEvalMesh1(GL_LINE, 0, 30);

    or:

        glBegin(GL_LINE_STRIP);
        for (int i = 0; i <= 30; i++)
            glEvalCoord1f((GLfloat) i / 30.0);
        glEnd();

    When the number of control points exceeds 8, the curve disappears. How can I draw a higher-order Bezier curve using evaluators?
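
    The likely limit: evaluator order is capped by GL_MAX_EVAL_ORDER, which implementations are only required to make at least 8, so a 30-point map can exceed it (glGetIntegerv(GL_MAX_EVAL_ORDER, ...) reports the actual cap). A common workaround, sketched below, is to evaluate the curve on the CPU with de Casteljau's algorithm, which works for any number of control points, and draw a plain line strip.

        #include <vector>

        struct Pt { float x, y, z; };

        // de Casteljau: repeatedly lerp adjacent control points until a
        // single point remains; that point is the curve at parameter t.
        Pt bezier(std::vector<Pt> ctrl, float t) {
            for (size_t n = ctrl.size(); n > 1; --n)
                for (size_t i = 0; i + 1 < n; ++i) {
                    ctrl[i].x += t * (ctrl[i + 1].x - ctrl[i].x);
                    ctrl[i].y += t * (ctrl[i + 1].y - ctrl[i].y);
                    ctrl[i].z += t * (ctrl[i + 1].z - ctrl[i].z);
                }
            return ctrl[0];
        }

        // Usage, replacing the evaluator-based loop:
        // glBegin(GL_LINE_STRIP);
        // for (int i = 0; i <= 30; i++) {
        //     Pt p = bezier(controlPoints, (float)i / 30.0f);
        //     glVertex3f(p.x, p.y, p.z);
        // }
        // glEnd();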
