Search Results

Search found 2515 results on 101 pages for 'opengl es2'.


  • How to find vector for the quaternion from X Y Z rotations

    - by can poyrazoglu
    I am creating a very simple project on OpenGL and I'm stuck with rotations. I am trying to rotate an object indepentdently in all 3 axes: X, Y, and Z. I've had sleepless nights due to the "gimbal lock" problem after rotating about one axis. I've then learned that quaternions would solve my problem. I've researched about quaternions and implementd it, but I havent't been able to convert my rotations to quaternions. For example, if I want to rotate around Z axis 90 degrees, I just create the {0,0,1} vector for my quaternion and rotate it around that axis 90 degrees using the code here: http://iphonedevelopment.blogspot.com/2009/06/opengl-es-from-ground-up-part-7_04.html (the most complicated matrix towards the bottom) That's ok for one vector, but, say, I first want to rotate 90 degrees around Z, then 90 degrees around X (just as an example). What vector do I need to pass in? How do I calculate that vector. I am not good with matrices and trigonometry (I know the basics and the general rules, but I'm just not a whiz) but I need to get this done. There are LOTS of tutorials about quaternions, but I seem to understand none (or they don't answer my question). I just need to learn to construct the vector for rotations around more than one axis combined. UPDATE: I've found this nice page about quaternions and decided to implement them this way: http://www.cprogramming.com/tutorial/3d/quaternions.html Here is my code for quaternion multiplication: void cube::quatmul(float* q1, float* q2, float* resultRef){ float w = q1[0]*q2[0] - q1[1]*q2[1] - q1[2]*q2[2] - q1[3]*q2[3]; float x = q1[0]*q2[1] + q1[1]*q2[0] + q1[2]*q2[3] - q1[3]*q2[2]; float y = q1[0]*q2[2] - q1[1]*q2[3] + q1[2]*q2[0] + q1[3]*q2[1]; float z = q1[0]*q2[3] + q1[1]*q2[2] - q1[2]*q2[1] + q1[3]*q2[0]; resultRef[0] = w; resultRef[1] = x; resultRef[2] = y; resultRef[3] = z; } Here is my code for applying a quaternion to my modelview matrix (I have a tmodelview variable that is my target modelview matrix): void cube::applyquat(){ float& x = quaternion[1]; float& y = quaternion[2]; float& z = quaternion[3]; float& w = quaternion[0]; float magnitude = sqrtf(w * w + x * x + y * y + z * z); if(magnitude == 0){ x = 1; w = y = z = 0; }else if(magnitude != 1){ x /= magnitude; y /= magnitude; z /= magnitude; w /= magnitude; } tmodelview[0] = 1 - (2 * y * y) - (2 * z * z); tmodelview[1] = 2 * x * y + 2 * w * z; tmodelview[2] = 2 * x * z - 2 * w * y; tmodelview[3] = 0; tmodelview[4] = 2 * x * y - 2 * w * z; tmodelview[5] = 1 - (2 * x * x) - (2 * z * z); tmodelview[6] = 2 * y * z - 2 * w * x; tmodelview[7] = 0; tmodelview[8] = 2 * x * z + 2 * w * y; tmodelview[9] = 2 * y * z + 2 * w * x; tmodelview[10] = 1 - (2 * x * x) - (2 * y * y); tmodelview[11] = 0; glMatrixMode(GL_MODELVIEW); glPushMatrix(); glLoadMatrixf(tmodelview); glMultMatrixf(modelview); glGetFloatv(GL_MODELVIEW_MATRIX, tmodelview); glPopMatrix(); } And my code for rotation (that I call externally), where quaternion is a class variable of the cube: void cube::rotatex(int angle){ float quat[4]; float ang = angle * PI / 180.0; quat[0] = cosf(ang / 2); quat[1] = sinf(ang/2); quat[2] = 0; quat[3] = 0; quatmul(quat, quaternion, quaternion); applyquat(); } void cube::rotatey(int angle){ float quat[4]; float ang = angle * PI / 180.0; quat[0] = cosf(ang / 2); quat[1] = 0; quat[2] = sinf(ang/2); quat[3] = 0; quatmul(quat, quaternion, quaternion); applyquat(); } void cube::rotatez(int angle){ float quat[4]; float ang = angle * PI / 180.0; quat[0] = cosf(ang / 2); quat[1] = 0; quat[2] = 0; 
    quat[3] = sinf(ang/2); quatmul(quat, quaternion, quaternion); applyquat(); } I call, say, rotatex 10-11 times with an angle of only 1 degree, but my cube ends up rotated by almost 90 degrees after those 10-11 calls, which doesn't make sense. Also, after calling the rotation functions on different axes, my cube gets skewed, becomes two-dimensional, and disappears (a column in the modelview matrix becomes all zeros) irreversibly, which obviously shouldn't happen with a correct implementation of quaternions.
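    A couple of things in the snippet above are worth checking: tmodelview[12] through tmodelview[15] are never written, so glMultMatrixf reads whatever garbage happens to be there, which alone can collapse the matrix as described, and the [6]/[9] pair appears to have its signs swapped relative to the [1]/[4] and [2]/[8] pairs. A minimal sketch of a full, column-major quaternion-to-matrix conversion (function and parameter names are illustrative, not from the original code):

    ```cpp
    // Sketch: build a complete 4x4 column-major rotation matrix from a
    // normalized quaternion q = {w, x, y, z}; all 16 entries are written.
    void quatToMatrix(const float q[4], float m[16]) {
        const float w = q[0], x = q[1], y = q[2], z = q[3];
        m[0] = 1 - 2*y*y - 2*z*z;  m[4] = 2*x*y - 2*w*z;      m[8]  = 2*x*z + 2*w*y;      m[12] = 0;
        m[1] = 2*x*y + 2*w*z;      m[5] = 1 - 2*x*x - 2*z*z;  m[9]  = 2*y*z - 2*w*x;      m[13] = 0;
        m[2] = 2*x*z - 2*w*y;      m[6] = 2*y*z + 2*w*x;      m[10] = 1 - 2*x*x - 2*y*y;  m[14] = 0;
        m[3] = 0;                  m[7] = 0;                  m[11] = 0;                  m[15] = 1;
    }
    ```

    Keeping one persistent, renormalized quaternion and rebuilding the modelview from it each frame, rather than re-multiplying the already-rotated modelview, also avoids the compounding that makes ten 1-degree calls look like 90 degrees.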

    Read the article

  • gluLookAt doesn't work

    - by Tyzak
    Hi, I'm programming with OpenGL and I want to change the camera view: ... void RenderScene() //drawing function { glClearColor( 1.0, 0.5, 0.0, 0 ); glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT ); glLoadIdentity (); //1st shape: glBegin( GL_POLYGON ); //polygon glColor3f( 1.0f, 0.0f, 0.0f ); //red glVertex3f( -0.5, -0.5, -0.5 ); //bottom left, 3 = 3 coords, f = float glColor3f( 0.0f, 0.0f, 1.0f ); //blue glVertex3f( 0.5, -0.5, -0.5 ); //bottom right glVertex3f( 0.5, 0.5, -0.5 );//top right glVertex3f( -0.5, 0.5, -0.5 );//top left glEnd(); Wuerfel(0.7); //creates cube with edge length 0.7 gluLookAt ( 0., 0.3, 1.0, 0., 0.7, 0., 0., 1., 0.); glFlush(); //flush the buffer } ... when I change the parameters of gluLookAt, nothing happens. What am I doing wrong? Thanks
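    One thing that stands out in the listing above: gluLookAt() only multiplies the current matrix, so calling it after the polygon and the cube have already been drawn cannot affect them. A minimal sketch of the usual ordering (same values as above, purely illustrative):

    ```cpp
    void RenderScene() {
        glClearColor(1.0f, 0.5f, 0.0f, 0.0f);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        gluLookAt(0.0, 0.3, 1.0,   // eye position
                  0.0, 0.7, 0.0,   // point being looked at
                  0.0, 1.0, 0.0);  // up vector

        // ...glBegin(GL_POLYGON)/glEnd() and Wuerfel(0.7) go here, after the camera is set...

        glFlush();
    }
    ```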

    Read the article

  • How to use Mesa3D on Mac OS X and Windows

    - by gutsblow
    Hello all, I need to use Mesa3D for a cross-platform application (Windows and Mac only) which uses only offline software rendering. The reason I want to use Mesa3D is that it has the same drawing calls as OpenGL and they are really easy. Now I know that Apple itself has a software implementation (which I hear is flaky), but I prefer using Mesa so that it's a lot easier for me to maintain the code on both platforms. On Windows I managed to compile three DLLs from the Mesa3D source, but I don't know what to do with them. On Mac OS X I am completely clueless. I would highly appreciate your help. Thank you once again very much!
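    For purely offline software rendering, Mesa's off-screen interface (OSMesa) sidesteps the window system on both platforms. A minimal sketch, assuming the OSMesa library built from the Mesa source is linked and <GL/osmesa.h> is on the include path:

    ```cpp
    #include <GL/osmesa.h>
    #include <vector>

    int main() {
        const int width = 640, height = 480;
        std::vector<unsigned char> buffer(width * height * 4);   // RGBA8 target

        OSMesaContext ctx = OSMesaCreateContext(OSMESA_RGBA, NULL);
        OSMesaMakeCurrent(ctx, buffer.data(), GL_UNSIGNED_BYTE, width, height);

        // Ordinary OpenGL drawing calls go here; the pixels land in 'buffer'.

        OSMesaDestroyContext(ctx);
        return 0;
    }
    ```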

    Read the article

  • Operating system for visualization app in 6 monitors

    - by Federico
    Hi. I have to plan the development for an application with these major requirements: Show different graphical data and animations in 6 monitors, in fullscreen mode. The hardware to be used is a PC with 3 NVIDIA GeForce 9800 GX2 cards. I have some expertise working with OpenGL, but never with more than one monitor. I have the (some limited) freedom to choose an operating system for the application. My options are: Windows XP, Windows Vista, Windows 7, Ubuntu 8.04/10.04. I would like to know, if you have some expertise or knowledge in the multi-monitor application development field, what is the recommended operating system for this kind of application? And, do I need any software other than the operating system and the NVIDIA drivers to be able to use the 6 monitors in fullscreen, showing different things in each one of them? Any comment/answer will be really appreciated. Thanks in advance! Federico

    Read the article

  • How can I use GLUT with CUDA on MACOSX?

    - by omegatai
    Hi, I'm having problems compiling a CUDA program that uses GLUT on Mac OS X. Here is the command line I use to compile the source: nvcc main.c -o main -Xlinker "-L/System/Library/Frameworks/OpenGL.framework/Libraries -lGL -lGLU" "-L/System/Library/Frameworks/GLUT.framework" And here are the errors I get: Undefined symbols: "_glutInitWindowSize", referenced from: _main in tmpxft_00001612_00000000-1_main.o "_glutInitWindowPosition", referenced from: _main in tmpxft_00001612_00000000-1_main.o "_glutDisplayFunc", referenced from: _main in tmpxft_00001612_00000000-1_main.o "_glutInitDisplayMode", referenced from: _main in tmpxft_00001612_00000000-1_main.o "_glutCreateWindow", referenced from: _main in tmpxft_00001612_00000000-1_main.o "_glutMainLoop", referenced from: _main in tmpxft_00001612_00000000-1_main.o "_glutInit", referenced from: _main in tmpxft_00001612_00000000-1_main.o ld: symbol(s) not found collect2: ld returned 1 exit status I am aware that I haven't specified any lib for GLUT but I just can't find it! Does anybody know where it is? By the way, there doesn't seem to be a way to use the GLUT.framework when compiling with nvcc. Thanks a lot, omegatai

    Read the article

  • Intersection of line with cube and knowing the point of intersection.

    - by Raj
    Hello everyone, a description: 1. The lines originate from the origin (0,0,0). 2. The lines are at some random angle to the normal of the top face of the cube. 3. If a line intersects the cube, calculate the intersection point. 4. Mainly I want to know how much distance the line traveled inside the cube. I don't know exactly which approach I should take; I would be pleased and thankful if someone could guide me in the right direction, whether to use OpenGL, DirectX or some other library, for C#. Some example or sample would be appreciated.
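    Since the lines all start at the origin, this reduces to a standard ray-vs-axis-aligned-box ("slab") test; no OpenGL or DirectX is needed for the math. A sketch in C++ (the arithmetic carries over to C# directly; all names are illustrative):

    ```cpp
    #include <algorithm>
    #include <cmath>
    #include <limits>

    struct Vec3 { float x, y, z; };

    // Returns the length of the segment of the ray (t * dir, t >= 0) that lies
    // inside the box, or 0 if the ray misses it.
    float distanceInsideBox(Vec3 dir, Vec3 boxMin, Vec3 boxMax) {
        float tMin = 0.0f;
        float tMax = std::numeric_limits<float>::max();
        const float d[3]  = { dir.x, dir.y, dir.z };
        const float lo[3] = { boxMin.x, boxMin.y, boxMin.z };
        const float hi[3] = { boxMax.x, boxMax.y, boxMax.z };

        for (int i = 0; i < 3; ++i) {
            if (std::fabs(d[i]) < 1e-8f) {                       // parallel to this slab
                if (lo[i] > 0.0f || hi[i] < 0.0f) return 0.0f;   // origin outside the slab
            } else {
                float t1 = lo[i] / d[i], t2 = hi[i] / d[i];
                if (t1 > t2) std::swap(t1, t2);
                tMin = std::max(tMin, t1);
                tMax = std::min(tMax, t2);
                if (tMin > tMax) return 0.0f;                    // slabs don't overlap: miss
            }
        }
        // Entry point is tMin * dir, exit point is tMax * dir.
        const float len = std::sqrt(d[0]*d[0] + d[1]*d[1] + d[2]*d[2]);
        return (tMax - tMin) * len;
    }
    ```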

    Read the article

  • cheapest way to draw a fullscreen quad

    - by Soubok
    I'm wondering if there is a faster way to draw a full-screen quad in OpenGL: NewList(); PushMatrix(); LoadIdentity(); MatrixMode(PROJECTION); PushMatrix(); LoadIdentity(); Begin(QUADS); Vertex(-1,-1,0); Vertex(1,-1,0); Vertex(1,1,0); Vertex(-1,1,0); End(); PopMatrix(); MatrixMode(MODELVIEW); PopMatrix(); EndList();
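    One commonly used variation, for what it's worth: keep the identity projection/modelview setup but draw a single oversized triangle instead of a quad, which covers the viewport with one less vertex and no diagonal seam. A sketch (not necessarily faster than the display list above, just an alternative; the function name is illustrative):

    ```cpp
    // Draw one big clip-space triangle that covers the whole viewport.
    void drawFullScreenTriangle() {
        glMatrixMode(GL_PROJECTION);
        glPushMatrix();
        glLoadIdentity();
        glMatrixMode(GL_MODELVIEW);
        glPushMatrix();
        glLoadIdentity();

        glBegin(GL_TRIANGLES);
        glVertex2f(-1.0f, -1.0f);
        glVertex2f( 3.0f, -1.0f);   // well past the right edge, clipped away
        glVertex2f(-1.0f,  3.0f);   // well past the top edge, clipped away
        glEnd();

        glPopMatrix();              // restore the modelview matrix
        glMatrixMode(GL_PROJECTION);
        glPopMatrix();
        glMatrixMode(GL_MODELVIEW);
    }
    ```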

    Read the article

  • How to make a 3D UI for an application ?

    - by wacky_coder
    I'm building an application and I'd like its user interface to be 3D, most probably a cylinder. The user would see the cylinder (laid horizontally) and the cylinder's curved surface would hold the buttons and any other controls that need to be placed. The cylinder needs to rotate, and later I'd like to add some other effects to the cylinder too. Someone told me that such a UI can be modelled using Maya or Blender and exported to OpenGL, and then I could use C/C++ (with Qt) to carry out the actions. How can this be done? Is there any other way to build the UI and do all the other things that I need? I really need some help because I have the UI in mind, but no idea how to implement it. Thanks

    Read the article

  • Open GL ES 2.0 co-ordinate systems

    - by Chris
    Hi, I want to use OpenGL ES 2.0 for a new game, but I have two questions. Q: The first is how do I set up perspective views in OpenGL ES 2.0 - do I need to include OpenGL ES 1.0 and use glOrtho, or is there a new way? Q: I want to use the 4th quadrant of a Cartesian co-ordinate system for my game and not use -0.5 to +0.5 for values on screen; once the first question is answered, how can I achieve this? Other resources: http://iphonedevelopment.blogspot.com/2009/04/opengl-es-from-ground-up-part-3.html Thanks Chris
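    In ES 2.0 there is no glOrtho or matrix stack; you build the projection matrix yourself (or with a math library) and pass it to the vertex shader as a uniform, which also answers the second question, since the ortho bounds can be whatever coordinate range you like. A sketch, where the uniform name u_projection and the 480x320 screen size are illustrative assumptions:

    ```cpp
    // Column-major orthographic projection, same convention as glOrtho;
    // pass it with transpose = GL_FALSE.
    void ortho(float left, float right, float bottom, float top,
               float nearZ, float farZ, float m[16]) {
        for (int i = 0; i < 16; ++i) m[i] = 0.0f;
        m[0]  =  2.0f / (right - left);
        m[5]  =  2.0f / (top - bottom);
        m[10] = -2.0f / (farZ - nearZ);
        m[12] = -(right + left) / (right - left);
        m[13] = -(top + bottom) / (top - bottom);
        m[14] = -(farZ + nearZ) / (farZ - nearZ);
        m[15] =  1.0f;
    }

    void setFourthQuadrantProjection(GLuint program) {
        float proj[16];
        // x runs 0..480 to the right, y runs 0..-320 downwards: the 4th quadrant.
        ortho(0.0f, 480.0f, -320.0f, 0.0f, -1.0f, 1.0f, proj);
        glUniformMatrix4fv(glGetUniformLocation(program, "u_projection"), 1, GL_FALSE, proj);
    }
    ```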

    Read the article

  • iPhone OpenGLES textures - colour banding

    - by chicknstu
    I've got a problem with OpenGL on iPhone which I'm sure must have a simple solution! When I load a texture and display it, I get a lot of what I believe is called 'colour banding', whereby the colours, particularly on gradients, seem to get automatically 'optimized'. Just to demonstrate that this isn't anything wrong with my own code, I downloaded the iPhone 'Crash Landing' app and replaced the background image, and as you can see in the screenshot (taken from the simulator), the exact same thing happens. The image on the left is the original PNG, and on the right is how it looks in the game. It's almost as if its palette is being downsized to a 256-colour one. I'm sure this is related to the format I'm saving the image in, although it doesn't just happen with PNGs; it seems to happen no matter what image format I choose. Doing my head in! If you want to recreate this, simply download the Crash Landing app and replace the background. Thanks so much in advance for any help.
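    For what it's worth, banding like this on the iPhone usually means the image is passing through a 16-bit format somewhere: either the texture is uploaded as 565/4444 instead of 8 bits per channel, or the colour render buffer itself is 16-bit. A minimal sketch of the 32-bit texture upload, where 'pixels', 'width' and 'height' are assumed to come from the decoded PNG:

    ```cpp
    // Upload the texture at 8 bits per channel rather than a 16-bit format.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    ```

    If the upload is already 32-bit, the drawable's colour format (RGBA8 vs. RGB565) is the other place worth checking.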

    Read the article

  • Accessing functions in a parent controller

    - by meridimus
    I have made a ViewController in Xcode for an iPhone project I'm working on, but I have a question about nested ViewControllers and the best way to access a parent ViewController's functions. Essentially, at the moment I have a SwitchViewController with MenuViewController (nested) and GameViewController (nested, which renders OpenGL ES). At the moment, I have animated view switching controlled in the SwitchViewController, which works. But I want to call it after a player has selected the level from the MenuViewController and run the appropriate level in GameViewController. Not rocket science, I know. What's the best way to call parent functions?

    Read the article

  • iPhone and Vertex Buffer Objects

    - by dancer
    I've just started playing around with OpenGL ES on the iPhone over the past couple of weeks and I'm looking at refactoring some of my code to use Vertex Buffer Objects (VBOs). Before I do, though, I would like to make sure it'll be worth it. The problem is that, as far as I know, the only reason you create VBOs is to shift a chunk of data onto the graphics card so that it doesn't need to be retrieved from system RAM when it's used. The iPhone, however, does not have any dedicated video RAM that I'm aware of, so I'm struggling to see why I would benefit at all from using VBOs. I have seen talk around the internet with conflicting opinions, and Apple certainly wants devs to use them, so there's probably still a reason to use them, but I just wanted to see if anyone on SO had an opinion to add.
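    Even without dedicated VRAM, a VBO lets the driver keep vertex data in its own storage and skip copying and validating a client-side pointer on every draw call; whether that helps measurably depends on the scene. A minimal creation/draw sketch with the ES 1.1 calls (function names and the positions-only layout are illustrative):

    ```cpp
    GLuint createVbo(const GLfloat *vertices, GLsizeiptr byteSize) {
        GLuint vbo = 0;
        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, byteSize, vertices, GL_STATIC_DRAW);  // data now lives with the driver
        glBindBuffer(GL_ARRAY_BUFFER, 0);
        return vbo;
    }

    void drawVbo(GLuint vbo, GLsizei vertexCount) {
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(3, GL_FLOAT, 0, (const GLvoid *)0);  // offset into the bound buffer
        glDrawArrays(GL_TRIANGLES, 0, vertexCount);
        glDisableClientState(GL_VERTEX_ARRAY);
        glBindBuffer(GL_ARRAY_BUFFER, 0);
    }
    ```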

    Read the article

  • glReadPixels and save to image

    - by Julius Petraška
    I have an app where the user drags and drops an image, and it is redrawn with OpenGL for some of the available processing. Everything works. And when the user wants to save the image it works like this: glReadPixels -> NSBitmapImageRep -> NSData -> write to file. This works too. Almost. With some images it does not work as it should. For example, with certain .png and .jpg files the saved result looks visibly different from the original (the example images are omitted here). So sometimes images save badly. Why is this happening?
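    One thing that makes only certain images misbehave in a glReadPixels -> NSBitmapImageRep pipeline is row padding: glReadPixels pads each row to GL_PACK_ALIGNMENT (4 by default), while the bitmap rep may expect tightly packed rows, so a 3-byte RGB read with a width that is not a multiple of 4 comes out sheared. A sketch of one possible fix, assuming that's the cause here ('width', 'height' and 'pixelBuffer' are illustrative):

    ```cpp
    // Force tightly packed rows before reading back; with a 3-byte RGB read and
    // an image width that is not a multiple of 4, the default alignment of 4
    // inserts padding that the destination bitmap does not expect.
    glPixelStorei(GL_PACK_ALIGNMENT, 1);
    glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, pixelBuffer);
    ```

    The alternative is to keep the default alignment and create the NSBitmapImageRep with a matching (padded) bytesPerRow; glReadPixels also returns rows bottom-up, so a vertical flip is usually needed as well.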

    Read the article

  • Gradients and memory

    - by user146780
    I'm creating a drawing application with OpenGL. I've created an algorithm that generates gradient textures. I then map these to my polygons and this works quite well. What I realized is how much memory this requires. Creating 1000 gradients takes about 800 MB and that's way too much. Is there an alternative to textures, or a way to compress them, or another way to map gradients to polygons that doesn't use up as much memory? Thanks. My polygons are concave, I use GLUTesselator, and they are multicolored and point to point.
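    Since each gradient is only a colour ramp, one memory-friendly alternative is a 256x1 texture per gradient (about 1 KB each, so 1000 of them are on the order of 1 MB rather than 800 MB) stretched across the polygon by its texture coordinates; multi-stop gradients just fill the 256 entries differently. A sketch of a two-colour ramp (names are illustrative):

    ```cpp
    // Build a 256x1 RGBA ramp between two colours and upload it as a texture.
    GLuint makeGradientRamp(const unsigned char start[4], const unsigned char end[4]) {
        unsigned char ramp[256 * 4];
        for (int i = 0; i < 256; ++i) {
            const float t = i / 255.0f;
            for (int c = 0; c < 4; ++c)
                ramp[i * 4 + c] = (unsigned char)((1.0f - t) * start[c] + t * end[c]);
        }
        GLuint tex = 0;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 256, 1, 0, GL_RGBA, GL_UNSIGNED_BYTE, ramp);
        return tex;
    }
    ```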

    Read the article

  • Suggest an Alternative for glTranslate() load on CPU.

    - by Nagaraj
    I have been working on an OpenGL project. It just displays a boat moving along, with some options for changing the view. It's a 2D program. The thing is, I have used many glTranslate calls for moving the boat in the code. It works properly on Windows (Dev-C++), but when executed on Fedora the boat moves very, very slowly. When I checked the CPU load, it was huge. Is there anything I can try to make the boat move faster? Please help :)

    Read the article

  • Enabling depth testing when using CAOpenGLLayer

    - by Andrew
    If one is using a subclass of NSOpenGLView then one enables depth testing by selecting a 16/24/32 bit buffer from the attributes menu in Xcode, and then adding glEnable(GL_DEPTH_TEST); glClear(GL_DEPTH_BUFFER_BIT); to the drawRect method. However, in the application I'm creating I'm rendering OpenGL content via the drawInCGLContext method of a CAOpenGLLayer which is contained within a subclass of NSView. This means that it is no longer possible to create a depth buffer via the inspector. Does anyone know how I can achieve this in such a situation?

    Read the article

  • Suggestions for a C++ IDE?

    - by AedonEtLIRA
    I know this is a shifty question and really isn't easy to answer, but bear with me. For a while now I have been using Eclipse and doing Java programming. Now that I've reached a point where I'm comfortable in Java, I wish to move back into C++ and actually make something more than a single class that prints to the terminal, and work in OpenGL :). So I wonder if anybody has a recommendation of IDEs that resemble or are as fluid as Eclipse? I am aware that Eclipse has a C++ plugin, but it really doesn't feel user-friendly (at least to a pampered Java programmer!). I have (I think I still have it?) a copy of Visual Studio 2005, but want to see if anyone has any better ideas. Thanks ~Aedon

    Read the article

  • "Single NSMutableArray" vs. "Multiple C-arrays" --Which is more Efficient/Practical?

    - by RexOnRoids
    Situation: I have a DAY structure. The DAY structure has three variables or attributes: a Date (NSString*), a Temperature (float), and a Rainfall (float). Problem: I will be iterating through an array of about 5000 DAY structures and graphing a portion of these onto the screen using OpenGL. Question: As far as drawing performance, which is better? I could simply create an NSMutableArray of DAY structures (NSObjects) and iterate on the array on each draw call -- which I think would be hard on the CPU. Or, I could instead manually manage three different C-Arrays -- One for the Date String (2-Dimensional), One for the temperature (1-Dimensional) and One for the Rainfall (1-Dimensional). I could keep track of the current Day by referencing the current index of the iterated C-Arrays.

    Read the article

  • Animate 3D model programmatically-where to start?

    - by amile
    I have been given a task to create a 3D model of a face that can talk like a human, without having any knowledge about 3D modeling. I have no clue where to start. I have searched a lot and found some of these things: OpenGL, WebGL, XNA and other similar tools can be helpful. Please guide me on where to start, a step-by-step approach, and which platform is better, as I have a programming background in Java. Here is an idea of what I need to do: https://docs.google.com/viewer?a=v&pid=gmail&attid=0.2&thid=13f65486a46a1f67&mt=application/pdf&url=https://mail.google.com/mail/?ui%3D2%26ik%3D49f1f393c6%26view%3Datt%26th%3D13f65486a46a1f67%26attid%3D0.2%26disp%3Dsafe%26realattid%3Df_hi6ylzbv2%26zw&sig=AHIEtbQc8KQNHdprmEnL4UXyD3ox8vlKKQ

    Read the article

  • Trying to draw textured triangles on device fails, but the emulator works. Why?

    - by Dinedal
    I have a series of OpenGL-ES calls that properly render a triangle and texture it with alpha blending on the emulator (2.0.1). When I fire up the same code on an actual device (Droid 2.0.1), all I get are white squares. This suggests to me that the textures aren't loading, but I can't figure out why they aren't loading. All of my textures are 32-bit PNGs with alpha channels, under res/raw so they aren't optimized per the sdk docs. Here's how I am loading my textures: private void loadGLTexture(GL10 gl, Context context, int reasource_id, int texture_id) { //Get the texture from the Android resource directory Bitmap bitmap = BitmapFactory.decodeResource(context.getResources(), reasource_id, sBitmapOptions); //Generate one texture pointer... gl.glGenTextures(1, textures, texture_id); //...and bind it to our array gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[texture_id]); //Create Nearest Filtered Texture gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_NEAREST); gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR); //Different possible texture parameters, e.g. GL10.GL_CLAMP_TO_EDGE gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_REPEAT); gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_REPEAT); //Use the Android GLUtils to specify a two-dimensional texture image from our bitmap GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0); //Clean up bitmap.recycle(); } Here's how I am rendering the texture: //Clear gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT); //Enable vertex buffer gl.glEnableClientState(GL10.GL_VERTEX_ARRAY); gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY); gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuffer); gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer); //Push transformation matrix gl.glPushMatrix(); //Transformation matrices gl.glTranslatef(x, y, 0.0f); gl.glScalef(scalefactor, scalefactor, 0.0f); gl.glColor4f(1.0f,1.0f,1.0f,1.0f); //Bind the texture gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[textureid]); //Draw the vertices as triangles gl.glDrawElements(GL10.GL_TRIANGLES, indices.length, GL10.GL_UNSIGNED_BYTE, indexBuffer); //Pop the matrix back to where we left it gl.glPopMatrix(); //Disable the client state before leaving gl.glDisableClientState(GL10.GL_VERTEX_ARRAY); gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY); And here are the options I have enabled: gl.glShadeModel(GL10.GL_SMOOTH); //Enable Smooth Shading gl.glEnable(GL10.GL_DEPTH_TEST); //Enables Depth Testing gl.glDepthFunc(GL10.GL_LEQUAL); //The Type Of Depth Testing To Do gl.glEnable(GL10.GL_TEXTURE_2D); gl.glEnable(GL10.GL_BLEND); gl.glBlendFunc(GL10.GL_SRC_ALPHA,GL10.GL_ONE_MINUS_SRC_ALPHA); Edit: I just tried supplying a BitmapOptions to the BitmapFactory.decodeResource() call, but this doesn't seem to fix the issue, despite manually setting the same preferredconfig, density, and targetdensity. Edit2: As requested, here is a screenshot of the emulator working. The underlaying triangles are shown with a circle texture rendered onto it, the transparency is working because you can see the black background. 
Here is a shot of what the droid does with the exact same code on it: Edit3: Here are my BitmapOptions, updated the call above with how I am now calling the BitmapFactory, still the same results as below: sBitmapOptions.inPreferredConfig = Bitmap.Config.RGB_565; sBitmapOptions.inDensity = 160; sBitmapOptions.inTargetDensity = 160; sBitmapOptions.inScreenDensity = 160; sBitmapOptions.inDither = false; sBitmapOptions.inSampleSize = 1; sBitmapOptions.inScaled = false; Here are my vertices, texture coords, and indices: /** The initial vertex definition */ private static final float vertices[] = { -1.0f, -1.0f, 0.0f, 1.0f, -1.0f, 0.0f, -1.0f, 1.0f, 0.0f, 1.0f, 1.0f, 0.0f }; /** The initial texture coordinates (u, v) */ private static final float texture[] = { //Mapping coordinates for the vertices 0.0f, 1.0f, 1.0f, 1.0f, 0.0f, 0.0f, 1.0f, 0.0f }; /** The initial indices definition */ private static final byte indices[] = { //Faces definition 0,1,3, 0,3,2 }; Is there anyway to dump the contents of the texture once it's been loaded into OpenGL ES? Maybe I can compare the emulator's loaded texture with the actual device's loaded texture? I did try with a different texture (the default android icon) and again, it works fine for the emulator but fails to render on the actual phone. Edit4: Tried switching around when I do texture loading. No luck. Tried using a constant offset of 0 to glGenTextures, no change. Is there something that I'm using that the emulator supports that the actual phone does not? Edit5: Per Ryan below, I resized my texture from 200x200 to 256x256, and the issue was NOT resolved. Edit: As requested, added the calls to glVertexPointer and glTexCoordPointer above. Also, here is the initialization of vertexBuffer, textureBuffer, and indexBuffer: ByteBuffer byteBuf = ByteBuffer.allocateDirect(vertices.length * 4); byteBuf.order(ByteOrder.nativeOrder()); vertexBuffer = byteBuf.asFloatBuffer(); vertexBuffer.put(vertices); vertexBuffer.position(0); byteBuf = ByteBuffer.allocateDirect(texture.length * 4); byteBuf.order(ByteOrder.nativeOrder()); textureBuffer = byteBuf.asFloatBuffer(); textureBuffer.put(texture); textureBuffer.position(0); indexBuffer = ByteBuffer.allocateDirect(indices.length); indexBuffer.put(indices); indexBuffer.position(0); loadGLTextures(gl, this.context);

    Read the article

  • Android Graphics Memory Limits

    - by Gordon
    I am creating an Android game using OpenGL and a cocos2d port (http://code.google.com/p/cocos2d-android-1). I am targeting a wide range of devices and want to ensure that it performs well. I only test on a Nexus One and am hoping to get some input from people with experience on slower devices. Currently the game uses two 1024x1024 textures as well as two 256x256 textures. Is this within the limits of most devices? Does anyone have a rule of thumb or experience with graphics memory limits in these cases? If graphics memory is exceeded, does it page to normal memory?

    Read the article

  • Getting object coordinates from camera

    - by user566757
    I've implemented a camera in Java using a position vector and three direction vectors so I can use gluLookAt(); moving around in `ghost mode' works fine enough, but I want to add collision detection. I can't seem to figure out how to transform my position vector to coordinates in which OpenGL draws my objects. A rough sketch of my drawing loop is this: glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glLoadIdentity(); camera.setView(); drawer.drawTheScene(); I'm at a loss of how to proceed; looking at the ModelView matrix between calls and my position vector, I haven't found any kind of correlation.

    Read the article

  • Objective-C: how to allocate array of GLuint

    - by sashaeve
    I have an array of GLuint with a fixed size: GLuint textures[10]; Now I need to set the size of the array dynamically. I wrote something like this: *.h: GLuint *textures; *.m: textures = malloc(N * sizeof(GLuint)); where N is the needed size. Then it is used like this: glGenTextures(N, &textures[0]); // load texture from image -(GLuint)getTexture:(int)index{ return textures[index]; } I used the answer from here, but the program crashes at runtime. How can I fix this? The program is written in Objective-C and uses OpenGL ES.
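    The allocation itself looks fine; a crash at runtime here usually points at 'textures' being used before the malloc has run, at N not matching the indices actually accessed, or at a missing free. A minimal sketch of the pattern, with hypothetical ordering and plain C calls:

    ```cpp
    GLuint *textures = (GLuint *)malloc(N * sizeof(GLuint));
    if (textures != NULL) {
        glGenTextures(N, textures);        // equivalent to &textures[0]
        // ...glBindTexture/glTexImage2D for each of the N entries, then later:
        glDeleteTextures(N, textures);
        free(textures);
        textures = NULL;
    }
    ```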

    Read the article

  • lwjgl isKeyDown canceling out other keys

    - by AKrush95
    While trying to create a simple game where a square is manipulated via the keyboard keys, I have come across a small, rather irritating problem. I would like it to work so that when the opposite directional key is pressed, the character will stop; the character may move the other two directions while stopped in this situation. This works perfectly with LEFT and RIGHT held down; the player may move UP or DOWN. If UP and DOWN are held down, however, the player will not move, nor will Java recognize that the LEFT or RIGHT keys were pressed. import java.util.ArrayList; import java.util.Random; import org.lwjgl.*; import org.lwjgl.input.Keyboard; import org.lwjgl.opengl.*; import static org.lwjgl.opengl.GL11.*; public class Main { private Man p; private ArrayList<Integer> keysDown, keysUp; public Main() { try { Display.setDisplayMode(new DisplayMode(640, 480)); Display.setTitle("LWJGLHelloWorld"); Display.create(); } catch (LWJGLException e) { e.printStackTrace(); } p = new Man(0, 0); keysDown = new ArrayList<>(); keysUp = new ArrayList<>(); glMatrixMode(GL_PROJECTION); glLoadIdentity(); glOrtho(0, 640, 480, 0, 1, -1); glMatrixMode(GL_MODELVIEW); while (!Display.isCloseRequested()) { glClear(GL_COLOR_BUFFER_BIT); checkKeys(); p.draw(); Display.update(); Display.sync(60); } Display.destroy(); } public void checkKeys() { ArrayList<Integer> keys = new ArrayList<>(); keys.add(Keyboard.KEY_A); keys.add(Keyboard.KEY_D); keys.add(Keyboard.KEY_W); keys.add(Keyboard.KEY_S); for (int key : keys) { if (Keyboard.isKeyDown(key)) keysDown.add(key); else keysUp.add(key); } keysDown.removeAll(keysUp); keysUp = new ArrayList<>(); int speed = 4; int dx = 0; int dy = 0; if (keysDown.contains(keys.get(2))) { System.out.println("keyUP"); dy -= speed; } if (keysDown.contains(keys.get(3))) { System.out.println("keyDOWN"); dy += speed; } if (keysDown.contains(keys.get(0))) { System.out.println("keyLEFT"); dx -= speed; } if (keysDown.contains(keys.get(1))) { System.out.println("keyRIGHT"); dx += speed; } //if (keysDown.contains(keys.get(0)) && keysDown.contains(keys.get(1))) dx = 0; //if (keysDown.contains(keys.get(2)) && keysDown.contains(keys.get(3))) dy = 0; p.update(dx, dy); } public static void main(String[] args) { new Main(); } class Man { public int x, y, w, h; public float cR, cG, cB; public Man(int x, int y) { this.x = x; this.y = y; w = 50; h = 50; Random rand = new Random(); cR = rand.nextFloat(); cG = rand.nextFloat(); cB = rand.nextFloat(); } public void draw() { glColor3f(cR, cG, cB); glRecti(x, y, x+w, y+h); } public void update(int dx, int dy) { x += dx; y += dy; } } } That is the code that I am working with. In addition, I am unsure how to compile an executable jar that is using the lwjgl library in addition to slick-util.

    Read the article

  • C - add elements to struct by define

    - by CodeStepper
    I have a problem. I'm trying to add struct members via a previously defined macro. This is sample code (OpenGL + WinAPI): #define ENGINE_STRUCT \ HGLRC RenderingContext;\ HDC DeviceContext; And then: typedef struct SWINDOW { ENGINE_STRUCT HWND Handle; HINSTANCE Instance; CHAR* ClassName; BOOL Fullscreen; BOOL Active; MSG Message; } WINDOW; Is this possible? Thanks in advance.
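    Yes, this works: the preprocessor pastes the member declarations in before the compiler ever sees the struct, so after expansion it is equivalent to writing the fields by hand:

    ```cpp
    // What the compiler sees after preprocessing:
    typedef struct SWINDOW {
        HGLRC RenderingContext;   // pasted in from ENGINE_STRUCT
        HDC DeviceContext;        // pasted in from ENGINE_STRUCT
        HWND Handle;
        HINSTANCE Instance;
        CHAR* ClassName;
        BOOL Fullscreen;
        BOOL Active;
        MSG Message;
    } WINDOW;
    ```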

    Read the article
