Search Results

Search found 3627 results on 146 pages for 'opengl es 2 0'.

Page 73/146 | < Previous Page | 69 70 71 72 73 74 75 76 77 78 79 80  | Next Page >

  • JOGL: cannot run java on compiled class file

    - by John Goche
    I am running Ubuntu Linux 11.10. I have followed the instructions on the site http://jogamp.org/jogl/www/, downloaded everything with git to $HOME/jogamp, and built everything with ant (although I would have preferred the files installed somewhere under /usr/local). I am trying to run a simple JOGL application found on the Wikipedia site http://en.wikipedia.org/wiki/Java_OpenGL. In order to compile the code I had to add:

        export CLASSPATH=.:$HOME/jogamp/jogl/build/jogl/classes

    When I run java I get the following:

        $ java JOGLQuad
        Exception in thread "main" java.lang.NoClassDefFoundError: javax/media/nativewindow/WindowClosingProtocol
            at java.lang.ClassLoader.defineClass1(Native Method)
            at java.lang.ClassLoader.defineClassCond(ClassLoader.java:631)
            at java.lang.ClassLoader.defineClass(ClassLoader.java:615)
            at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
            at java.net.URLClassLoader.defineClass(URLClassLoader.java:283)
            at java.net.URLClassLoader.access$000(URLClassLoader.java:58)
            at java.net.URLClassLoader$1.run(URLClassLoader.java:197)
            at java.security.AccessController.doPrivileged(Native Method)
            at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
            at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
        Caused by: java.lang.ClassNotFoundException: javax.media.nativewindow.WindowClosingProtocol
            at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
            at java.security.AccessController.doPrivileged(Native Method)
            at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
            at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
            ... 12 more
        Could not find the main class: JOGLQuad. Program will exit.

    How do I fix this and where can I find more JOGL code samples that I can compile and run? Thanks, John Goche
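
    The NoClassDefFoundError suggests that only the jogl classes are on the classpath; the nativewindow and gluegen-rt classes built alongside them are needed as well, plus the native libraries at run time. A hedged sketch, assuming the default jogamp source-build layout (exact directory names may differ between builds):

        export CLASSPATH=.:$HOME/jogamp/jogl/build/jogl/classes:$HOME/jogamp/jogl/build/nativewindow/classes:$HOME/jogamp/gluegen/build/classes
        java -Djava.library.path=$HOME/jogamp/jogl/build/lib:$HOME/jogamp/gluegen/build/obj JOGLQuad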

    Read the article

  • Texture coordinates for a polygon and a square texture

    - by user146780
    Basically I have a texture. I also have, let's say, an octagon (or any polygon). I find that octagon's bounding box. Let's say my texture is the size of the octagon's bounding box. How could I figure out the texture coordinates so that the texture maps to it? To clarify: if you had a square of tin foil and cut the octagon out of it, you'd be left with a tin-foil-textured polygon. I'm just not sure how to figure it out for an arbitrary polygon. Thanks
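
    If the texture exactly covers the bounding box, each vertex's texture coordinate is just its position normalized into that box. A minimal C++ sketch (the Vec2 struct is hypothetical):

        struct Vec2 { float x, y; };

        // Map each polygon vertex into [0,1] UV space relative to its bounding box.
        void computeUVs(const Vec2* verts, Vec2* uvs, int count,
                        float minX, float minY, float maxX, float maxY)
        {
            for (int i = 0; i < count; ++i) {
                uvs[i].x = (verts[i].x - minX) / (maxX - minX);
                uvs[i].y = (verts[i].y - minY) / (maxY - minY);
            }
        }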

    Read the article

  • Adding a decal using multitexturing on an iPhone

    - by Axis
    I'm trying to overlay one image on top of another on a simple quad. I set my bottom image as texture unit 0, and then my top image (which has a variable alpha) as texture unit 1. The second unit has mode GL_DECAL, which means the bottom texture should show up where the alpha is 0, and the top texture should show where the alpha is 1. But only the top texture shows up and the bottom one doesn't appear at all; it's just white where the bottom texture should show through. glGetError() doesn't report any problems. Any help is appreciated. Thanks!

        glVertexPointer(3, GL_FLOAT, 0, boxVertices);
        glEnableClientState(GL_VERTEX_ARRAY);

        glClientActiveTexture(GL_TEXTURE0);
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);
        glTexCoordPointer(2, GL_FLOAT, 0, boxTextureCoords);
        glClientActiveTexture(GL_TEXTURE1);
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);
        glTexCoordPointer(2, GL_FLOAT, 0, boxTextureCoords);

        glClientActiveTexture(GL_TEXTURE0);
        glEnable(GL_TEXTURE_2D);
        glClientActiveTexture(GL_TEXTURE1);
        glEnable(GL_TEXTURE_2D);

        glClientActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, one.texture);
        glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
        glClientActiveTexture(GL_TEXTURE1);
        glBindTexture(GL_TEXTURE_2D, two.texture);
        glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL);

        glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
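
    A likely culprit, hedged: glClientActiveTexture only selects the unit for client-side array state (glTexCoordPointer and friends); server-side state such as glEnable, glBindTexture, and glTexEnvi is selected with glActiveTexture. A sketch of the binding section under that assumption:

        /* glActiveTexture selects the unit for texture bindings and environment
           state; glClientActiveTexture only affects the texcoord arrays. */
        glActiveTexture(GL_TEXTURE0);
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, one.texture);
        glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);

        glActiveTexture(GL_TEXTURE1);
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, two.texture);
        glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL);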

    Read the article

  • How do I make lines scale when using glOrtho?

    - by Mason Wheeler
    I'm using glOrtho to set up a 2D view that I can render textures onto. It works really well, up until I try to zoom in on the image. If I pass half the width and half the height of the viewport to glOrtho, I end up with all my textures displayed twice as big as normal, which is exactly what I expect. But then I try to draw a box around part of the image and it all falls apart. I call glBegin(GL_LINE_LOOP), place the four vertices, and call glEnd, and I expect to see the same thing I would see if I drew it at normal zoom level, doubled. Instead, I get lines that are all the right length, but they all come out one pixel wide instead of two, and it looks really bad. What am I missing?
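
    The projection scales geometry, but line width is specified in pixels and ignores the projection entirely. A hedged sketch of one fix, scaling the width by the zoom factor yourself (zoomFactor is a hypothetical variable holding the current zoom):

        /* Line width is in pixels, not world units, so it must track the zoom. */
        glLineWidth(1.0f * zoomFactor);   /* e.g. 2.0f when zoomed in 2x */
        glBegin(GL_LINE_LOOP);
        /* ... the four corner vertices ... */
        glEnd();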

    Read the article

  • Scaling mouse coordinates for glScalef?

    - by user146780
    Right now I have a GL context that is set up with a top-left coordinate system. I made my mouse so that its 0,0 is the top left of the OpenGL frame. If I apply glScalef(2,2,2), for example, to my scene, how would I have to change my mouse coordinates so that drawing still maps out correctly? Thanks
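
    One hedged approach: apply the inverse of the scene scale to the mouse coordinates before using them for drawing (scale is whatever factor was passed to glScalef):

        /* With glScalef(s, s, s) applied to the scene, window-space mouse
           coordinates divide by s to land in scene space. */
        float sceneX = mouseX / scale;
        float sceneY = mouseY / scale;

    For arbitrary combinations of scale, rotation, and translation, gluUnProject generalizes this by inverting the full modelview-projection transform.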

    Read the article

  • Move an object towards touch

    - by Viral
    Hi friends, I want to move an object over the screen wherever the touch moves, i.e. if I touch and drag my finger, then the object should move with it. If anyone knows how to do this, please let me know. Thank you in advance, Viral.
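
    In broad strokes (platform touch APIs differ, so this is a hedged, platform-agnostic sketch): record the touch position in the move callback and redraw the object there. Sprite and requestRedraw are hypothetical stand-ins for your own types:

        // Hypothetical drag handler, called with the current touch position.
        void onTouchMoved(float touchX, float touchY, Sprite& object) {
            object.x = touchX;   // keep the object under the finger
            object.y = touchY;
            requestRedraw();     // hypothetical: ask the view to repaint
        }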

    Read the article

  • Calculate lookat vector from position and Euler angles

    - by Jaap
    I've implemented an FPS-style camera, with the camera consisting of a position vector and Euler angles pitch and yaw (x and y rotations). After setting up the projection matrix, I then translate to camera coordinates by rotating, then translating to the inverse of the camera position:

        // Load projection matrix
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();

        // Set perspective
        gluPerspective(m_fFOV, m_fWidth/m_fHeight, m_fNear, m_fFar);

        // Load modelview matrix
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();

        // Position camera
        glRotatef(m_fRotateX, 1.0, 0.0, 0.0);
        glRotatef(m_fRotateY, 0.0, 1.0, 0.0);
        glTranslatef(-m_vPosition.x, -m_vPosition.y, -m_vPosition.z);

    Now I've got a few viewports set up, each with its own camera, and from every camera I render the position of the other cameras (as a simple box). I'd like to also draw the view vector for these cameras, except I haven't a clue how to calculate the lookat vector from the position and Euler angles. I've tried to multiply the original camera vector (0, 0, -1) by a matrix representing the camera rotations, then adding the camera position to the transformed vector, but that doesn't work at all (most probably because I'm way off base):

        vector v1(0, 0, -1);
        matrix m1 = matrix::IDENTITY;
        m1.rotate(m_fRotateX, 0, 0);
        m1.rotate(0, m_fRotateY, 0);
        vector v2 = v1 * m1;
        v2 = v2 + m_vPosition; // add camera position vector

        glBegin(GL_LINES);
        glVertex3fv(m_vPosition);
        glVertex3fv(v2);
        glEnd();

    What I'd like is to draw a line segment from the camera towards the lookat direction. I've looked all over the place for examples of this, but can't seem to find anything. Thanks a lot!
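
    For a view built as Rx(pitch), then Ry(yaw), then the translation, the world-space forward vector is the transpose of the combined rotation applied to (0, 0, -1), which reduces to the standard FPS formula. A hedged sketch reusing the question's vector type, converting the stored degrees to radians:

        const float DEG2RAD = 3.14159265f / 180.0f;
        float pitch = m_fRotateX * DEG2RAD;
        float yaw   = m_fRotateY * DEG2RAD;

        // Forward vector = transpose(Rx * Ry) applied to (0, 0, -1):
        vector dir(sinf(yaw) * cosf(pitch),
                   -sinf(pitch),
                   -cosf(yaw) * cosf(pitch));
        vector v2 = m_vPosition + dir;  // end point of a unit-length view segment

        glBegin(GL_LINES);
        glVertex3fv(m_vPosition);
        glVertex3fv(v2);
        glEnd();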

    Read the article

  • When should a uniform be used in shader programming?

    - by Phineas
    In a vertex shader, I calculate a vector using only uniforms. Therefore, the outcome of this calculation is the same for all instantiations of the vertex shader. Should I just do this calculation on the CPU and upload it as a uniform? What if I have ten such calculations? If I upload a lot of uniforms in this way, does CPU-GPU communication ever get so slow that recomputing such values in the vertex shader is actually faster?
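
    For scale, a hedged host-side sketch of the precompute-and-upload approach: a handful of glUniform calls per frame is normally far cheaper than redoing the same math for every vertex, though only profiling settles it. The helper and names here are hypothetical:

        /* Compute once per frame on the CPU... */
        float lightDirEye[3];
        computeLightDirInEyeSpace(lightDirEye);   /* hypothetical helper */

        /* ...then upload as a single uniform instead of re-deriving it
           per vertex in the shader. */
        GLint loc = glGetUniformLocation(shaderProgram, "u_lightDirEye");
        glUniform3fv(loc, 1, lightDirEye);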

    Read the article

  • Flood fill algorithm

    - by user335593
    I want to implement the flood fill algorithm, so that when I get the x and y coordinates of a point, it should start flooding from that point and fill until it finds a boundary. But it is not filling the entire region (say, a pentagon). This is the code I am using:

        void setpixel(struct fill fillcolor, int x, int y)
        {
            glColor3f(fillcolor.r, fillcolor.g, fillcolor.b);
            glBegin(GL_POINTS);
            glVertex2i(x, y);
            glEnd();
            glFlush();
        }

        struct fill getpixcol(int x, int y)
        {
            struct fill gotpixel;
            glReadPixels(x, y, 1, 1, GL_RGB, GL_UNSIGNED_BYTE, pick_col);
            gotpixel.r = (float) pick_col[0] / 255.0;
            gotpixel.g = (float) pick_col[1] / 255.0;
            gotpixel.b = (float) pick_col[2] / 255.0;
            return (gotpixel);
        }

        void floodFill(int x, int y, struct fill fillcolor, struct fill boundarycolor)
        {
            struct fill tmp;
            // if ((x < 0) || (x >= 500)) return;
            // if ((y < 0) || (y >= 500)) return;
            tmp = getpixcol(x, y);
            while (tmp.r != boundarycolor.r && tmp.g != boundarycolor.g && tmp.b != boundarycolor.b) {
                setpixel(fillcolor, x, y);
                setpixel(fillcolor, x+1, y);
                setpixel(fillcolor, x, y+1);
                setpixel(fillcolor, x, y-1);
                setpixel(fillcolor, x-1, y);
                floodFill(x-1, y+1, fillcolor, boundarycolor);
                floodFill(x-1, y, fillcolor, boundarycolor);
                floodFill(x-1, y-1, fillcolor, boundarycolor);
                floodFill(x, y+1, fillcolor, boundarycolor);
                floodFill(x, y-1, fillcolor, boundarycolor);
                floodFill(x+1, y+1, fillcolor, boundarycolor);
                floodFill(x+1, y, fillcolor, boundarycolor);
                floodFill(x+1, y-1, fillcolor, boundarycolor);
            }
        }
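
    For comparison, a hedged sketch of a more conventional 4-way recursive fill: the boundary test becomes an if (the recursion supplies the looping), each call sets only its own pixel, and already-filled pixels are skipped so the recursion terminates. samecolor is a hypothetical helper that compares channels with a small epsilon, since read-back values rarely match floats exactly:

        void floodFill(int x, int y, struct fill fillcolor, struct fill boundarycolor)
        {
            if ((x < 0) || (x >= 500) || (y < 0) || (y >= 500)) return;  /* clip */

            struct fill tmp = getpixcol(x, y);
            if (samecolor(tmp, boundarycolor)) return;  /* hit the outline */
            if (samecolor(tmp, fillcolor)) return;      /* already filled */

            setpixel(fillcolor, x, y);

            floodFill(x + 1, y, fillcolor, boundarycolor);
            floodFill(x - 1, y, fillcolor, boundarycolor);
            floodFill(x, y + 1, fillcolor, boundarycolor);
            floodFill(x, y - 1, fillcolor, boundarycolor);
        }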

    Read the article

  • Which parts of the graphics pipeline are done using the CPU & GPU?

    - by afriza
    Which parts of the graphics pipeline are done on the CPU, and which on the GPU? Reading the Wikipedia article on the graphics pipeline, maybe my question does not precisely represent what I am asking. Referring to this question: which "steps" are done on the CPU and which on the GPU? Edit: My question is more about which of the logical, high-level steps needed to display terrain plus 3D models run on the CPU versus the GPU, rather than about specific functions.

    Read the article

  • How do faces in .obj work?

    - by Adl
    Hi. When parsing an .obj file with vertices and vertex faces, it is easy to pass the vertices to the shader and then use glDrawElements with the vertex faces. When parsing an .obj file with vertices and texture coordinates, another type of face occurs: texture-coordinate faces. When displaying textures (apart from loading images, binding them, and passing texture coordinates to the shader), how do I use the texture-coordinate faces? They differ from the vertex faces, and I suppose the texture-coordinate faces have a purpose when displaying textures? Regards, Niclas
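
    In an .obj file each face corner is written v/vt, and the position index and texture-coordinate index are independent, so the two index lists usually cannot drive glDrawElements directly. The common fix is to de-index: create one combined vertex per unique v/vt pair. A hedged C++ sketch of that idea:

        #include <map>
        #include <utility>
        #include <vector>

        // De-index an .obj corner "v/vt": each unique (v, vt) pair becomes one
        // combined vertex (x, y, z, u, v); the returned index feeds glDrawElements.
        struct ObjMesh {
            std::vector<float> positions;  // 3 floats per "v" line (1-based in .obj)
            std::vector<float> texcoords;  // 2 floats per "vt" line
            std::vector<float> vertexData; // interleaved x, y, z, u, v
            std::vector<unsigned> indices;
            std::map<std::pair<int,int>, unsigned> cache;

            unsigned corner(int v, int vt) {
                auto key = std::make_pair(v, vt);
                auto it = cache.find(key);
                if (it != cache.end()) return it->second;
                unsigned idx = (unsigned)(vertexData.size() / 5);
                vertexData.push_back(positions[3 * (v - 1) + 0]);
                vertexData.push_back(positions[3 * (v - 1) + 1]);
                vertexData.push_back(positions[3 * (v - 1) + 2]);
                vertexData.push_back(texcoords[2 * (vt - 1) + 0]);
                vertexData.push_back(texcoords[2 * (vt - 1) + 1]);
                cache[key] = idx;
                return idx;
            }
        };

    Each face line then pushes corner(v, vt) into indices, and vertexData/indices are uploaded as the vertex and element arrays.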

    Read the article

  • Persistence of texture parameters

    - by fen
    I use glBindTexture() to bind a previously created texture. After the glBindTexture() call I use glTexParameteri() to set the MIN and MAG filters. No problem so far. Are the parameters I set using glTexParameteri() bound to the texture itself, or are they lost if I bind another texture? Do I have to set them again?

        glGenTextures(1, &tex1);
        glGenTextures(1, &tex2);

        /* bind tex1 and set params */
        glBindTexture(GL_TEXTURE_RECTANGLE_ARB, tex1);
        glTexImage2D(GL_TEXTURE_RECTANGLE_ARB, ...);
        glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

        /* do something */

        /* bind tex2 and set params */
        glBindTexture(GL_TEXTURE_RECTANGLE_ARB, tex2);
        glTexImage2D(GL_TEXTURE_RECTANGLE_ARB, ...);
        glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

        /* do something */

        /* bind tex1 again */
        glBindTexture(GL_TEXTURE_RECTANGLE_ARB, tex1);
        /* do I have to set the parameters from above again, or are they stored with tex1? */

    Read the article

  • Draw a pixel using OpenGL ES for Android

    - by Shubh
    How can I draw a pixel (in a 2D view) on Android using OpenGL ES? With the Canvas API we simply override draw(Canvas canvas) {...} and call canvas.drawPoint(i, j, paint), but in OpenGL ES I haven't found an equivalent function. Please reply. Thanks in advance.
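
    OpenGL ES has no direct drawPoint equivalent, but GL_POINTS comes close. A hedged sketch using the GL ES 1.x C API (Android's GL10 Java binding mirrors these calls):

        /* Draw a single point at (x, y); units follow your projection setup. */
        GLfloat coords[] = { x, y, 0.0f };
        glPointSize(1.0f);
        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(3, GL_FLOAT, 0, coords);
        glDrawArrays(GL_POINTS, 0, 1);
        glDisableClientState(GL_VERTEX_ARRAY);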

    Read the article

  • glDrawElements allocating memory and not releasing it

    - by Joshua Weinberg
    Using OpenGL ES 1.1 on the iPhone 3G (device, not simulator), I do normal drawing fun. But at points during the run of the application I get giant memory spikes. After a lot of digging with Instruments I have found that it is glDrawElements that is grabbing the memory. The buffer being allocated is 4 MB, which to me means it's loading a texture into RAM; I guess that could be valid, but it's never releasing this buffer, and it is allocating multiple of them. How do I make sure that these buffers GL is creating get destroyed, instead of just hanging around?
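
    One hedged possibility is the client-side array path: when vertex data are not in buffer objects, the driver may copy and cache them per draw call. Moving the data into VBOs (available in ES 1.1) puts the allocations under your control:

        /* Upload once into buffer objects; glDrawElements then reads from
           GL-managed storage instead of the driver copying client arrays. */
        GLuint vbo, ibo;
        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, vertexBytes, vertices, GL_STATIC_DRAW);

        glGenBuffers(1, &ibo);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, indexBytes, indices, GL_STATIC_DRAW);

        /* ... glVertexPointer(3, GL_FLOAT, 0, 0);
           glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, 0); ... */

        /* The memory is then reclaimed explicitly when you are done: */
        glDeleteBuffers(1, &vbo);
        glDeleteBuffers(1, &ibo);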

    Read the article

  • Head Rotation in Opposite Direction with GLM and Oculus Rift SDK

    - by user3434662
    I am 90% of the way to getting orientation to work; I am just trying to resolve one last bit, and I hope someone can point out any easy error I am making but not seeing. My code works, except that when a person looks left the camera actually rotates right, and vice versa: looking right rotates the camera left. Any idea what I am doing wrong here? I retrieve the orientation from the Oculus Rift like so:

        OVR::Quatf OculusRiftOrientation = PredictedPose.Orientation;
        glm::vec3 CurrentEulerAngles;
        glm::quat CurrentOrientation;

        OculusRiftOrientation.GetEulerAngles<OVR::Axis_X, OVR::Axis_Y, OVR::Axis_Z,
                                             OVR::Rotate_CW, OVR::Handed_R>
            (&CurrentEulerAngles.x, &CurrentEulerAngles.y, &CurrentEulerAngles.z);

        CurrentOrientation = glm::quat(CurrentEulerAngles);

    And here is how I calculate the lookat:

        /* DirectionOfWhereCameraIsFacing is calculated from mouse and standing
           position so that you are not constantly rotating after you move your head. */
        glm::vec3 DirectionOfWhereCameraIsFacing;
        glm::vec3 RiftDirectionOfWhereCameraIsFacing;
        glm::vec3 RiftCenterOfWhatIsBeingLookedAt;
        glm::vec3 PositionOfEyesOfPerson;
        glm::vec3 CenterOfWhatIsBeingLookedAt;
        glm::vec3 CameraPositionDelta;

        RiftDirectionOfWhereCameraIsFacing = DirectionOfWhereCameraIsFacing;
        RiftDirectionOfWhereCameraIsFacing = glm::rotate(CurrentOrientation, DirectionOfWhereCameraIsFacing);

        PositionOfEyesOfPerson += CameraPositionDelta;
        CenterOfWhatIsBeingLookedAt = PositionOfEyesOfPerson + DirectionOfWhereCameraIsFacing * 1.0f;
        RiftCenterOfWhatIsBeingLookedAt = PositionOfEyesOfPerson + RiftDirectionOfWhereCameraIsFacing * 1.0f;

        RiftView = glm::lookAt(PositionOfEyesOfPerson, RiftCenterOfWhatIsBeingLookedAt, DirectionOfUpForPerson);
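
    One hedged guess: the tracked quaternion rotates the head relative to the world, while the view rotation needs the inverse (world relative to head), so a mirrored yaw often means the orientation is applied un-inverted. Two things worth trying, sketched with the question's own glm types:

        // Sketch: use the inverse of the tracked orientation for the view rotation.
        CurrentOrientation = glm::inverse(glm::quat(CurrentEulerAngles));

        // Or request counter-clockwise angles from the SDK and compare:
        // GetEulerAngles<..., OVR::Rotate_CCW, OVR::Handed_R>(...)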

    Read the article

  • Setting white balance and exposure mode for the iPhone camera + enum defaults

    - by Spectravideo328
    I am using the back camera of an iPhone 4 and doing the standard and lengthy process of creating an AVCaptureSession and adding an AVCaptureDevice to it. Before attaching the AVCaptureDeviceInput of that camera to the session, I am testing my understanding of white balance and exposure, so I am trying this:

        [self.theCaptureDevice lockForConfiguration:nil];
        [self.theCaptureDevice setWhiteBalanceMode:AVCaptureWhiteBalanceModeLocked];
        [self.theCaptureDevice setExposureMode:AVCaptureExposureModeContinuousAutoExposure];
        [self.theCaptureDevice unlockForConfiguration];

    1. Given that the various options for white balance mode are in an enum, I would have thought that the default is always zero, since the enum typedef variable was never assigned a value. I am finding out, if I breakpoint and po the values in the debugger, that the default white balance mode is actually set to 2. Unfortunately, the AVCaptureDevice header file does not say what the defaults are for the different camera settings.
    2. This might sound silly, but can I assume that once I stop the app, all settings for white balance and exposure mode will go back to their defaults? So that if I start another app right after, the camera device is not somehow stuck with those "hardware settings"?

    Read the article

  • Apply Quaternion to Camera in libGDX

    - by Alex_Hyzer_Kenoyer
    I am trying to rotate my camera using a Quaternion in libGDX. I have a Quaternion created and being manipulated, but I have no idea how to apply it to the camera; everything I've tried hasn't moved the camera at all. Here is how I set up the rotation Quaternion:

        public void rotateX(float amount) {
            tempQuat.set(tempVector.set(1.0f, 0.0f, 0.0f), amount * MathHelper.PIOVER180);
            rotation = rotation.mul(tempQuat);
        }

        public void rotateY(float amount) {
            tempQuat.set(tempVector.set(0.0f, 1.0f, 0.0f), amount * MathHelper.PIOVER180);
            rotation = tempQuat.mul(rotation);
        }

    Here is how I am trying to update the camera (the same update method as the original libGDX version, but I added the part about the rotation matrix at the top):

        public void update(boolean updateFrustum) {
            float[] matrix = new float[16];
            rotation.toMatrix(matrix);
            Matrix4 m = new Matrix4();
            m.set(matrix);
            camera.view.mul(m);
            //camera.direction.mul(m).nor();
            //camera.up.mul(m).nor();

            float aspect = camera.viewportWidth / camera.viewportHeight;
            camera.projection.setToProjection(Math.abs(camera.near), Math.abs(camera.far), camera.fieldOfView, aspect);
            camera.view.setToLookAt(camera.position, tempVector.set(camera.position).add(camera.direction), camera.up);
            camera.combined.set(camera.projection);
            Matrix4.mul(camera.combined.val, camera.view.val);

            if (updateFrustum) {
                camera.invProjectionView.set(camera.combined);
                Matrix4.inv(camera.invProjectionView.val);
                camera.frustum.update(camera.invProjectionView);
            }
        }

    Read the article

  • How to get pixel information inside a fragment shader?

    - by user697111
    In my fragment shader I can load a texture, then do this:

        uniform sampler2D tex;

        void main(void) {
            vec4 color = texture2D(tex, gl_TexCoord[0].st);
            gl_FragColor = color;
        }

    That sets the current pixel to the color value of the texture. I can modify these values, etc., and it works well. But a few questions. How do I tell "which" pixel I am? For example, say I want to set pixel (100,100) (x,y) to red and everything else to black. How do I do: "if currentSelf.Position() == (100,100) then color = red, else color = black"? I know how to set colors, but how do I get "my" location? Secondly, how do I get values from a neighboring pixel? I tried this:

        vec4 nextColor = texture2D(tex, gl_TexCoord[1].st);

    But it's not clear what that is returning. If I'm pixel (100,100), how do I get the values from (101,100) or (100,101)?
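
    In GLSL (matching the question's snippet), gl_FragCoord.xy gives the fragment's window-space position, and neighbors are fetched by offsetting the texture coordinate by one texel. A hedged sketch, assuming a u_texSize uniform that the application uploads with the texture dimensions:

        uniform sampler2D tex;
        uniform vec2 u_texSize;   // texture width/height, uploaded by the application

        void main(void) {
            // gl_FragCoord.xy is this fragment's window position; pixel centers
            // fall on half-integers, so floor() recovers the integer index.
            vec2 pixel = floor(gl_FragCoord.xy);

            // Value of the neighbor one pixel to the right, e.g. (101,100):
            vec4 rightNeighbor = texture2D(tex, gl_TexCoord[0].st + vec2(1.0 / u_texSize.x, 0.0));

            if (pixel == vec2(100.0, 100.0))
                gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);   // pixel (100,100): red
            else
                gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0);   // everything else: black
        }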

    Read the article

  • GL_MULTISAMPLING_ARB is not defined

    - by user146780
    I downloaded this: http://www.videotutorialsrock.com/opengl2.exe. I'm trying to follow a tutorial to set up multisampling, but GL_MULTISAMPLING_ARB is not defined, and many necessary wgl functions do not seem to be defined either. What's wrong exactly? Thanks
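
    A hedged note: the ARB token is actually spelled GL_MULTISAMPLE_ARB, and on Windows it lives in glext.h (the wgl extension entry points such as wglChoosePixelFormatARB come from wglext.h), neither of which the stock gl.h pulls in. An extension loader such as GLEW is the usual shortcut:

        #include <GL/glew.h>   /* include before gl.h; supplies the ARB tokens */

        /* after creating the GL context: */
        glewInit();
        if (GLEW_ARB_multisample)
            glEnable(GL_MULTISAMPLE_ARB);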

    Read the article

  • Determining line orientation using vertex shaders

    - by Brett
    Hi, I want to be able to calculate the direction of a line in eye coordinates and store this value for every pixel on the line, using a vertex and fragment shader. My idea was to calculate the direction gradient using atan2(Gy, Gx) after a modelview transformation for each pair of vertices, then quantize this value as a color intensity to pass to the fragment shader. How can I get access to the positions of pairs of vertices to achieve this, or is there another method I should use? Thanks
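
    A vertex shader only ever sees one vertex, so the usual workaround (sketched here, hedged; the attribute name is hypothetical and must be filled per vertex by the application) is to pass the segment's other endpoint as an extra attribute and derive the direction in the shader:

        // Per-vertex attribute carrying the segment's other endpoint.
        attribute vec3 otherEnd;

        varying float lineAngle;   // direction handed on to the fragment shader

        void main(void) {
            vec4 p0 = gl_ModelViewMatrix * gl_Vertex;
            vec4 p1 = gl_ModelViewMatrix * vec4(otherEnd, 1.0);
            vec2 g = p1.xy - p0.xy;
            lineAngle = atan(g.y, g.x);   // GLSL's two-argument atan is atan2
            gl_Position = gl_ProjectionMatrix * p0;
        }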

    Read the article

  • gluLookAt vectors and FPS-style camera

    - by Kevin Pamplona
    I am attempting to implement an FPS-style camera by updating three vectors: EYE, DIR, UP. These are the same vectors used by gluLookAt (since gluLookAt is specified by the position of the camera, the direction it is looking at, and an up vector). I have already implemented the left-right and up-down strafing movements, but I'm having a lot of trouble understanding the math behind making the camera look around while remaining stationary. In this case, the EYE vector remains the same, while I must update DIR and UP. Below is the code I tried, but it doesn't seem to work properly. Any suggestions? Thanks.

        void Transform::left(float degrees, vec3& dir, vec3& up) {
            vec3 axis;
            axis = glm::normalize(up);
            mat3 R = rotate(-degrees, axis);
            dir = R*dir;
            dir = R*up;
        };

        void Transform::up(float degrees, vec3& dir, vec3& up) {
            vec3 axis;
            axis = glm::normalize(glm::cross(dir, up));
            mat3 R = rotate(-degrees, axis);
            dir = R*dir;
            up = R*up;
        };
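
    A hedged observation: in left(), the second assignment overwrites dir with the rotated up vector instead of updating up, so the view direction gets clobbered on every horizontal look. A corrected sketch with the same glm types:

        void Transform::left(float degrees, vec3& dir, vec3& up) {
            mat3 R = rotate(-degrees, glm::normalize(up));
            dir = R * dir;
            up  = R * up;   // was: dir = R * up;  (this clobbered dir)
        }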

    Read the article
