Search Results

Search found 2513 results on 101 pages for 'opengl 3'.


  • uniform generation of 3D points on cylinder/cone

    - by Myx
    Hello: I wish to randomly and uniformly generate points on a cylinder and a cone (separately). The cylinder is defined by its center, its radius and height. Same specifications for the cone. I am able to get the bounding box for each shape so I was thinking of generating points within the bounding box. However, I'm not sure how to project them onto the cylinder/cone or if this is the best idea. Any suggestions? Thanks.
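
    One way to do it directly, rather than projecting points from the bounding box (a C++ sketch; the axis is assumed to run along +Z through the given center, and "on" is taken to mean the lateral surface): pick a uniform angle, pick the cylinder's height coordinate uniformly, and for the cone draw the axial parameter through a square-root transform, since the lateral area grows linearly toward the base. Caps, if needed, would have to be sampled separately in proportion to their area.

        #include <cmath>
        #include <random>

        struct Vec3 { float x, y, z; };

        // Uniform point on the lateral surface of a cylinder (axis along +Z,
        // centred at c). Angle and height are both uniform.
        Vec3 randomOnCylinder(std::mt19937& rng, Vec3 c, float radius, float height) {
            std::uniform_real_distribution<float> uni(0.0f, 1.0f);
            float theta = 2.0f * 3.14159265f * uni(rng);
            float z     = (uni(rng) - 0.5f) * height;          // uniform along the axis
            return { c.x + radius * std::cos(theta),
                     c.y + radius * std::sin(theta),
                     c.z + z };
        }

        // Uniform point on the lateral surface of a cone (apex assumed at
        // c + (0,0,height/2), base of the given radius at c - (0,0,height/2)).
        // The surface area grows linearly with distance from the apex, so the
        // axial parameter needs a square-root transform to stay uniform.
        Vec3 randomOnCone(std::mt19937& rng, Vec3 c, float radius, float height) {
            std::uniform_real_distribution<float> uni(0.0f, 1.0f);
            float theta = 2.0f * 3.14159265f * uni(rng);
            float t     = std::sqrt(uni(rng));                 // 0 at apex, 1 at base
            float r     = radius * t;
            float z     = height * (0.5f - t);                 // apex at +h/2, base at -h/2
            return { c.x + r * std::cos(theta),
                     c.y + r * std::sin(theta),
                     c.z + z };
        }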

    Read the article

  • BitmapFontAtlas cocos2d performance issue

    - by meboz
    This question was also asked in the cocos2d forums, but they are a little slower than here. Hi, I'm trying to resolve a performance issue in my game. I have all of my game images on one spritesheet. I now have a score label, for which I have generated a font file with the Hiero tool. Ideally I'd like to update the score label with the current score on every update. However, there seems to be a significant performance hit when doing so, which I believe is the result of the game-image texture and the font texture swapping every update. Can anyone suggest a way to avoid this? Possibly by combining the font images with the game images in one image file to avoid the texture swapping? Cheers

    Read the article

  • How to get a flat, non-interpolated color when using vertex shaders.

    - by Brett
    Hi, Is there a way to achieve this? If I draw lines like this:

        glShadeModel(GL_FLAT);
        glBegin(GL_LINES);
        glColor3f(1.0, 1.0, 0.0);
        glVertex3fv(bottomLeft);
        glVertex3fv(topRight);
        glColor3f(1.0, 0.0, 0.0);
        glVertex3fv(topRight);
        glVertex3fv(topLeft);
        // ... (draw a square) ...
        glEnd();

    I get the desired result (a different colour for each edge) but I want to be able to calculate the fragment values in a shader. If I do the same after setting up my shader program I always get interpolated colors between vertices. Is there a way around this? (would be even better if I could get the same results using quads) Thanks
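
    If a GLSL version of 1.30 or later is available (an assumption; older GLSL has no way to stop a varying from being interpolated), the "flat" interpolation qualifier gives every fragment of a primitive the provoking vertex's value instead of an interpolated one. A sketch of the shader pair, assuming a compatibility context so the deprecated built-ins (gl_Color, gl_Vertex, and so on) are still present:

        // Vertex shader: pass the per-vertex colour through a flat varying.
        const char* vertSrc =
            "#version 130\n"
            "flat out vec3 edgeColor;\n"
            "void main() {\n"
            "    edgeColor   = gl_Color.rgb;\n"
            "    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;\n"
            "}\n";

        // Fragment shader: the flat input arrives un-interpolated.
        const char* fragSrc =
            "#version 130\n"
            "flat in vec3 edgeColor;\n"
            "void main() {\n"
            "    gl_FragColor = vec4(edgeColor, 1.0);\n"
            "}\n";

    By default the value is taken from the provoking vertex, the last vertex of each line or triangle (and of each quad), which matches the order the colours are issued in the snippet above; glProvokingVertex (core since GL 3.2) can change that convention.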

    Read the article

  • Handling inverse kinematics: animation blending or math?

    - by meds
    I've been working for the past four days on inverse kinematics for my game engine. I'm working on a game with a shoestring budget, so when the idea of inverse kinematics came up I knew I had to make it such that the 3D model's bones would be mathematically adjusted to appear to step on objects. This is causing some serious problems with my animation: after IK was technically implemented, the animations started looking quite bad when the character was walking up inclines or steps, even though mathematically the stepping was correct and was even smoothly interpolated. So I was wondering: is it actually possible to get a smooth, efficient inverse kinematics system based exclusively on math, where bones are changed directly, or is this a wild goose chase and I should either solve the inverse kinematics problem with animation blending or not do it at all?
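
    For what it's worth, purely analytic IK is workable for a two-bone limb; the sketch below (C++, not taken from any particular engine) solves a leg chain in the leg's plane with the law of cosines. The jarring look often comes from replacing the authored pose outright, so a common remedy is to blend the analytic result back toward the animation over a short window rather than treating blending and math as an either/or choice.

        #include <cmath>

        // Analytic two-bone IK in 2D (e.g. hip->knee->ankle projected into the
        // leg's plane). Given bone lengths l1, l2 and a target (tx, ty) relative
        // to the root joint, returns the two joint angles in radians.
        bool solveTwoBoneIK(float l1, float l2, float tx, float ty,
                            float& rootAngle, float& midAngle) {
            float d2 = tx * tx + ty * ty;
            float d  = std::sqrt(d2);
            if (d > l1 + l2 || d < std::fabs(l1 - l2)) return false;   // target unreachable
            float cosMid = (d2 - l1 * l1 - l2 * l2) / (2.0f * l1 * l2);
            if (cosMid >  1.0f) cosMid =  1.0f;                        // guard against rounding
            if (cosMid < -1.0f) cosMid = -1.0f;
            midAngle  = std::acos(cosMid);                             // knee bend
            rootAngle = std::atan2(ty, tx)
                      - std::atan2(l2 * std::sin(midAngle), l1 + l2 * std::cos(midAngle));
            return true;
        }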

    Read the article

  • Using glRotate and glTranslate with collision detection.

    - by Cetra
    Hey guys, say I use glRotate to transform the current view based on some arbitrary user input (i.e., if the left key is pressed, then rtri += 2.5f):

        glRotatef(rtri, 0.0f, 1.0f, 0.0f);

    Then I draw the triangle in the rotated position:

        glBegin(GL_TRIANGLES);                // Drawing using triangles
        glVertex3f( 0.0f,  1.0f, 0.0f);       // Top
        glVertex3f(-1.0f, -1.0f, 0.0f);       // Bottom left
        glVertex3f( 1.0f, -1.0f, 0.0f);       // Bottom right
        glEnd();                              // Finished drawing the triangle

    How do I get the resulting transformed vertices for use in collision detection? Or will I have to apply the transform manually myself, thus doubling up the work? The reason I ask is that I wouldn't mind implementing display lists.
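
    The fixed-function pipeline doesn't hand the transformed vertices back to the application, so collision code has to apply the same transform on the CPU (or read the current matrix with glGetFloatv(GL_MODELVIEW_MATRIX, m) and multiply the vertices by it); display lists don't change that. A minimal C++ sketch of the Y-axis rotation that glRotatef(rtri, 0, 1, 0) performs:

        #include <cmath>

        struct Vec3 { float x, y, z; };

        // Apply the same rotation glRotatef(degrees, 0, 1, 0) applies, so the
        // collision code sees the triangle where it is actually drawn.
        Vec3 rotateAboutY(Vec3 v, float degrees) {
            float rad = degrees * 3.14159265f / 180.0f;
            float c = std::cos(rad), s = std::sin(rad);
            return { c * v.x + s * v.z,     // standard OpenGL rotation about +Y
                     v.y,
                    -s * v.x + c * v.z };
        }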

    Read the article

  • Lighting does not work with gluSphere

    - by badcodenotreat
    This is a simple issue that I'm somewhat ashamed to ask for help on. I'm making a simple call to gluSphere to render a sphere, however, it does not light properly even though I'm pretty sure I added the normals and lighting correctly. If, however, I add a texture, the model lights normally, except it seems to be always SMOOTH, and I cannot change it to flat. This is the lighting code in my init() function:

        gl.glLightfv( GL.GL_LIGHT0, GL.GL_AMBIENT , AMBIENT_LIGHT, 0 );
        gl.glLightfv( GL.GL_LIGHT0, GL.GL_DIFFUSE , DIFFUSE_LIGHT, 0 );
        gl.glLightfv( GL.GL_LIGHT0, GL.GL_POSITION, light_pos   , 0 );
        gl.glEnable ( GL.GL_LIGHT0 );
        gl.glEnable ( GL.GL_LIGHTING );

    This is my sphere code in my display() function:

        gl.glColor3d(1.0, 1.0, 1.0);
        glu.gluQuadricDrawStyle  (quad, GLU.GLU_FILL);
        glu.gluQuadricNormals    (quad, GLU.GLU_FLAT);
        glu.gluQuadricOrientation(quad, GLU.GLU_OUTSIDE);
        glu.gluSphere(quad, 1.0, lat, lon);

    Please advise.
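
    The material setup isn't shown, so the following is only a guess at the usual fixed-function suspects (written in plain C-style GL; the JOGL calls are the same apart from the gl. prefix): with lighting enabled, glColor* is ignored unless GL_COLOR_MATERIAL is on or a material is set explicitly, and flat versus smooth shading is chosen by glShadeModel, not by GLU_FLAT/GLU_SMOOTH alone (those only control how GLU generates the normals, one per facet versus one per vertex).

        glShadeModel(GL_FLAT);               // flat vs. smooth shading is decided here
        glEnable(GL_NORMALIZE);              // keep normals unit length if anything scales
        glEnable(GL_COLOR_MATERIAL);         // let glColor3d(1,1,1) drive the material...
        glColorMaterial(GL_FRONT, GL_AMBIENT_AND_DIFFUSE);
        // ...or set the material explicitly instead:
        // GLfloat white[] = { 1.0f, 1.0f, 1.0f, 1.0f };
        // glMaterialfv(GL_FRONT, GL_AMBIENT_AND_DIFFUSE, white);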

    Read the article

  • How to implement "circular side-scrolling" in my game?

    - by Mr.Gando
    I'm developing a game, and a big part of it is scrolling a "circular" background (the right end of the background image connects with the left start of the background image). It should be something like this: (entity moving, with an arrow showing where the background should start to repeat). This is so an entity can keep walking while the background repeats itself over and over again. I'm not working with tile maps; the background is a simple texture (400x300 px). Could anyone point me to a link, or tell me the best way to accomplish this? Thanks a lot.
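
    One way to do it without tile maps (an immediate-mode GL sketch, assuming a screen-space orthographic projection; the same arithmetic carries over to a sprite framework): keep a scroll offset, wrap it with fmod at the texture width, and draw the background twice so the second copy fills the gap as the first one slides off. Two copies are enough as long as the viewport is no wider than the texture; otherwise draw more copies.

        #include <cmath>
        #include <GL/gl.h>   // or the framework's own GL header

        // scrollX is how far the world has scrolled, in pixels;
        // texW/texH are the background size (400x300 here).
        void drawWrappedBackground(GLuint tex, float scrollX, float texW, float texH) {
            float offset = std::fmod(scrollX, texW);
            if (offset < 0.0f) offset += texW;    // keep the offset in [0, texW)

            glBindTexture(GL_TEXTURE_2D, tex);
            for (int copy = 0; copy < 2; ++copy) {
                float x = -offset + copy * texW;  // first copy, then the wrap-around copy
                glBegin(GL_QUADS);
                glTexCoord2f(0.0f, 0.0f); glVertex2f(x,        0.0f);
                glTexCoord2f(1.0f, 0.0f); glVertex2f(x + texW, 0.0f);
                glTexCoord2f(1.0f, 1.0f); glVertex2f(x + texW, texH);
                glTexCoord2f(0.0f, 1.0f); glVertex2f(x,        texH);
                glEnd();
            }
        }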

    Read the article

  • Shape object in Processing, translate individual shapes.

    - by Zain
    I am relatively new to Processing but have been working in Java for about 2 years now. I am facing difficulty, though, with the translate() function for objects, as well as with objects in general in Processing. I went through the examples and tried to replicate the way they instantiated the objects, but I cannot even get the shapes to appear on the screen, much less move them. I instantiate the objects into an array using a nested for loop and expect a grid of the objects to be rendered. However, nothing at all is rendered. My nested for loop structure to instantiate the tiles:

        for (int i = 0; i < 102; i++) {
          for (int j = 0; j < 102; j++) {
            tiles[i][j] = new tile(i, 0, j);
            tiles[i][j].display();
          }
        }

    And the constructor for the tile class:

        tile(int x, int y, int z) {
          this.x = x;
          this.y = y;
          this.z = z;
          beginShape();
          vertex(x, y, z);
          vertex(x + 1, y, z);
          vertex(x + 1, y, z - 1);
          vertex(x, y, z - 1);
          endShape();
        }

    Nothing is rendered at all when this runs. Furthermore, if this is of any concern, my translations (movements) are done in a method I wrote for the tile class called move(), which simply calls translate(). Is this the correct way? How should one approach this? I can't seem to understand how to render/create/translate individual objects/shapes. Thanks for any help any of you are able to provide!

    Read the article

  • NSOpenGLView in NSSplitView

    - by Remizorrr
    When I put an NSOpenGLView in an NSSplitView, a problem occurs while dragging the splitter: the NSOpenGLView and the NSSplitView resize asynchronously. I found a solution in an Apple mailing list thread (http://developer.apple.com/mac/library/samplecode/GLChildWindowDemo/Introduction/Intro.html) that uses some Carbon calls, but now I get a link error (only in release mode). So I've got two questions: is there any Cocoa way to fix the splitter/GL problem? If not, how can I fix the Carbon linker errors in release mode?

    Read the article

  • Using glOrtho to view Side, Front, Top perspectives of a 3D scene

    - by talldan
    Dear all, I'm building a game level editing app as part of a university project. In my application I have multiple viewports: a perspective viewport and three orthographic views, all set up to view the same scene. I've successfully set up the orthographic views and can translate and scale them to mimic scrolling and zooming. Unfortunately, I'm having one problem: my scene still has three dimensions, so objects at certain depths are clipped in the orthographic views when they fall outside my clipping volume. Most 3D authoring tools and level editors let you view all objects in orthographic mode regardless of depth. I guess what I need to do is scale my scene in the appropriate dimension so that all values lie between -1 and 1. Is there a straightforward way of going about this, or is there a different, better approach? Thanks very much for your help, Dan
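
    Rescaling the scene may be more work than needed: orthographic projections don't pay the depth-precision penalty that perspective ones do, so a simpler route is usually to push the orthographic near/far planes out far enough to enclose the whole level. A sketch; the centerX/centerY/halfW/halfH parameters stand in for the pan/zoom state already working, and sceneExtent is an assumed bound on the level size, none of them from the post:

        #include <GL/gl.h>   // <OpenGL/gl.h> on macOS

        // Hypothetical helper for one orthographic viewport.
        void setOrthoView(double centerX, double centerY,
                          double halfW, double halfH, double sceneExtent) {
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            glOrtho(centerX - halfW, centerX + halfW,
                    centerY - halfH, centerY + halfH,
                    -sceneExtent, sceneExtent);   // generous near/far: no depth clipping
            glMatrixMode(GL_MODELVIEW);
        }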

    Read the article

  • NaN when mixing float and GLfloat?

    - by carrots
    This often returns NaN ("Not a Number") depending on the input:

        #define PI 3.1415f

        GLfloat sineEaseIn(GLfloat ratio) {
            return 1.0f - cosf(ratio * (PI / 2.0f));
        }

    I tried making PI a few digits smaller to see if that would help. No dice. Then I thought it might be a datatype mismatch, but float and GLfloat seem to be equivalent:

        // gl.h
        typedef float GLfloat;
        // math.h
        extern float cosf( float );

    Is this a casting issue?
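
    It shouldn't be a casting issue: GLfloat is a plain typedef for float, so no conversion happens at all, and cosf of any finite argument lies in [-1, 1]. The NaN can only come out if the ratio going in is already NaN or infinite, so trapping the input is usually the quickest way to find the real source. A sketch (the fallback value is an assumption):

        #include <cmath>
        #include <cstdio>

        typedef float GLfloat;   // matches the typedef in gl.h; no real conversion happens

        GLfloat sineEaseIn(GLfloat ratio) {
            // cosf only returns NaN when its argument is already NaN or infinite,
            // so check the caller's value before suspecting the math.
            if (!std::isfinite(ratio)) {
                std::fprintf(stderr, "sineEaseIn: bad ratio %f\n", (double)ratio);
                return 0.0f;                      // assumed fallback for this sketch
            }
            return 1.0f - std::cos(ratio * (3.14159265f / 2.0f));
        }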

    Read the article

  • glCreateShaderObjectARB(GL_FRAGMENT_SHADER_ARB) crashes!

    - by gutsblow
    Hello there, I have an iMac with an ATI Radeon HD 2600, which supports GL_ARB_fragment_shader. But whenever I call that function, it crashes the program. Ironically, it works without any problems on a Windows installation on the same machine. I'm running OS X 10.6.2; can anyone please help me out? P.S. Vertex shaders work fine without any problem. Thank you!
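
    A crash inside an ARB entry point is often a missing current context or an unadvertised extension rather than the call itself, so one diagnostic (a sketch, not the original code) is to check both just before the first glCreateShaderObjectARB call:

        #include <cstdio>
        #include <cstring>
        #include <OpenGL/gl.h>   // <GL/gl.h> on other platforms

        bool fragmentShadersUsable() {
            const char* ext = reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
            if (!ext) {                                   // NULL means no current GL context
                std::fprintf(stderr, "no current OpenGL context\n");
                return false;
            }
            if (!std::strstr(ext, "GL_ARB_shader_objects") ||
                !std::strstr(ext, "GL_ARB_fragment_shader")) {
                std::fprintf(stderr, "ARB shader extensions not advertised\n");
                return false;
            }
            return true;
        }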

    Read the article

  • 2D distortion of a face from an image on iOS? (similar to Fat Booth etc.)

    - by Dominik Hadl
    I was just wondering if someone knows of a good library or tutorial on how to achieve a 2D distortion of a face from an image taken by the user. I would like to achieve an effect similar to the one in Fatify, Oldify, and all those Fat Booth apps, because I am creating an app where you throw something at the face, and I would like the face to jiggle and move when the object hits it. How should I do this?

    Read the article

  • How do I get the current color of a fragment?

    - by Mason Wheeler
    I'm trying to wrap my head around shaders in GLSL, and I've found some useful resources and tutorials, but I keep running into a wall for something that ought to be fundamental and trivial: how does my fragment shader retrieve the color of the current fragment? You set the final color by saying gl_FragColor = whatever, but apparently that's an output-only value. How do you get the original color of the input so you can perform calculations on it? That's got to be in a variable somewhere, but if anyone out there knows its name, they don't seem to have recorded it in any tutorial or documentation that I've run across so far, and it's driving me up the wall.
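
    There genuinely is no such variable in desktop GLSL of this era: the fragment shader cannot read the pixel already sitting in the framebuffer (that part of the pipeline is blending), and what usually passes for the "current color" is the interpolated vertex colour arriving as a varying. A sketch using the compatibility-profile built-ins, where gl_FrontColor written in the vertex shader shows up as gl_Color in the fragment shader:

        // Legacy GLSL pair (compatibility built-ins assumed).
        const char* vertSrc =
            "void main() {\n"
            "    gl_FrontColor = gl_Color;  // forward the per-vertex colour\n"
            "    gl_Position   = gl_ModelViewProjectionMatrix * gl_Vertex;\n"
            "}\n";

        const char* fragSrc =
            "void main() {\n"
            "    vec4 incoming = gl_Color;  // interpolated vertex colour\n"
            "    gl_FragColor  = vec4(incoming.rgb * 0.5, incoming.a);  // darken, as an example\n"
            "}\n";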

    Read the article

  • 3D spherical terrain with an 8-mesh sphere: the edges of the meshes are obviously seen and I'm not sure why

    - by Justin808
    Hi :) I'm working in Unity3D, but my issue is with 3D meshes. I'm hoping someone here can help or point me in the right direction. I have two versions of the code: http://www.pasteit4me.com/695002 (old) and http://www.pasteit4me.com/690003 (new). The old code makes a single-mesh sphere and creates a terrain on it. The new code makes an 8-mesh sphere and creates a terrain on it. In the new version the edges of the meshes are obviously visible and I'm not sure why. It looks like the edges are adjusted too much, almost 2-3 times more than they should have been. GenerateB() in the old code and Generate() in the new code create the sphere; MakeTerrain() in both creates the terrain. If I don't run the MakeTerrain() function, the new sphere looks like a solid mesh. I'm not sure where to start looking in the MakeTerrain() function in the new code to solve the issue :-/ Any ideas? An image of the issue is at http://img28.imageshack.us/img28/3784/screenshot20100611at850.png.

    Read the article

  • Problem Loading multiple textures using multiple shaders with GLSL

    - by paj777
    I am trying to use multiple textures in the same scene, but no matter what I try, the same texture is loaded for each object. This is what I am doing at the moment. I initialise each shader:

        rightWall.SendShaders("wall.vert", "wall.frag", "brick3.bmp", "wallTex", 0);
        demoFloor.SendShaders("floor.vert", "floor.frag", "dirt1.bmp", "floorTex", 1);

    The code in SendShaders is:

        GLuint vert, frag;
        glEnable(GL_DEPTH_TEST);
        glEnable(GL_TEXTURE_2D);

        char *vs = NULL, *fs = NULL;
        vert = glCreateShader(GL_VERTEX_SHADER);
        frag = glCreateShader(GL_FRAGMENT_SHADER);
        vs = textFileRead(vertFile);
        fs = textFileRead(fragFile);
        const char * ff = fs;
        const char * vv = vs;
        glShaderSource(vert, 1, &vv, NULL);
        glShaderSource(frag, 1, &ff, NULL);
        free(vs);
        free(fs);
        glCompileShader(vert);
        glCompileShader(frag);

        program = glCreateProgram();
        glAttachShader(program, frag);
        glAttachShader(program, vert);
        glLinkProgram(program);
        glUseProgram(program);

        LoadGLTexture(textureImage, texture);
        GLint location = glGetUniformLocation(program, textureName);
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, texture);
        glUniform1i(location, 0);

    And then in the main loop:

        rightWall.UseShader();
        rightWall.Draw();
        demoFloor.UseShader();
        demoFloor.Draw();

    Whichever object's shader is initialised last provides the texture that is used for both objects. Thank you for your time; I appreciate any comments.
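
    UseShader() isn't shown, so this is an assumption, but binding the texture only inside SendShaders() would explain the symptom: the second object's initialisation rebinds unit 0, and at draw time both objects sample whichever texture was bound last. Re-binding each object's own texture whenever its program is activated usually fixes it; a sketch (the helper name is hypothetical):

        #include <GL/gl.h>   // assumes the GL 2.0 entry points are loaded, as in the original code

        // Re-bind this object's texture every time its program is activated,
        // instead of only once at init time.
        void useShaderWithTexture(GLuint program, GLuint texture) {
            glUseProgram(program);
            glActiveTexture(GL_TEXTURE0);            // matches glUniform1i(location, 0)
            glBindTexture(GL_TEXTURE_2D, texture);   // this object's texture, every frame
        }

    Both samplers can keep using unit 0 this way, as long as each object rebinds its texture before it draws.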

    Read the article

  • Texture coordinates for a polygon and a square texture

    - by user146780
    Basically, I have a texture. I also have, let's say, an octagon (or any polygon). I find that octagon's bounding box. Let's say my texture is the size of the octagon's bounding box. How could I figure out the texture coordinates so that the texture maps onto it? To clarify: if you had a square of tin foil and cut the octagon out of it, you'd be left with a tin-foil-textured polygon. I'm just not sure how to figure it out for an arbitrary polygon. Thanks
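
    Since the texture is exactly the size of the bounding box, the texture coordinates fall out of a straight normalisation: u = (x - minX) / (maxX - minX) and v = (y - minY) / (maxY - minY) for every vertex of the polygon. A C++ sketch:

        #include <algorithm>
        #include <vector>

        struct Vec2 { float x, y; };

        // Map each polygon vertex into [0,1]^2 using the bounding box, so the
        // square texture covers the box and the polygon cuts its shape out of it.
        std::vector<Vec2> boundingBoxTexCoords(const std::vector<Vec2>& poly) {
            float minX = poly[0].x, maxX = poly[0].x;
            float minY = poly[0].y, maxY = poly[0].y;
            for (const Vec2& p : poly) {
                minX = std::min(minX, p.x); maxX = std::max(maxX, p.x);
                minY = std::min(minY, p.y); maxY = std::max(maxY, p.y);
            }
            std::vector<Vec2> uv;
            uv.reserve(poly.size());
            for (const Vec2& p : poly) {
                uv.push_back({ (p.x - minX) / (maxX - minX),     // u
                               (p.y - minY) / (maxY - minY) });  // v
            }
            return uv;
        }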

    Read the article

  • Adding a decal using multitexturing on an iPhone

    - by Axis
    I'm trying to overlay one image on top of another onto a simple quad. I set my bottom image as texture unit 0, and then my top image (which has a variable alpha) as texture unit 1. Unit 2 has mode GL_DECAL, which means the bottom texture should show up when the alpha is 0, and the top texture should show when the alpha is 1. But, only the top texture shows up and the bottom one doesn't appear at all. It's just white where the bottom texture should show through. glGetError() doesn't report any problems. Any help is appreciated. Thanks!

        glVertexPointer(3, GL_FLOAT, 0, boxVertices);
        glEnableClientState(GL_VERTEX_ARRAY);

        glClientActiveTexture(GL_TEXTURE0);
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);
        glTexCoordPointer(2, GL_FLOAT, 0, boxTextureCoords);
        glClientActiveTexture(GL_TEXTURE1);
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);
        glTexCoordPointer(2, GL_FLOAT, 0, boxTextureCoords);

        glClientActiveTexture(GL_TEXTURE0);
        glEnable(GL_TEXTURE_2D);
        glClientActiveTexture(GL_TEXTURE1);
        glEnable(GL_TEXTURE_2D);

        glClientActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, one.texture);
        glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
        glClientActiveTexture(GL_TEXTURE1);
        glBindTexture(GL_TEXTURE_2D, two.texture);
        glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL);

        glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
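
    One thing worth checking (an assumption; it may not be the only issue): glClientActiveTexture only selects which texture-coordinate array the gl*Pointer calls affect, while the server-side state (glEnable(GL_TEXTURE_2D), glBindTexture, glTexEnvi) is selected with glActiveTexture. As written, unit 1 may never actually be enabled or bound, so the enable/bind/texenv section would become:

        glActiveTexture(GL_TEXTURE0);                 // server-side unit 0
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, one.texture);
        glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);

        glActiveTexture(GL_TEXTURE1);                 // server-side unit 1
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, two.texture);
        glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL);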

    Read the article
