Search Results

Search found 2513 results on 101 pages for 'opengl'.

Page 27/101 | < Previous Page | 23 24 25 26 27 28 29 30 31 32 33 34  | Next Page >

  • Having an issue with OpenGL 1.0 on the HP Slate 7

    - by Roy Coder
    I have an issue on the HP Slate when trying to draw a line in the OpenGL draw method. It works on other devices, but on the HP Slate the green line is not drawn properly the way it is elsewhere. My code is:

        gl.glPushMatrix();
        gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
        gl.glVertexPointer(2, GL10.GL_FLOAT, 0, vertexFloatBuffer);
        gl.glColorMask(true, true, true, true);
        gl.glDepthMask(true);
        gl.glLineWidth(8.0f);
        setColor(gl);
        gl.glDrawArrays(GL10.GL_LINES, 0, fPoints.length / 2);
        gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
        gl.glPopMatrix();

    Where am I wrong, or what am I missing?

    Read the article

  • How to divide a window in OpenGL?

    - by tsubasa
    I want to divide the window into two parts, and draw a different thing in each part. How can I do that in OpenGL? (My actual problem is that I have already drawn a picture on the window. Now I want to free up some space so I can draw something else, but the original picture already takes up the whole window.) I'd appreciate it if anybody could help. Thanks.
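
    A minimal sketch of the usual approach: use glViewport to restrict drawing to each half of the window in turn. The window size and the two draw helpers here are assumptions, not code from the question:

        /* left half of an assumed width x height window */
        glViewport(0, 0, width / 2, height);
        drawFirstPicture();   /* hypothetical helper */

        /* right half */
        glViewport(width / 2, 0, width / 2, height);
        drawSecondPicture();  /* hypothetical helper */

    Note that glViewport only remaps drawing coordinates; if each half needs its own glClear, pair it with glScissor over the same rectangle.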

    Read the article

  • OpenGL texture shifted somewhat to the left when applied to a quad

    - by user308226
    I'm a bit new to OpenGL and I've been having a problem with using textures. The texture seems to load fine, but when I run the program, the texture displays shifted a couple of pixels to the left, with the section cut off by the shift appearing on the right side. I don't know if the problem here is in my TGA loader or if it's the way I'm applying the texture to the quad. Here is the loader:

        #include "texture.h"
        #include <iostream>

        GLubyte uncompressedheader[12] = {0,0, 2,0,0,0,0,0,0,0,0,0};
        GLubyte compressedheader[12]   = {0,0,10,0,0,0,0,0,0,0,0,0};

        TGA::TGA()
        {
        }

        //Private loading function called by LoadTGA. Loads compressed TGA files
        //Returns: TRUE on success, FALSE on failure
        bool TGA::LoadCompressedTGA(char *filename, ifstream &texturestream)
        {
            return false;
        }

        bool TGA::LoadUncompressedTGA(char *filename, ifstream &texturestream)
        {
            cout << "G position status:" << texturestream.tellg() << endl;
            texturestream.read((char*)header, sizeof(header)); //read into the file to get the tga header
            width  = (GLuint)header[1] * 256 + (GLuint)header[0]; //read and calculate width and save
            height = (GLuint)header[3] * 256 + (GLuint)header[2]; //read and calculate height and save
            bpp    = (GLuint)header[4]; //read bpp and save
            cout << bpp << endl;
            if((width <= 0) || (height <= 0) || ((bpp != 24) && (bpp != 32))) //check that the height, width, and bpp are valid
            {
                return false;
            }
            if(bpp == 24)
            {
                type = GL_RGB;
            }
            else
            {
                type = GL_RGBA;
            }
            imagesize = ((bpp/8) * width * height); //determine size in bytes of the image
            cout << imagesize << endl;
            imagedata = new GLubyte[imagesize]; //allocate memory for our imagedata variable
            texturestream.read((char*)imagedata, imagesize); //read according to the size of the image and save into imagedata
            for(GLuint cswap = 0; cswap < (GLuint)imagesize; cswap += (bpp/8)) //loop through and reverse the tga's BGR format to RGB
            {
                imagedata[cswap] ^= imagedata[cswap+2] ^= //1st Byte XOR 3rd Byte XOR 1st Byte XOR 3rd Byte
                imagedata[cswap] ^= imagedata[cswap+2];
            }
            texturestream.close(); //close ifstream because we're done with it
            cout << "image loaded" << endl;
            glGenTextures(1, &texID); // Generate OpenGL texture IDs
            glBindTexture(GL_TEXTURE_2D, texID); // Bind Our Texture
            glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); // Linear Filtered
            glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            glTexImage2D(GL_TEXTURE_2D, 0, type, width, height, 0, type, GL_UNSIGNED_BYTE, imagedata);
            delete imagedata;
            return true;
        }

        //Public loading function for TGA images. Opens TGA file and determines
        //its type, if any, then loads it and calls the appropriate function.
        //Returns: TRUE on success, FALSE on failure
        bool TGA::loadTGA(char *filename)
        {
            cout << width << endl;
            ifstream texturestream;
            texturestream.open(filename, ios::binary);
            texturestream.read((char*)header, sizeof(header)); //read the header bytes from the file
            //if it matches the uncompressed header's first 6 bytes, load it as uncompressed
            LoadUncompressedTGA(filename, texturestream);
            return true;
        }

        GLubyte* TGA::getImageData()
        {
            return imagedata;
        }

        GLuint& TGA::getTexID()
        {
            return texID;
        }

    And here's the quad:

        void Square::show()
        {
            glEnable(GL_TEXTURE_2D);
            glBindTexture(GL_TEXTURE_2D, texture.texID);
            //Move to offset
            glTranslatef( x, y, 0 );
            //Start quad
            glBegin( GL_QUADS );
            //Set color to white
            glColor4f( 1.0, 1.0, 1.0, 1.0 );
            //Draw square
            glTexCoord2f(0.0f, 0.0f); glVertex3f( 0, 0, 0 );
            glTexCoord2f(1.0f, 0.0f); glVertex3f( SQUARE_WIDTH, 0, 0 );
            glTexCoord2f(1.0f, 1.0f); glVertex3f( SQUARE_WIDTH, SQUARE_HEIGHT, 0 );
            glTexCoord2f(0.0f, 1.0f); glVertex3f( 0, SQUARE_HEIGHT, 0 );
            //End quad
            glEnd();
            //Reset
            glLoadIdentity();
        }
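
    A possible culprit worth checking: a texture that renders shifted, with the cut-off columns wrapping around to the other side, is the classic symptom of a row-alignment mismatch. For a 24-bpp image whose row size (width * 3 bytes) is not a multiple of 4, OpenGL's default unpack alignment of 4 skews every row. A minimal sketch of the usual fix, placed before the glTexImage2D call:

        /* tell GL that rows are tightly packed (1-byte aligned) */
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
        glTexImage2D(GL_TEXTURE_2D, 0, type, width, height, 0, type, GL_UNSIGNED_BYTE, imagedata);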

    Read the article

  • OpenGL embedded in GTK has colours badly displayed

    - by Sardathrion
    Note that this is a re-write now that I have more clues as to where the problem could be... I am creating a GTK GUI which contains two embedded OpenGL displays. Both use the same shader code (compiled once for each). On my normal hardware, this works fine. On a virtual machine running on the same hardware, I get horrible colours -- see images. I suspect that the shader code is at fault -- certainly, dropping in a simpler shader does make the problem go away. However, I do need both diffuse and spot lights in my shader, which makes it non-trivial. Has anyone seen this before?

    Read the article

  • Getting normal information from OpenGL render output

    - by okamiueru
    I'll try to keep this simple. I want a way to access the normal information of the scene from the framebuffer output (or similar), the same way one is able to access the depth buffer using glGetTexImage and GL_DEPTH_COMPONENT. I know I could set up a fragment shader which outputs the normal information in RGB color space, which could in turn be read from the rendered image. I'm wondering, however, if there is a way to do this within the OpenGL API. I'll clarify anything upon request as best as I can. Thank you.
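
    As far as the fixed pipeline goes, there is no normal analogue of GL_DEPTH_COMPONENT, so the shader route the poster already describes appears to be the standard one. A minimal sketch of that route (the shader string and buffer names are illustrative, not from the question):

        /* minimal fragment shader (illustrative) that writes eye-space normals,
           remapped from [-1,1] to [0,1], into the color buffer */
        static const char *normalFragmentShader =
            "varying vec3 vNormal;\n"              /* set by the vertex shader */
            "void main() {\n"
            "    vec3 n = normalize(vNormal);\n"
            "    gl_FragColor = vec4(n * 0.5 + 0.5, 1.0);\n"
            "}\n";

        /* after rendering with it, read the encoded normals back */
        glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, buffer);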

    Read the article

  • 2D Engine scrolling on OpenGL via hardware?

    - by drudru
    Hi, I'm using OpenGL as the bottom end for a 2D tiling engine. When everything is 2D, it is simple to optimize certain issues. For example, scrolling: if I know a certain section of the screen needs to scroll off the bottom, then I can just blit over that portion. I'm even moving more than 1 pixel at a time. Without explicit hardware support (think old Nintendo hardware), this requires a lot of pixel writes. An on-chip bitblt would be the next best thing. Essentially, I'm looking at how I can optimize my GL calls to use VRAM texture renders as efficient hardware blits. Is it possible to have GL scroll the framebuffer, or should I just resign myself to double-buffering and re-rendering the entire scene each frame? Thanks
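
    For what it's worth, legacy OpenGL can copy a region of the framebuffer onto itself, which amounts to a hardware blit. A minimal sketch, assuming a pixel-aligned orthographic projection and an assumed scrollPixels amount:

        /* copy the surviving region of the color buffer, shifted down */
        glRasterPos2i(0, 0);                      /* destination corner */
        glCopyPixels(0, scrollPixels,             /* source corner */
                     screenWidth, screenHeight - scrollPixels,
                     GL_COLOR);
        /* then redraw only the newly exposed strip at the top */

    Whether this actually beats re-rendering depends on the driver; on modern GPUs, redrawing textured quads each frame is often at least as fast.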

    Read the article

  • OpenGL ES transparent fog in Android

    - by Sponge
    I was wondering why the fog I use in OpenGL ES on my Android phone isn't transparent when I set the color's alpha to 0. I set the background to transparent and it works fine, and the Color class and its toFloatBuffer() method work fine for my meshes, but when I set the fog color to transparent, that is ignored. Here is the basic code I use for fog in the onSurfaceCreated() method of my renderer:

        gl.glFogf(GL10.GL_FOG_MODE, GL10.GL_LINEAR);
        gl.glFogf(GL10.GL_FOG_START, 4.0f);
        gl.glFogf(GL10.GL_FOG_END, 10.0f);
        gl.glFogfv(GL10.GL_FOG_COLOR, new Color(0,0,0,0).toFloatBuffer());
        gl.glEnable(GL10.GL_FOG);
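
    A hedged note on why this might happen: fixed-function fog blends only the fragment's RGB toward the fog color and leaves alpha untouched, so a transparent fog color cannot fade geometry out. The usual workaround is to match the fog color to the clear color, sketched here with the desktop C bindings of the same calls:

        /* fog can only blend toward a color, so make it the clear color */
        GLfloat fogColor[4] = { 0.0f, 0.0f, 0.0f, 1.0f }; /* assumed background */
        glClearColor(fogColor[0], fogColor[1], fogColor[2], 1.0f);
        glFogfv(GL_FOG_COLOR, fogColor);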

    Read the article

  • Freeglut (OpenGL) in C

    - by user1832149
    I would like to ask for a little help with my homework. I'm studying at a university in Debrecen and I would like to be a programmer, but I'm stuck with this homework. It's an OpenGL project in C (freeglut). The task is about window-to-viewport transformation: draw graphs of functions, as shown below. The individual views (viewports) are defined by four different rectangles, so one window is split into 4 pieces, with one piece per function. The functions are: f(x) = x^2, g(x) = x^3, h(x) = sin(x), i(x) = cos(x). Here's a picture of what I need to make. How would I approach such a task?
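
    A minimal sketch of the usual approach, assuming a freeglut window of size w x h and <math.h>: give each function its own viewport quadrant, set a 2D projection that frames the curve, and plot it as a line strip. The plot helper and the squared/cubed functions are illustrative assumptions:

        /* plot one function into the current viewport */
        void plot(double (*f)(double), double x0, double x1, double y0, double y1)
        {
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            gluOrtho2D(x0, x1, y0, y1);   /* window-to-viewport mapping */
            glBegin(GL_LINE_STRIP);
            for (int i = 0; i <= 200; ++i) {
                double x = x0 + (x1 - x0) * i / 200.0;
                glVertex2d(x, f(x));
            }
            glEnd();
        }

        /* in the display callback: one quadrant per function */
        glViewport(0,   h/2, w/2, h/2); plot(squared, -2.0, 2.0,  0.0, 4.0); /* f(x)=x^2 */
        glViewport(w/2, h/2, w/2, h/2); plot(cubed,   -2.0, 2.0, -8.0, 8.0); /* g(x)=x^3 */
        glViewport(0,   0,   w/2, h/2); plot(sin, -6.28, 6.28, -1.0, 1.0);
        glViewport(w/2, 0,   w/2, h/2); plot(cos, -6.28, 6.28, -1.0, 1.0);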

    Read the article

  • Drawing an unfilled rectangle shape in C++ OpenGL

    - by Bahaa
    I want to draw an unfilled rectangle shape in OpenGL using the C++ programming language, but when I use glBegin(GL_QUADS) or glBegin(GL_POLYGON), the resulting shape is filled, and I want it unfilled. How can I draw an unfilled rectangle?

        void draweRect(void)
        {
            glClear(GL_COLOR_BUFFER_BIT);
            glColor3f(0.0, 0.0, 1.0);
            glLineWidth(30);
            glBegin(GL_POLYGON);
            glVertex2i(50, 90);
            glVertex2i(100, 90);
            glVertex2i(100, 150);
            glVertex2i(50, 150);
            glEnd();
            glFlush();
        }
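
    A minimal sketch of two common fixes, using the same vertices as above: draw the outline directly with GL_LINE_LOOP, or keep GL_POLYGON but switch the polygon rasterization mode to lines.

        /* option 1: outline primitive */
        glBegin(GL_LINE_LOOP);
        glVertex2i(50, 90);
        glVertex2i(100, 90);
        glVertex2i(100, 150);
        glVertex2i(50, 150);
        glEnd();

        /* option 2: render filled primitives as wireframe */
        glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);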

    Read the article

  • OpenGL, draw two polygons at the same time (by mouse clicks)

    - by YoungSalafi
    I'm trying to draw 2 polygons at the same time, depending on user input from the OpenGL screen, so I made 2 arrays, each of which will carry the vertices of one polygon. I think my logic is right, but the program still prints only one polygon, and deletes the old polygon if you draw a polygon again, and it's acting weird too. Please check the code yourself; here it is (P.S. don't mind the delete function right now, I know it's missing something):

        #include <windows.h>
        #include <gl/gl.h>
        #include <gl/glut.h>

        void Draw();
        void Set_Transformations();
        void Initialize(int argc, char *argv[]);
        void OnKeyPress(unsigned char key, int x, int y);
        void DeleteVer();
        void MouseClick(int bin, int state, int x, int y);
        void GetOGLPos(int x, int y, float* arrY, float* arrX);
        void DrawPolygon(float* arrX, float* arrY);

        float xPos[20];
        float yPos[20];
        float xPos2[20];
        float yPos2[20];
        float fx = 0, fy = 0;
        float size = 10;
        int count = 0;
        bool done = false;
        bool flag = true;

        void Initialize(int argc, char *argv[])
        {
            glutInit(&argc, argv);
            glutInitDisplayMode(GLUT_RGBA);
            glutInitWindowPosition(100, 100);
            glutInitWindowSize(600, 600);
            glutCreateWindow("OpenGL Lab1");
            Set_Transformations();
            glutDisplayFunc(Draw);
            glutMouseFunc(MouseClick);
            glutKeyboardFunc(OnKeyPress);
            glutMainLoop();
        }

        void Set_Transformations()
        {
            glClearColor(1, 1, 1, 1);
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            gluOrtho2D(-200, 200, -200, 200);
        }

        void OnKeyPress(unsigned char key, int x, int y)
        {
            if (key == 27)
                exit(0);
            switch (key)
            {
            case 13: //enter key: it will draw
                done = true;
                glutPostRedisplay();
                flag = !flag; //this flag switches to the other array that the vertices
                              //will be stored in, in order to draw the second polygon
                break;
            }
        }

        void MouseClick(int button, int state, int x, int y)
        {
            switch (button)
            {
            case GLUT_RIGHT_BUTTON:
                if (state == GLUT_DOWN)
                {
                    if (count > 0)
                    {
                        DeleteVer(); //don't mind this right now
                    }
                }
                break;
            case GLUT_LEFT_BUTTON:
                if (state == GLUT_DOWN)
                {
                    if (count < 20)
                    {
                        if (flag =true) { //drawing first polygon
                            GetOGLPos(x, y, xPos, yPos);
                        }
                        if (flag=false) //drawing second polygon after Enter is pressed
                            GetOGLPos(x, y, xPos2, yPos2);
                    }
                }
                break;
            }
        }

        void GetOGLPos(int x, int y, float* arrY, float* arrX) //getting the vertices from the user
        {
            GLint viewport[4];
            GLdouble modelview[16];
            GLdouble projection[16];
            GLfloat winX, winY, winZ;
            GLdouble posX, posY, posZ;
            glGetDoublev(GL_MODELVIEW_MATRIX, modelview);
            glGetDoublev(GL_PROJECTION_MATRIX, projection);
            glGetIntegerv(GL_VIEWPORT, viewport);
            winX = (float)x;
            winY = (float)viewport[3] - (float)y;
            glReadPixels(x, int(winY), 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &winZ);
            gluUnProject(winX, winY, winZ, modelview, projection, viewport, &posX, &posY, &posZ);
            arrX[count] = posX;
            arrY[count] = posY;
            count++;
            glPointSize(6.0);
            glBegin(GL_POINTS);
            glVertex2f(posX, posY);
            glEnd();
            glFlush();
        }

        void DeleteVer() { //don't mind this
            glColor3f(1, 1, 1);
            glBegin(GL_POINTS);
            glVertex2f(xPos[count-1], yPos[count-1]);
            glEnd();
            glFlush();
            xPos[count] = NULL;
            yPos[count] = NULL;
            count--;
            glColor3f(0, 0, 0);
        }

        void DrawPolygon(float* arrX, float* arrY)
        {
            int n = 0;
            glColor3f(0, 0, 0);
            glBegin(GL_POLYGON);
            while (n < count)
            {
                glVertex2f(arrX[n], arrY[n]);
                n++;
            }
            count = 0;
            glEnd();
            glFlush();
        }

        void Draw() //main drawing func
        {
            glClear(GL_COLOR_BUFFER_BIT);
            glColor3f(0, 0, 0);
            if (done)
            {
                DrawPolygon(xPos, yPos);
                DrawPolygon(xPos2, yPos2);
            }
            glFlush();
        }

        int main(int argc, char *argv[])
        {
            Initialize(argc, argv);
            return 0;
        }
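
    A tentative diagnosis, offered as an editorial observation: the tests in MouseClick use assignment rather than comparison, so `if (flag =true)` sets flag to true and always takes that branch, while `if (flag=false)` sets flag to false and never runs, which would explain why only the first polygon's array ever fills. A corrected sketch:

        if (flag == true) {  /* drawing first polygon */
            GetOGLPos(x, y, xPos, yPos);
        } else {             /* drawing second polygon after Enter is pressed */
            GetOGLPos(x, y, xPos2, yPos2);
        }

    Two further things worth checking: DrawPolygon zeroes the shared count, so the second DrawPolygon call has nothing to draw (a separate count per polygon would be needed), and GetOGLPos declares its parameters as (arrY, arrX) while the call sites pass (xPos, yPos), which swaps the stored x and y coordinates.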

    Read the article

  • OpenGL ES and texcoord

    - by viraptor
    Hi, I've got some code which I would like to translate into OpenGL ES. I'm not experienced with it, however, so here goes. The original code does a loop like this:

        glBegin(GL_TRIANGLES);
        for (i = 0; i < num_triangles; i++) {
            glNormal(...);
            glTexCoord2f(...); glVertex3fv(...);
            glTexCoord2f(...); glVertex3fv(...);
            glTexCoord2f(...); glVertex3fv(...);
        }
        glEnd();

    So that's OK -- I can change the vertex handling for each triangle in the loop into the standard:

        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(3, GL_SOMETHING, 0, verts);
        glDrawArrays(GL_TRIANGLES, 0, 3);

    But how do I add the texcoord setting into this example?
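
    A minimal sketch of the usual answer: texture coordinates get their own client-side array, enabled and pointed at just like the vertex array. Here, texcoords is an assumed array holding 2 floats per vertex:

        glEnableClientState(GL_VERTEX_ARRAY);
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);
        glVertexPointer(3, GL_FLOAT, 0, verts);
        glTexCoordPointer(2, GL_FLOAT, 0, texcoords);  /* 2 floats per vertex */
        glDrawArrays(GL_TRIANGLES, 0, num_triangles * 3);
        glDisableClientState(GL_TEXTURE_COORD_ARRAY);
        glDisableClientState(GL_VERTEX_ARRAY);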

    Read the article

  • OpenGL multitexture tessellation

    - by user1715296
    I have to tessellate some surface in OpenGL with rectangular textures; let it be a single triangle for simplicity. The textures touch each other along their sides and do not overlap. That is done by setting GL_TEXTURE_WRAP_S and GL_TEXTURE_WRAP_T to GL_CLAMP_TO_BORDER and adjusting the texture coords properly. Everything goes fine while GL_TEXTURE_MIN_FILTER and GL_TEXTURE_MAG_FILTER are set to GL_NEAREST, but when I want to apply GL_LINEAR filtering and/or anisotropic filtering, the following artifact appears: the textures' border pixels' alpha gradually falls to transparent, so that a line of background color is visible between neighbouring textures. How can I avoid this artifact, while preserving linear filtering, without merging the multiple textures into one?
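
    One commonly suggested change, as a hedged sketch: GL_CLAMP_TO_BORDER makes linear filtering blend edge texels with the (transparent) border color, whereas GL_CLAMP_TO_EDGE repeats the outermost texel row instead, which usually removes the seam:

        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);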

    Read the article

  • GLSL vertex shaders with movements vs. vertices off the screen

    - by user827992
    If I have a vertex shader that manages some movement and varies the positions of some vertices in my OpenGL context, is OpenGL smart enough to run this shader on only the vertices visible on the screen? This part of the OpenGL programmable pipeline is not clear to me, because the sources I've read are not really clear about it; they talk about fragments and pixels, and I get that, but what about vertex shaders? If you need a reference, I'm reading from this right now, and this online book has a couple of examples about this.

    Read the article

  • Problems in exporting terrain from Autodesk 3ds Max

    - by Jatin Kumar
    I am trying to make a small Counter-Strike sort of game. For the terrain part, I have exported the terrain in 3ds format from Autodesk 3ds Max and imported it into OpenGL using lib3ds. It's working fine, but with a few problems:

    1. The terrain is mainly made up of some cubical boxes with textures on them, placed on a big flat surface with a boundary wall. In OpenGL I have enabled anti-aliasing, but there is still too much aliasing on the boundaries (visible when rotating the camera).
    2. I have tiled the floor with an image, but in OpenGL it is just the single image stretched over the complete surface (see the sketch below).
    3. I have exported an animated model (skeleton + mesh + material + animation) from 3ds Max and used the cal3d library for reading it. The model has a gun, which is not appearing in OpenGL, and it too has too much of an aliasing problem.

    I have googled around but couldn't find any relevant solutions. Thanks in advance.
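
    On the stretched floor (point 2), a hedged sketch of the usual fix: set the wrap mode to GL_REPEAT and scale the texture coordinates past 1.0 so the image tiles. The texture id, floor extents, and repeat counts here are all assumptions:

        glBindTexture(GL_TEXTURE_2D, floorTexture);  /* assumed texture id */
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
        glBegin(GL_QUADS);
        glTexCoord2f(0.0f,  0.0f);  glVertex3f(-100, 0, -100);
        glTexCoord2f(16.0f, 0.0f);  glVertex3f( 100, 0, -100);  /* 16 repeats across */
        glTexCoord2f(16.0f, 16.0f); glVertex3f( 100, 0,  100);
        glTexCoord2f(0.0f,  16.0f); glVertex3f(-100, 0,  100);
        glEnd();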

    Read the article

  • Complete Guide/Tutorials on LWJGL?

    - by user43353
    Don't get me wrong, I finished the tutorials on http://lwjgl.org/wiki/index.php?title=Main_Page. I finished the Basics section and the OpenGL 3.2 and newer section, and I looked at the Example Code section. They were great tutorials, and I have looked at the external tutorials as well. I don't know where to go from here, and OpenGL is not my strong point. Someone suggested Learning Modern 3D Graphics Programming, but I didn't learn much. I looked at the port to LWJGL, but the book was in C and I couldn't really understand what the OpenGL meant. I am trying to learn 2D gaming, not 3D (maybe later). Are there any tutorials that aren't C/C++ heavy and teach you 2D OpenGL?

    Read the article

  • Why won't my vertex buffer render in GLFW3?

    - by sm81095
    I have started to try to learn OpenGL, and I decided to use GLFW to assist in window creation. The problem is, since GLFW3 is so new, there are no tutorials on it or on how to use it with modern OpenGL (3.3, specifically). Using the GLFW3 tutorial found on the website, which uses older OpenGL rendering (glBegin(GL_TRIANGLES), glVertex3f(), and such), I can get a triangle to render to the screen. The problem is, using new OpenGL, I can't get the same triangle to render. I am new to OpenGL, and GLFW3 is new to most people, so I may be completely missing something obvious, but here is my code:

        static const GLuint g_vertex_buffer_data[] = {
            -1.0f, -1.0f, 0.0f,
             1.0f, -1.0f, 0.0f,
             0.0f,  1.0f, 0.0f
        };

        int main(void)
        {
            GLFWwindow* window;
            if (!glfwInit())
            {
                fprintf(stderr, "Failed to initialize GLFW.");
                return -1;
            }
            glfwWindowHint(GLFW_SAMPLES, 4);
            glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
            glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
            glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
            glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
            window = glfwCreateWindow(800, 600, "Test Window", NULL, NULL);
            if (!window)
            {
                glfwTerminate();
                fprintf(stderr, "Failed to create a GLFW window");
                return -1;
            }
            glfwMakeContextCurrent(window);
            glewExperimental = GL_TRUE;
            GLenum err = glewInit();
            if (err != GLEW_OK)
            {
                glfwTerminate();
                fprintf(stderr, "Failed to initialize GLEW");
                fprintf(stderr, (char*)glewGetErrorString(err));
                return -1;
            }
            GLuint VertexArrayID;
            glGenVertexArrays(1, &VertexArrayID);
            glBindVertexArray(VertexArrayID);
            GLuint programID = LoadShaders("SimpleVertexShader.glsl", "SimpleFragmentShader.glsl");
            GLuint vertexBuffer;
            glGenBuffers(1, &vertexBuffer);
            glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
            glBufferData(GL_ARRAY_BUFFER, sizeof(g_vertex_buffer_data), g_vertex_buffer_data, GL_STATIC_DRAW);
            while (!glfwWindowShouldClose(window))
            {
                glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
                glUseProgram(programID);
                glEnableVertexAttribArray(0);
                glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
                glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
                glDrawArrays(GL_TRIANGLES, 0, 3);
                glDisableVertexAttribArray(0);
                glfwSwapBuffers(window);
                glfwPollEvents();
            }
            glDeleteBuffers(1, &vertexBuffer);
            glDeleteProgram(programID);
            glfwDestroyWindow(window);
            glfwTerminate();
            exit(EXIT_SUCCESS);
        }

    I know it is not my shaders; they are super simple, and I've checked them against GLFW 2.7, so I know that they work. I'm assuming that I've missed something crucial to using the OpenGL context with GLFW3, so any help locating the problem would be greatly appreciated.
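
    One hedged debugging suggestion: shaders that work in GLFW 2.7's default (compatibility) context can still fail to link in a 3.3 core context, so it may be worth dumping the program's link log rather than assuming they carry over; a minimal check (checkProgram is an illustrative helper, called right after LoadShaders):

        /* illustrative helper: print the link log of a program object */
        void checkProgram(GLuint program)
        {
            GLint status = GL_FALSE;
            glGetProgramiv(program, GL_LINK_STATUS, &status);
            if (status != GL_TRUE) {
                char log[1024];
                glGetProgramInfoLog(program, sizeof(log), NULL, log);
                fprintf(stderr, "link failed: %s\n", log);
            }
        }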

    Read the article

  • Is there an alternative to SDL 1.3 for a C++ game that should run on iOS and Android?

    - by futlib
    I've used SDL for many desktop games, always as the cross-platform glue for:

    - Creating a window
    - Processing input
    - Rendering images
    - Rendering fonts
    - Playing sounds/music

    It has never disappointed me at those tasks. But when it comes to graphics, I prefer to work with the OpenGL API directly, even though all of our games are 2D. In the project I'm currently working on, I've made sure to only use the API subset supported by both OpenGL 1.3 and OpenGL 1.0, so making the thing run on Android should be easy, I thought. Turns out there is no official Android or iOS port of SDL yet. However, there's one in SDL 1.3, which is still in development. SDL 1.3 doesn't seem very appealing to me for three reasons:

    1. It's been in development for at least 4 years, and I have no idea when it will be done, not to mention stable.
    2. It's not ported to as many platforms as SDL 1.2.
    3. From what I've seen, it uses OpenGL for drawing, so I suppose the community will move away from directly using OpenGL.

    So I'm wondering if I should use a different library for our current project - it doesn't matter much if I need to port my existing code from SDL 1.2 to SDL 1.3 or to some other library. We're planning to release on Windows, Mac OS X, Linux, iOS and Android, so good support for these platforms is essential. Is there anything stable that does what I want?

    Read the article

  • What are the factors that determine the default frequency of a shader call?

    - by user827992
    After playing for some days with various vertex and fragment shaders, it seems clear to me that these programs are called by the GPU at each and every rendering cycle. The problem is that I can't really quantify this frequency, and I can't tell if it is based on some default values or not, because I don't have a big collection of hardware right now to do extensive tests. For all I know, the answer could be really trivial, like "it's the same as the refresh rate of your monitor", but I would like some good answers to be clear on this. For instance, it looks really odd to me that all the techniques used to control the FPS that I have seen until now use a call to the GLUT function glutGet(GLUT_ELAPSED_TIME) to retrieve a value in ms for when the rendering started, but I have to rely on the CPU to do the math. Why can't I set an FPS value in OpenGL, if OpenGL clearly has a counter and a timer/clock? P.S. I'm referring to OpenGL 3.0+.
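
    For reference, a minimal sketch of the CPU-side frame cap the poster describes, assuming a GLUT idle callback; OpenGL itself exposes timing queries but no FPS setting, so the pacing has to live on the CPU side (or come from the swap interval / vsync):

        /* crude frame limiter in the idle callback (illustrative) */
        static int lastFrame = 0;
        const int frameMs = 1000 / 60;       /* target 60 FPS */
        int now = glutGet(GLUT_ELAPSED_TIME);
        if (now - lastFrame >= frameMs) {
            lastFrame = now;
            glutPostRedisplay();             /* render a new frame */
        }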

    Read the article

  • Use Android NDK for portability with iOS?

    - by J-F L-R
    I am currently planning to implement a little painting app using OpenGL ES 1.1, though I believe this question applies to any OpenGL ES project. I am starting development on Android, and I would like to know if you would recommend writing the drawing logic (using OpenGL) in C++ with the NDK so it will be easier to port to iOS, or using the Java API and being locked to Android. The reason I am asking is that I have seen mixed opinions on the Web about using the NDK (some people say it is an added level of complexity). From what I have already seen, I believe that I should go with the Java API since I am starting on Android and then, if I decide to go to iOS, rewrite the OpenGL logic in Objective-C or C++. This should be pretty straightforward since the calls appear to be the same in both languages. What do you think? Am I right?

    Read the article

  • Flash-like animation editing and container format for an OpenGL environment?

    - by tbarbe
    Are there ANY tools that let an animator / designer create scripted animations that can export to an OpenGL-compatible format -- similar to the timeline editing in Flash or After Effects? Does OpenGL ES have some kind of animation playback or container format? (Is there something similar to .swf for OpenGL?) I'm looking for something that lets a designer / animator do his work with a timeline in a traditional animation environment, while still having integration with OpenGL.

    Read the article

  • How to draw an Arc in OpenGL

    - by rpgFANATIC
    While making a little Pong game in C++ OpenGL, I decided it'd be fun to create arcs (semi-circles) when stuff bounces. I decided to skip Bezier curves for the moment and just go with straight algebra, but I didn't get far. My algebra follows a simple quadratic function (y = +-sqrt(mx + c)). This little excerpt is just an example I've yet to fully parameterize; I just wanted to see how it would look. When I draw this, however, it gives me a straight vertical line where the line's tangent approaches -1.0 / 1.0. Is this a limitation of the GL_LINE_STRIP style, or is there an easier way to draw semi-circles / arcs? Or did I just completely miss something obvious?

        void Ball::drawBounce()
        {
            float piecesToDraw = 100.0f;
            float arcWidth = 10.0f;
            float arcAngle = 4.0f;
            glBegin(GL_LINE_STRIP);
            for (float i = 0.0f; i < piecesToDraw; i += 1.0f) // Positive half
            {
                float currentX = (i / piecesToDraw) * arcWidth;
                glVertex2f(currentX, sqrtf((-currentX * arcAngle) + arcWidth));
            }
            for (float j = piecesToDraw; j > 0.0f; j -= 1.0f) // Negative half (go backwards in X direction now)
            {
                float currentX = (j / piecesToDraw) * arcWidth;
                glVertex2f(currentX, -sqrtf((-currentX * arcAngle) + arcWidth));
            }
            glEnd();
        }

    Thanks in advance.
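
    A hedged alternative sketch: parameterizing by angle instead of x spaces the samples evenly along the curve, so steep tangents are no longer a problem. The center, radius, and angle range here are assumed parameters:

        #include <math.h>

        /* draw an arc from startAngle to endAngle (radians) around (cx, cy) */
        void drawArc(float cx, float cy, float r, float startAngle, float endAngle)
        {
            const int segments = 100;
            glBegin(GL_LINE_STRIP);
            for (int i = 0; i <= segments; ++i) {
                float a = startAngle + (endAngle - startAngle) * i / segments;
                glVertex2f(cx + r * cosf(a), cy + r * sinf(a));
            }
            glEnd();
        }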

    Read the article

  • Should I use OpenGL for chess with animations?

    - by fhucho
    At the moment I am experimenting with SurfaceView for my chess game with animations, and I am getting only about 8 FPS in the emulator. I draw a chess board and 32 chess pieces and rotate everything (to see how smooth it is), and I am using antialiasing. On the Droid I'm getting about 20 FPS, so it's not very smooth. Is it possible to implement a game with very scarce and simple animations without having to use OpenGL? This is what I do every frame:

        // scale and rotate
        matrix.setScale(scale, scale);
        rotation += 3;
        matrix.postRotate(rotation, 152, 152);
        canvas = surfaceHolder.lockCanvas();
        // note: this allocates a new PaintFlagsDrawFilter on every frame
        canvas.setDrawFilter(new PaintFlagsDrawFilter(0, Paint.FILTER_BITMAP_FLAG));
        canvas.setMatrix(matrix);
        canvas.drawARGB(255, 255, 255, 255); // fill the canvas with white
        for (int i = 0; i < sprites.size(); i++) {
            sprites.get(i).draw(canvas); // draws chessboard and chess pieces
        }

    Read the article

  • SDL_image/C++ OpenGL Program: IMG_Load() produces fuzzy images

    - by Kami
    I'm trying to load an image file and use it as a texture for a cube, using SDL_image to do that. I used this image because I've found it in various file formats (tga, tif, jpg, png, bmp). The code:

        SDL_Surface *texture;
        //load an image to an SDL surface (i.e. a buffer)
        texture = IMG_Load("/Users/Foo/Code/xcode/test/lena.bmp");
        if (texture == NULL) {
            printf("bad image\n");
            exit(1);
        }
        //create an OpenGL texture object
        glGenTextures(1, &textureObjOpenGLlogo);
        //select the texture object you need
        glBindTexture(GL_TEXTURE_2D, textureObjOpenGLlogo);
        //define the parameters of that texture object
        //how the texture should wrap in s direction
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
        //how the texture should wrap in t direction
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
        //how the texture lookup should be interpolated when the face is smaller than the texture
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        //how the texture lookup should be interpolated when the face is bigger than the texture
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        //send the texture image to the graphics card
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texture->w, texture->h, 0,
                     GL_RGB, GL_UNSIGNED_BYTE, texture->pixels);
        //clean the SDL surface
        SDL_FreeSurface(texture);

    The code compiles without errors or warnings! I've tried all the file formats, but this always produces that ugly result. I'm using SDL_image 1.2.9 and SDL 1.2.14 with Xcode 3.2 under 10.6.2. Does anyone know how to fix this?
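
    A few hedged suspects, given that the loader never inspects the surface it got back: BMPs typically come in as BGR with padded rows, while this upload assumes tightly packed RGB. A sketch of a more defensive upload that derives the format and pitch from the surface itself:

        /* match GL's row layout to the surface's actual pitch */
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
        glPixelStorei(GL_UNPACK_ROW_LENGTH, texture->pitch / texture->format->BytesPerPixel);
        /* pick the source format from the surface instead of assuming GL_RGB */
        GLenum format = (texture->format->BytesPerPixel == 4)
                      ? (texture->format->Rmask == 0x000000ff ? GL_RGBA : GL_BGRA)
                      : (texture->format->Rmask == 0x000000ff ? GL_RGB  : GL_BGR);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texture->w, texture->h, 0,
                     format, GL_UNSIGNED_BYTE, texture->pixels);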

    Read the article

  • OpenGL triangle instead of square

    - by Dave
    I'm trying to create a spinning square inside of Xcode using OpenGL, but instead, for some reason, I have a spinning triangle. I'm doing this inside of SIO2, but I don't think that is the problem. Here is the triangle: http://img220.imageshack.us/img220/7051/snapzproxscreensnapz001.png Here is my code:

        void templateRender( void )
        {
            const GLfloat squareVertices[] = {
                 100.0f, -100.0f,
                 100.0f, -100.0f,
                -100.0f,  100.0f,
                 100.0f,  100.0f,
            };
            const unsigned char squareColors[] = {
                255, 255,   0, 255,
                  0, 255, 255, 255,
                  0,   0,   0,   0,
                255,   0, 255, 255,
            };
            glMatrixMode( GL_MODELVIEW );
            glLoadIdentity();
            glClear( GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT );
            // Your rendering code here...
            sio2WindowEnter2D( sio2->_SIO2window, 0.0f, 1.0f );
            {
                glVertexPointer( 2, GL_FLOAT, 0, squareVertices );
                glEnableClientState( GL_VERTEX_ARRAY );
                //set up the color array
                glColorPointer( 4, GL_UNSIGNED_BYTE, 0, squareColors );
                glEnableClientState( GL_COLOR_ARRAY );
                glTranslatef( sio2->_SIO2window->scl->x * 0.5f,
                              sio2->_SIO2window->scl->y * 0.5f, 0.0f );
                static float rotz = 0.0f;
                glRotatef( rotz, 0.0f, 0.0f, 1.0f );
                rotz += 90.0f * sio2->_SIO2window->d_time;
                glDrawArrays( GL_TRIANGLE_STRIP, 0, 4 );
            }
            sio2WindowLeave2D();
        }
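
    A tentative diagnosis rather than a certainty: read as (x, y) pairs, the first two entries of squareVertices are the same point (100, -100), so the strip only contains three distinct corners, which is exactly a triangle. A corrected strip order for the square might look like:

        const GLfloat squareVertices[] = {
            -100.0f, -100.0f,   // bottom-left
             100.0f, -100.0f,   // bottom-right
            -100.0f,  100.0f,   // top-left
             100.0f,  100.0f,   // top-right
        };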

    Read the article

  • OpenGL bitmap text fails after drawing polygon

    - by kaykun
    I'm using Win32 and OpenGL to draw text onto a window, using the bitmap font method with wglUseFontBitmaps. Here is my main rendering function:

        glClear(GL_COLOR_BUFFER_BIT);
        glPushMatrix();
        glColor3f(1.0f, 0.0f, 1.0f);
        glBegin(GL_QUADS);
        glVertex2f(0.0f, 0.0f);
        glVertex2f(128.0f, 0.0f);
        glVertex2f(128.0f, 128.0f);
        glVertex2f(0.0f, 128.0f);
        glEnd();
        glPopMatrix();
        glPushMatrix();
        glColor3f(1.0f, 1.0f, 1.0f);
        glRasterPos2i(200, 200);
        glListBase(fontList);
        glCallLists(5, GL_UNSIGNED_BYTE, "Test.");
        glPopMatrix();
        SwapBuffers(hDC);

    As you can see, it's very simple, and the only thing that it's supposed to do is draw a quadrilateral and the text "Test.". But the problem is that drawing a polygon seems to mess up any text operations I try to do after it. If I place the text-drawing functions before the polygon, both the text and the polygon draw fine. Is there something I'm missing here?

    Edit: This problem only happens when the window is run in fullscreen, via ChangeDisplaySettings. Any reason why this would be?

    Read the article

< Previous Page | 23 24 25 26 27 28 29 30 31 32 33 34  | Next Page >