Search Results

Search found 3875 results on 155 pages for 'opengl es lighting'.

Page 35/155 | < Previous Page | 31 32 33 34 35 36 37 38 39 40 41 42  | Next Page >

  • Problems with OpenGL in Eclipse

    - by lego69
    I'm working on Windows XP I have portable version of Eclipse Galileo, but I didn't find there glut so I decided to add it using this link I made all steps and and now I'm trying to compile this code #include "GL/glut.h" #include "GL/gl.h" #include "GL/glu.h" /////////////////////////////////////////////////////////// // Called to draw scene void RenderScene(void) { // Clear the window with current clearing color glClear(GL_COLOR_BUFFER_BIT); // Flush drawing commands glFlush(); } /////////////////////////////////////////////////////////// // Setup the rendering state void SetupRC(void) { glClearColor(0.0f, 0.0f, 1.0f, 1.0f); } /////////////////////////////////////////////////////////// // Main program entry point void main(int argc, char* argv[]) { glutInit(&argc, argv); glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB); glutInitWindowSize(800,600); glutCreateWindow("Simple"); glutDisplayFunc(RenderScene); SetupRC(); glutMainLoop(); } and I have this errors Simple.o: In function `RenderScene': C:/Documents and Settings/Administrator/Desktop/workspace/open/Debug/../Simple.c:16: undefined reference to `_imp__glClear' C:/Documents and Settings/Administrator/Desktop/workspace/open/Debug/../Simple.c:20: undefined reference to `_imp__glFlush' Simple.o: In function `SetupRC': C:/Documents and Settings/Administrator/Desktop/workspace/open/Debug/../Simple.c:27: undefined reference to `_imp__glClearColor' Simple.o: In function `main': C:/Documents and Settings/Administrator/Desktop/workspace/open/Debug/../Simple.c:34: undefined reference to `glutInit' C:/Documents and Settings/Administrator/Desktop/workspace/open/Debug/../Simple.c:35: undefined reference to `glutInitDisplayMode' C:/Documents and Settings/Administrator/Desktop/workspace/open/Debug/../Simple.c:36: undefined reference to `glutInitWindowSize' C:/Documents and Settings/Administrator/Desktop/workspace/open/Debug/../Simple.c:37: undefined reference to `glutCreateWindow' C:/Documents and Settings/Administrator/Desktop/workspace/open/Debug/../Simple.c:38: undefined reference to `glutDisplayFunc' C:/Documents and Settings/Administrator/Desktop/workspace/open/Debug/../Simple.c:42: undefined reference to `glutMainLoop' collect2: ld returned 1 exit status please can somebody help me, thanks in advance
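
    A note for anyone hitting the same errors: undefined references to _imp__glClear and the glut* functions are linker errors, not header problems, and usually mean the OpenGL/GLUT import libraries were never added to the link step. A minimal MinGW command-line sketch (library names assumed for a typical glut32 or freeglut install; library order matters, with the libraries listed after the source file). In Eclipse CDT the same names go under Project Properties > C/C++ Build > Settings > MinGW Linker > Libraries, without the -l prefix:

        g++ Simple.c -o Simple.exe -lglut32 -lopengl32 -lglu32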

    Read the article

  • Using the FreeType lib to create text bitmaps to draw in OpenGL 3.x

    - by Andy
    At the moment I not too sure where my problem is. I can draw loaded images as textures no problem, however when I try to generate a bitmap with a char on it I just get a black box. I am confident that the problem is when I generate and upload the texture. Here is the method for that; the top section of the if statement just draws an texture of a image loaded from file (res/texture.jpg) and that draws perfectly. And the else part of the if statement will try to generate and upload a texture with the char (variable char enter) on. Source Code, I will add shaders and more of the C++ if needed but they work fine for the image. void uploadTexture() { if(enter=='/'){ // Draw the image. GLenum imageFormat; glimg::SingleImage image = glimg::loaders::stb::LoadFromFile("res/texture.jpg")->GetImage(0,0,0); glimg::OpenGLPixelTransferParams params = glimg::GetUploadFormatType(image.GetFormat(), 0); imageFormat = glimg::GetInternalFormat(image.GetFormat(),0); glGenTextures(1,&textureBufferObject); glBindTexture(GL_TEXTURE_2D, textureBufferObject); glimg::Dimensions dimensions = image.GetDimensions(); cout << "Texture dimensions w "<< dimensions.width << endl; glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, dimensions.width, dimensions.height, 0, params.format, params.type, image.GetImageData()); }else { // Draw the char useing the FreeType Lib FT_Init_FreeType(&ft); FT_New_Face(ft, "arial.ttf", 0, &face); FT_Set_Pixel_Sizes(face, 0, 48); FT_GlyphSlot g = face->glyph; glGenTextures(1,&textureBufferObject); glBindTexture(GL_TEXTURE_2D, textureBufferObject); glPixelStorei(GL_UNPACK_ALIGNMENT, 1); FT_Load_Char(face, enter, FT_LOAD_RENDER); FT_Bitmap theBitmap = g->bitmap; int BitmapWidth = g->bitmap.width; int BitmapHeight = g->bitmap.rows; cout << "draw char - " << enter << endl; cout << "g->bitmap.width - " << g->bitmap.width << endl; cout << "g->bitmap.rows - " << g->bitmap.rows << endl; int TextureWidth =roundUpToNextPowerOfTwo(g->bitmap.width); int TextureHeight =roundUpToNextPowerOfTwo(g->bitmap.rows); cout << "texture width x height - " << TextureWidth <<" x " << TextureHeight << endl; GLubyte* TextureBuffer = new GLubyte[ TextureWidth * TextureWidth ]; for(int j = 0; j < TextureHeight; ++j) { for(int i = 0; i < TextureWidth; ++i) { TextureBuffer[ j*TextureWidth + i ] = (j >= BitmapHeight || i >= BitmapWidth ? 0 : g->bitmap.buffer[ j*BitmapWidth + i ]); } } glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, TextureWidth, TextureHeight, 0, GL_RGB8, GL_UNSIGNED_BYTE, TextureBuffer); } }
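
    One observation, hedged because only this function is shown: glTexImage2D does not accept GL_RGB8 as the format argument (it is an internal format only), so the final upload raises GL_INVALID_ENUM and leaves the texture without data, which alone would produce a black box; the glyph buffer is also allocated as TextureWidth * TextureWidth rather than width times height. A minimal sketch of the upload for FreeType's one-byte-per-pixel glyph bitmap, assuming a GL 3.x context where GL_R8/GL_RED is available (the shader then reads the red channel as coverage):

        // allocate with the height, not the width twice
        GLubyte* TextureBuffer = new GLubyte[TextureWidth * TextureHeight];
        // ... fill TextureBuffer exactly as in the loops above ...
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);               // rows are tightly packed bytes
        glTexImage2D(GL_TEXTURE_2D, 0, GL_R8,                // one-channel internal format
                     TextureWidth, TextureHeight, 0,
                     GL_RED, GL_UNSIGNED_BYTE,               // matches the 8-bit grayscale data
                     TextureBuffer);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); // no mipmaps were uploaded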

    Read the article

  • OpenGL Video RAM Limits

    - by Tamir
    I have been trying to make a cross-platform 2D online game, and my maps are made of tiles. My tileset, from which I render the tiles, is quite huge. I would like to know how I can disable hardware rendering, or at least make it more capable. Hence, I would like to know the basic limits of video RAM; as far as I know, Direct3D has a texture size limit (by that I don't mean the power-of-two texture sizes).
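
    For reference, the corresponding limits can be queried from OpenGL itself; GL_MAX_TEXTURE_SIZE gives the driver's per-dimension cap, and a proxy texture reports whether one specific size and format combination would actually be accepted. A short sketch:

        GLint maxSize = 0;
        glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxSize);        // e.g. 4096 or 8192 on typical hardware

        // ask whether an 8192x8192 RGBA8 texture would fit, without allocating it
        glTexImage2D(GL_PROXY_TEXTURE_2D, 0, GL_RGBA8, 8192, 8192, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL);
        GLint gotWidth = 0;
        glGetTexLevelParameteriv(GL_PROXY_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &gotWidth);
        bool fits = (gotWidth != 0);                          // 0 means the request was too large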

    Read the article

  • OpenGL/GLSL checking if shader compiled fine on intel cards

    - by clamp
    hello, i am using this code to check if my glsl shader compiled fine. glGetObjectParameterivARB(obj, GL_OBJECT_INFO_LOG_LENGTH_ARB, &infologLength); if (infologLength > 1) { int charsWritten = 0; char * const infoLog = new char[infologLength]; glGetInfoLogARB(obj, infologLength, &charsWritten, infoLog); tError(infoLog, false); delete infoLog; } } the length of the returned string is empty on nvidia and ATI cards, but on intel cards this one returns the string "no errors." now what is the best way to find out, if there are really no errors? should i just check for this string? or is there a convention what this function glGetInfoLogARB should return?
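
    A more direct check, sketched here on the assumption that the surrounding code stays with the ARB entry points: query the compile status itself rather than parsing the log text (whose wording is driver-specific, hence Intel's "no errors."), and treat the log purely as diagnostic output:

        GLint compiled = GL_FALSE;
        glGetObjectParameterivARB(obj, GL_OBJECT_COMPILE_STATUS_ARB, &compiled);
        if (compiled != GL_TRUE) {                            // only now is the log interesting
            GLint len = 0;
            glGetObjectParameterivARB(obj, GL_OBJECT_INFO_LOG_LENGTH_ARB, &len);
            std::vector<char> log(len > 1 ? len : 1);         // needs <vector>
            glGetInfoLogARB(obj, (GLsizei)log.size(), NULL, &log[0]);
            tError(&log[0], false);                           // report only real failures
        }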

    Read the article

  • Background colour in OpenGL

    - by lego69
    I want to change background color of the window after pressing the button, but my program doesn't work, can somebody tell me why, thanks in advance int main(int argc, char* argv[]) { glutInit(&argc, argv); glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB); glutInitWindowSize(800, 600); glutInitWindowPosition(300,50); glutCreateWindow("GLRect"); glClearColor(1.0f, 0.0f, 0.0f, 1.0f); <--- glutDisplayFunc(RenderScene); glutReshapeFunc(ChangeSize); glutMainLoop(); system("pause"); glClearColor(0.0f, 1.0f, 0.0f, 1.0f); <--- return 0; }
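
    A sketch of the usual pattern, assuming RenderScene clears with the current clear colour: glutMainLoop never returns, so the second glClearColor (and the system("pause")) is never reached. State changes driven by input belong in a GLUT callback, followed by glutPostRedisplay:

        // register with glutKeyboardFunc(OnKey); before glutMainLoop()
        void OnKey(unsigned char key, int x, int y)
        {
            if (key == ' ') {                           // e.g. the spacebar changes the background
                glClearColor(0.0f, 1.0f, 0.0f, 1.0f);   // new clear colour
                glutPostRedisplay();                    // ask GLUT to call RenderScene again
            }
        }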

    Read the article

  • Help understanding some OpenGL stuff

    - by shinjuo
    I am working with some code to create a triangle that moves with arrow keys. I want to create a second object that moves independently. This is where I am having trouble, I have created the second actor, but cannot get it to move. There is too much code to post it all so I will just post a little and see if anyone can help at all. ogl_test.cpp #include "platform.h" #include "srt/scheduler.h" #include "model.h" #include "controller.h" #include "model_module.h" #include "graphics_module.h" class blob : public actor { public: blob(float x, float y) : actor(math::vector2f(x, y)) { } void render() { transform(); glBegin(GL_TRIANGLES); glVertex3f(0.25f, 0.0f, -5.0f); glVertex3f(-.5f, 0.25f, -5.0f); glVertex3f(-.5f, -0.25f, -5.0f); glEnd(); end_transform(); } void update(controller& c, float dt) { if (c.left_key) { rho += pi / 9.0f * dt; c.left_key = false; } if (c.right_key) { rho -= pi / 9.0f * dt; c.right_key = false; } if (c.up_key) { v += .1f * dt; c.up_key = false; } if (c.down_key) { v -= .1f * dt; if (v < 0.0) { v = 0.0; } c.down_key = false; } actor::update(c, dt); } }; class enemyOne : public actor { public: enemyOne(float x, float y) : actor(math::vector2f(x, y)) { } void render() { transform(); glBegin(GL_TRIANGLES); glVertex3f(0.25f, 0.0f, -5.0f); glVertex3f(-.5f, 0.25f, -5.0f); glVertex3f(-.5f, -0.25f, -5.0f); glEnd(); end_transform(); } void update(controller& c, float dt) { if (c.left_key) { rho += pi / 9.0f * dt; c.left_key = false; } if (c.right_key) { rho -= pi / 9.0f * dt; c.right_key = false; } if (c.up_key) { v += .1f * dt; c.up_key = false; } if (c.down_key) { v -= .1f * dt; if (v < 0.0) { v = 0.0; } c.down_key = false; } actor::update(c, dt); } }; int APIENTRY WinMain( HINSTANCE hInstance, HINSTANCE hPrevInstance, char* lpCmdLine, int nCmdShow ) { model m; controller control(m); srt::scheduler scheduler(33); srt::frame* model_frame = new srt::frame(scheduler.timer(), 0, 1, 2); srt::frame* render_frame = new srt::frame(scheduler.timer(), 1, 1, 2); model_frame->add(new model_module(m, control)); render_frame->add(new graphics_module(m)); scheduler.add(model_frame); scheduler.add(render_frame); blob* prime = new blob(0.0f, 0.0f); m.add(prime); m.set_prime(prime); enemyOne* primeTwo = new enemyOne(2.0f, 0.0f); m.add(primeTwo); m.set_prime(primeTwo); scheduler.start(); control.start(); return 0; } model.h #include <vector> #include "vec.h" const double pi = 3.14159265358979323; class controller; using math::vector2f; class actor { public: vector2f P; float theta; float v; float rho; actor(const vector2f& init_location) : P(init_location), rho(0.0), v(0.0), theta(0.0) { } virtual void render() = 0; virtual void update(controller&, float dt) { float v1 = v; float theta1 = theta + rho * dt; vector2f P1 = P + v1 * vector2f(cos(theta1), sin(theta1)); if (P1.x < -4.5f || P1.x > 4.5f) { P1.x = -P1.x; } if (P1.y < -4.5f || P1.y > 4.5f) { P1.y = -P1.y; } v = v1; theta = theta1; P = P1; } protected: void transform() { glPushMatrix(); glTranslatef(P.x, P.y, 0.0f); glRotatef(theta * 180.0f / pi, 0.0f, 0.0f, 1.0f); //Rotate about the z-axis } void end_transform() { glPopMatrix(); } }; class model { private: typedef std::vector<actor*> actor_vector; actor_vector actors; public: actor* _prime; model() { } void add(actor* a) { actors.push_back(a); } void set_prime(actor* a) { _prime = a; } void update(controller& control, float dt) { for (actor_vector::iterator i = actors.begin(); i != actors.end(); ++i) { (*i)->update(control, dt); } } void render() { for (actor_vector::iterator i = 
actors.begin(); i != actors.end(); ++i) { (*i)->render(); } } };
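
    A guess at why the second actor never behaves independently: m.set_prime(primeTwo) simply replaces the previous prime pointer, and both update() methods read and then clear the same controller flags, so whichever actor is updated first swallows the input. One sketch, using the names from the posted code, is to keep blob as the only prime and let enemyOne drive itself instead of reading the controller:

        // replace enemyOne::update with self-driven motion; blob keeps the keyboard
        void update(controller& c, float dt)
        {
            rho = pi / 18.0f;          // slow constant turn, as an example
            v   = 0.05f;               // constant forward speed
            actor::update(c, dt);      // reuse the shared integration step
        }
        // ...and in WinMain keep a single prime actor:
        //     m.set_prime(prime);     // do not also call m.set_prime(primeTwo)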

    Read the article

  • Problem displaying Vertex Buffer Object (OpenGL and Obj-C)

    - by seaworthy
    Hey, I am having a problem displaying or loading a buffer with an array of vertices. I know that array works fine because I am able to render it using a loop and a glVertex command. I can't figure out what's wrong. Your insight is highly appreciated. GLuint vboId; glGenBuffers( 1, &vboId ); glBindBuffer( GL_ARRAY_BUFFER, vboId); glBufferData( GL_ARRAY_BUFFER, count*sizeof( GLfloat ),array,GL_STATIC_DRAW_ARB ); glBindBuffer( GL_ARRAY_BUFFER, 0 ); printf("%d\n",count); glEnableClientState( GL_VERTEX_ARRAY ); glBindBuffer( GL_ARRAY_BUFFER, vboId ); glVertexPointer( 3, GL_FLOAT, 0, 0 ); glDisableClientState( GL_VERTEX_ARRAY ); printf("vboId: [%hd]",vboId); glDeleteBuffers(1, &vboId); Help?
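
    One reading of the snippet, hedged because the rest of the frame code is not shown: between glVertexPointer and glDisableClientState there is no draw call, and the buffer is deleted immediately afterwards, so nothing can appear. A minimal sketch of the missing middle, with count assumed to be the number of floats (count / 3 vertices) and GL_TRIANGLES standing in for whatever primitive the glVertex loop used:

        glEnableClientState(GL_VERTEX_ARRAY);
        glBindBuffer(GL_ARRAY_BUFFER, vboId);
        glVertexPointer(3, GL_FLOAT, 0, 0);            // offset 0 into the bound VBO
        glDrawArrays(GL_TRIANGLES, 0, count / 3);      // actually submit the vertices
        glDisableClientState(GL_VERTEX_ARRAY);
        glBindBuffer(GL_ARRAY_BUFFER, 0);
        // keep vboId alive between frames; call glDeleteBuffers(1, &vboId) only at shutdown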

    Read the article

  • OpenGL Vertex Buffer Object code giving bad output.

    - by Matthew Mitchell
    Hello. My Vertex Buffer Object code is supposed to render textures nicely but instead the textures are being rendered oddly with some triangle shapes. What happens - http://godofgod.co.uk/my_files/wrong.png What is supposed to happen - http://godofgod.co.uk/my_files/right.png This function creates the VBO and sets the vertex and texture coordinate data: extern "C" GLuint create_box_vbo(GLdouble size[2]){ GLuint vbo; glGenBuffers(1,&vbo); glBindBuffer(GL_ARRAY_BUFFER, vbo); GLsizeiptr data_size = 8*sizeof(GLdouble); GLdouble vertices[] = {0,0, 0,size[1], size[0],0, size[0],size[1]}; glBufferData(GL_ARRAY_BUFFER, data_size, vertices, GL_STATIC_DRAW); data_size = 8*sizeof(GLint); GLint textcoords[] = {0,0, 0,1, 1,0, 1,1}; glBufferData(GL_ARRAY_BUFFER, data_size, textcoords, GL_STATIC_DRAW); return vbo; } Here is some relavant code from another function which is supposed to draw the textures with the VBO. glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_S,GL_CLAMP_TO_EDGE); glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_T,GL_CLAMP_TO_EDGE); glMatrixMode(GL_MODELVIEW); glLoadIdentity(); glColor4d(1,1,1,a/255); glBindTexture(GL_TEXTURE_2D, texture); glTranslated(offset[0],offset[1],0); glBindBuffer(GL_ARRAY_BUFFER, vbo); glVertexPointer(2, GL_DOUBLE, 0, 0); glEnableClientState(GL_VERTEX_ARRAY); glTexCoordPointer (2, GL_INT, 0, 0); glEnableClientState(GL_TEXTURE_COORD_ARRAY); glDrawArrays(GL_TRIANGLES, 0, 3); glDrawArrays(GL_TRIANGLES, 1, 3); glDisableClientState(GL_TEXTURE_COORD_ARRAY); glDisableClientState(GL_VERTEX_ARRAY); glBindBuffer(GL_ARRAY_BUFFER, 0); I would have hoped for the code to use the first three coordinates (top-left,bottom-left,top-right) and the last three (bottom-left,top-right,bottom-right) to draw the triangles with the texture data correctly in the most efficient way. I don't see why triangles should make it more efficient but apparently that's the way to go. It, of-course, fails for some reason. I am asking what is broken but also am I going about it in the right way generally? Thank you.
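
    An observation on the creation code: both glBufferData calls target the same bound buffer, so the second call throws away the vertex data and re-sizes the store to hold only the texture coordinates, after which glVertexPointer reads whatever is left. A sketch of one buffer filled at two offsets with glBufferSubData, keeping the sizes from the posted code:

        GLsizeiptr vertSize = 8 * sizeof(GLdouble);
        GLsizeiptr texSize  = 8 * sizeof(GLint);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, vertSize + texSize, NULL, GL_STATIC_DRAW); // allocate once
        glBufferSubData(GL_ARRAY_BUFFER, 0, vertSize, vertices);                 // vertices first
        glBufferSubData(GL_ARRAY_BUFFER, vertSize, texSize, textcoords);         // texcoords after them
        // ...and when drawing:
        glVertexPointer(2, GL_DOUBLE, 0, (const GLvoid*)0);         // start of the buffer
        glTexCoordPointer(2, GL_INT, 0, (const GLvoid*)vertSize);   // skip past the vertex block
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);                      // the 4 corners are already in strip order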

    Read the article

  • Android - OpenGL cube does not appear on the display

    - by Marc Ortiz
    I'm trying to display a square on my display and i can't. Whats my problem? How can I display it on the screen (center of the screen)? I let my code below! Here's my render class: public class GLRenderEx implements Renderer { private GLCube cube; Context c; GLCube quad; // ( NEW ) // Constructor public GLRenderEx(Context context) { // Set up the data-array buffers for these shapes ( NEW ) quad = new GLCube(); // ( NEW ) } // Call back when the surface is first created or re-created. @Override public void onSurfaceCreated(GL10 gl, EGLConfig config) { // NO CHANGE - SKIP } // Call back after onSurfaceCreated() or whenever the window's size changes. @Override public void onSurfaceChanged(GL10 gl, int width, int height) { // NO CHANGE - SKIP } // Call back to draw the current frame. @Override public void onDrawFrame(GL10 gl) { // Clear color and depth buffers using clear-values set earlier gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT); gl.glLoadIdentity(); // Reset model-view matrix ( NEW ) gl.glTranslatef(-1.5f, 0.0f, -6.0f); // Translate left and into the // screen ( NEW ) // Translate right, relative to the previous translation ( NEW ) gl.glTranslatef(3.0f, 0.0f, 0.0f); quad.draw(gl); // Draw quad ( NEW ) } } And here is my square class: public class GLCube { private FloatBuffer vertexBuffer; // Buffer for vertex-array private float[] vertices = { // Vertices for the square -1.0f, -1.0f, 0.0f, // 0. left-bottom 1.0f, -1.0f, 0.0f, // 1. right-bottom -1.0f, 1.0f, 0.0f, // 2. left-top 1.0f, 1.0f, 0.0f // 3. right-top }; // Constructor - Setup the vertex buffer public GLCube() { // Setup vertex array buffer. Vertices in float. A float has 4 bytes ByteBuffer vbb = ByteBuffer.allocateDirect(vertices.length * 4); vbb.order(ByteOrder.nativeOrder()); // Use native byte order vertexBuffer = vbb.asFloatBuffer(); // Convert from byte to float vertexBuffer.put(vertices); // Copy data into buffer vertexBuffer.position(0); // Rewind } // Render the shape public void draw(GL10 gl) { // Enable vertex-array and define its buffer gl.glEnableClientState(GL10.GL_VERTEX_ARRAY); gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuffer); // Draw the primitives from the vertex-array directly gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, vertices.length / 3); gl.glDisableClientState(GL10.GL_VERTEX_ARRAY); } } Thanks!!

    Read the article

  • OpenGL: small black pixel on top right corner of texture

    - by user308226
    I wrote an uncompressed TGA texture loader and it works nearly perfectly, except for the fact that there's one tiny black patch in the upper right, and it's driving me mad. I can get rid of it by using a texture border, but somehow I think that's not the practical solution. Has anyone encountered this kind of problem before and knows, generally, what goes wrong when something like this happens, or should I post the image-loading function code? Here's a picture; the little black dot is really small. http://img651.imageshack.us/img651/2230/dasdwx.png
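
    One way to narrow this down without posting the whole loader, sketched under the assumption that the loader produces a tightly packed RGB buffer named pixels: decide whether the bad texel is in the uploaded data or an artifact of sampling. Paint the last pixel of the buffer a known colour just before the upload and set unambiguous unpack and wrap state; if the dot stays black, the loader is dropping the final pixel(s) of the TGA (an off-by-one after the 18-byte header is a common culprit):

        // TGA rows run bottom-up by default, so the last stored pixel is the top-right one
        pixels[(width * height - 1) * 3 + 0] = 255;   // force it to red
        pixels[(width * height - 1) * 3 + 1] = 0;
        pixels[(width * height - 1) * 3 + 2] = 0;
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);        // RGB rows are rarely 4-byte aligned
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
                     GL_RGB, GL_UNSIGNED_BYTE, pixels);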

    Read the article

  • textures and vertex arrays with OpenGL?

    - by user146780
    Basically what I'd like to do is make textured N-gons. I also want to use a tessellator (GLU) to make concave and multi-contour objects. I was wondering how the texture comes into play, though. I think the tessellator will return vertices, so I will add these to my array; that's fine. But my vertex array will contain more than one polygon object, so how can I tell it when to bind the texture, like in immediate mode? Right now I feel stuck with one call to bind. How can this be done? Thanks
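
    For what it's worth, a texture bind is not stored in the vertex array at all; it is state that applies to whatever is drawn next. The usual pattern with one big array is to remember each tessellated polygon's starting index and vertex count and issue one draw per polygon (or pack everything into a texture atlas and keep a single bind). A sketch, where PolyRange is a hypothetical per-polygon record filled while the tessellator emits vertices:

        struct PolyRange { GLuint texture; GLint first; GLsizei count; };
        std::vector<PolyRange> polys;                     // one entry per tessellated polygon

        glEnableClientState(GL_VERTEX_ARRAY);
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);
        glVertexPointer(2, GL_FLOAT, 0, vertexData);      // the shared arrays
        glTexCoordPointer(2, GL_FLOAT, 0, texCoordData);
        for (const PolyRange& p : polys) {
            glBindTexture(GL_TEXTURE_2D, p.texture);      // state change between draws
            glDrawArrays(GL_TRIANGLES, p.first, p.count); // this polygon's slice of the array
        }
        glDisableClientState(GL_TEXTURE_COORD_ARRAY);
        glDisableClientState(GL_VERTEX_ARRAY);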

    Read the article

  • iPhone NSTimer OpenGL problem

    - by Toby Wilson
    I've got a problem that only seems to occur on the device, not in the simulator. My app's animation is started and stopped using these methods: NSTimer* animationTimer; -(void)startAnimation { if(animationTimer = nil) animationTimer = [NSTimer scheduledTimerWithTimeInterval:1.0f/60.0f target:self selector:@selector(drawView) userInfo:nil repeats:YES]; } -(void)stopAnimation { [animationTimer invalidate]; animationTimer = nil; } In the simulator this works fine and drawView starts being called at 60fps. On the device (testing on iPod Touch), the scheduleTimerWithTimeInterval method doesn't seem to work. Furthermore, [animationTimer invalidate] causes EXC_BAD_ACCESS. I've spotted an obvious but minor flaw; adding if(animationTimer != nil) to the stopAnimation method will prevent the crash, but doesn't solve the problem of the animation timer not being properly initialised. Edit: The above doesn't prevent a crash. animationTimer != nil yet calling invalidate causes EXC_BAD_ACCESS. Should also add, this problem doesn't occur all the time on the device. Maybe 40% of the time.

    Read the article

  • OpenGL: Disable texture colors?

    - by Newbie
    Is it possible to disable texture colors and use only white as the color? It would still read the texture, so I can't use glDisable(GL_TEXTURE_2D), because I want to render the alpha channel too. All I can think of right now is to make a new texture where all the color data is white, keeping the alpha as it is. I need to do this without shaders, so is this even possible?
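
    Without shaders this can be done in the fixed-function texture environment instead of duplicating the texture: the GL 1.3 texture combiners can take RGB from the current colour (set to white) while still taking alpha from the texture. A sketch, assuming texturing is otherwise set up as usual:

        glColor4f(1.0f, 1.0f, 1.0f, 1.0f);                         // the flat colour you want
        glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
        glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_REPLACE);     // RGB = primary colour only
        glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_RGB, GL_PRIMARY_COLOR);
        glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_ALPHA, GL_REPLACE);   // alpha still comes from the texture
        glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_ALPHA, GL_TEXTURE);
        // restore the default afterwards with
        // glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);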

    Read the article

  • OpenGL: fit a quad to the screen, given the value of Z

    - by hytparadisee
    Short version of the question: I will place a quad. I know the width and height of the screen in window coordinates, and I know the Z coordinate of the quad in 3D. I know the FOVY and I know the aspect. The quad will be placed along the Z-axis, and my camera doesn't move (it is placed at 0, 0, 0). I want to find the width and height of the quad, in 3D coordinates, that will fit exactly onto my screen. Long version of the question: I would like to place a quad along the Z-axis at a specified offset Z, and I would like to find the width and height of the quad that will exactly fill the entire screen. I once found a post on gamedev.net that used a formula similar to dist = Z * tan(FOV / 2). Now I can never find the post! Though it's similar, it is still different, because I remember that the working formula made use of screenWidth and screenHeight, the width and height of the screen in window coordinates. I am not really familiar with concepts like frustum, FOV and aspect, which is why I can't work out the formula on my own. Besides, I am sure I don't need gluUnProject (I tried it, but the results are way off). It's not a matter of GL calls; it's just a math formula that can find the width and height in 3D space that will fill the entire screen, if the Z offset, the width in window coordinates, and the height in window coordinates are known. Thanks all in advance.
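
    The remembered formula, restated: with a symmetric perspective projection, the visible height at distance |Z| in front of the camera is 2 * |Z| * tan(fovy / 2), and the visible width is that height times the aspect ratio (which is where screenWidth / screenHeight enters). As a sketch, with fovyDegrees and aspect being whatever was passed to gluPerspective and z the quad's distance in front of the camera:

        #include <cmath>
        double fovyRadians = fovyDegrees * 3.14159265358979 / 180.0;
        double quadHeight  = 2.0 * z * tan(fovyRadians / 2.0);    // fills the screen vertically
        double quadWidth   = quadHeight * aspect;                 // aspect = screenWidth / (double)screenHeight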

    Read the article

  • Using OpenGL vertex buffers in C++.

    - by Ren
    I've loaded a Wavefront .obj file and drawn it in immediate mode, and it works fine. I'm now trying to draw the same model with a vertex buffer, but I have a question. My model data is organized in the following structures: struct Vert { double x; double y; double z; }; struct Norm { double x; double y; double z; }; struct Texcoord { double u; double v; double w; }; struct Face { unsigned int v[3]; unsigned int n[3]; unsigned int t[3]; }; struct Model { unsigned int vertNumber; unsigned int normNumber; unsigned int texcoordNumber; unsigned int faceNumber; Vert * vertArray; Norm * normArray; Texcoord * texcoordArray; Face * faceArray; }; As it is now, I don't think there is any redundant data, since multiple face structures can point to the same vertex, normal, or texture coordinate. When I make vbo's for the vertex positions, normals, and texture coordinates, and assign data to them with glBufferData, do I have to have make arrays with redundant data so that they will all have the same number of elements in the same order? I'd like to know if there is a simpler way to fill the buffers with the way I already have the model's data organized.
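
    For reference, the usual answer, sketched against the structures above: vertex arrays allow only one index per vertex, so the separately indexed positions, normals and texture coordinates of an OBJ face have to be expanded so that every face corner becomes one combined vertex (redundancy can later be squeezed back out by hashing identical (v, n, t) triples into a shared index buffer). Expansion only, for a loaded Model named model whose indices are assumed to be already rebased to zero:

        std::vector<GLdouble> positions, normals, texcoords;      // one flat vertex per face corner
        positions.reserve(model.faceNumber * 3 * 3);
        for (unsigned int f = 0; f < model.faceNumber; ++f) {
            const Face& face = model.faceArray[f];
            for (int corner = 0; corner < 3; ++corner) {
                const Vert& v     = model.vertArray[face.v[corner]];
                const Norm& n     = model.normArray[face.n[corner]];
                const Texcoord& t = model.texcoordArray[face.t[corner]];
                positions.insert(positions.end(), { v.x, v.y, v.z });
                normals.insert(normals.end(),     { n.x, n.y, n.z });
                texcoords.insert(texcoords.end(), { t.u, t.v });
            }
        }
        // positions, normals and texcoords now share one ordering and count,
        // ready for three glBufferData calls (or one interleaved buffer)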

    Read the article

  • OpenGL Nothing will Display

    - by m00st
    Why can't I get anything to display with this code? #include <iostream> #include "GL/glfw.h" #ifndef MAIN #define MAIN #include "GL/gl.h" #include "GL/glu.h" #endif using namespace std; void display(); int main() { int running = GL_TRUE; glfwInit(); if( !glfwOpenWindow( 640,480, 0,0,0,0,0,0, GLFW_WINDOW ) ) { glfwTerminate(); return 0; } while( running ) { //GL Code here display(); glfwSwapBuffers(); // Check if ESC key was pressed or window was closed running = !glfwGetKey( GLFW_KEY_ESC ) && glfwGetWindowParam( GLFW_OPENED ); } glfwTerminate(); return 0; } void display() { glClearColor(0, 0,0, 0.0f); glClear(GL_COLOR_BUFFER_BIT); glLoadIdentity(); gluLookAt(0, 0, 5, 0.0, 0.0, 0.0, 0, 1, 0); glScalef(1.0f, 1.0f, 1.0f); glTranslatef(0, 0, -2); glBegin(GL_POLYGON); glColor3f(1.0, 0.2, 0.2); glVertex3f(0.25, 0.25, 0.0); glVertex3f(0.75, 0.25, 0.0); glVertex3f(0.75, 0.75, 0.0); glVertex3f(0.25, 0.75, 0.0); glEnd(); glFlush(); }
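
    One diagnosis, offered as a sketch: the code never sets a projection matrix, so the projection stays the identity and the clip volume is only -1..1 on every axis; after gluLookAt puts the camera at z = 5 and the extra translate pushes the quad further away, every vertex ends up around z = -7 in clip space and is culled. A minimal sketch of the missing setup, placed once after glfwOpenWindow succeeds:

        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        gluPerspective(45.0, 640.0 / 480.0, 0.1, 100.0);   // fovy, aspect, near, far
        glMatrixMode(GL_MODELVIEW);                        // leave modelview current for display()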

    Read the article

  • OpenGL + GLEW in Eclipse (for Windows)

    - by echo
    I'm trying to get GLEW to work under Eclipse (MinGW) on Windows. It seems as if it is extremely unusual not to use Visual Studio in this context. The install instructions for GLEW are simply "use the project file in build/vc6/"... The GLEW readme also says: "If you wish to build GLEW from scratch (update the extension data from the net or add your own extension information), you need a Unix environment (including wget, perl, and GNU make). The extension data is regenerated from the top level source directory with: make extensions" Does this mean that in order to get GLEW to work in Eclipse on Windows I have to compile it in a Unix environment? Is there no other way? Sure, it would probably be a learning experience to pull that off (if I were to succeed), but I feel that my time is better spent actually working on my project. And even if I did manage to cross-compile everything, would it work in anything but Visual Studio? Is the whole thing unfeasible, and is the best solution to install Visual Studio? Google hasn't been of much help; I feel like I am the only one who has ever attempted this (is there a good reason for that?).
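
    For anyone in the same spot: the Unix-environment requirement applies only to regenerating the extension data; building or using GLEW itself with MinGW does not need it. One common route, sketched under the assumption that the standard GLEW source download has been unpacked into a directory called glew, is to compile src/glew.c straight into the project with GLEW_STATIC defined (which also avoids the DLL import library):

        gcc -DGLEW_STATIC -Iglew/include -c glew/src/glew.c -o glew.o
        g++ -DGLEW_STATIC -Iglew/include main.cpp glew.o -lopengl32 -lglu32 -o app.exe

    The same -DGLEW_STATIC define and include path can be entered in Eclipse CDT's project settings instead of on the command line.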

    Read the article

  • SetPixelFormat is not creating an alpha channel for OpenGL

    - by i_photon
    I've been able to do this before, and I don't know what changed between two weeks ago and the last windows update, but for some reason SetPixelFormat isn't creating an alpha channel. gDebugger shows that the window's back buffer only has 3 channels. White+0 alpha renders as white. So there is something inherently wrong with what I was doing, or an update broke it. The code below should be paste-able into an empty VS project. #include <Windows.h> #include <dwmapi.h> #include <gl/GL.h> #pragma comment(lib,"opengl32.lib") #pragma comment(lib,"dwmapi.lib") HWND hWnd = 0; HDC hDC = 0; HGLRC hRC = 0; LRESULT CALLBACK WndProc(HWND, UINT, WPARAM, LPARAM); int APIENTRY WinMain( HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int nCmdShow ) { WNDCLASSEX wcex = {0}; wcex.cbSize = sizeof(WNDCLASSEX); wcex.lpfnWndProc = WndProc; wcex.hCursor = LoadCursor(NULL, IDC_ARROW); wcex.lpszClassName = TEXT("why_class"); RegisterClassEx(&wcex); // no errors hWnd = CreateWindowEx( NULL, TEXT("why_class"), TEXT("why_window"), WS_OVERLAPPEDWINDOW, 128,128, 256,256, NULL, NULL, hInstance, NULL ); // no errors PIXELFORMATDESCRIPTOR pfd = {0}; pfd.nSize = sizeof(pfd); pfd.nVersion = 1; pfd.dwFlags = PFD_DRAW_TO_WINDOW| PFD_SUPPORT_OPENGL| PFD_DOUBLEBUFFER| PFD_SUPPORT_COMPOSITION; pfd.cColorBits = 32; pfd.cAlphaBits = 8; // need an alpha channel pfd.cDepthBits = 24; pfd.cStencilBits = 8; hDC = GetDC(hWnd); int i = ChoosePixelFormat(hDC,&pfd); SetPixelFormat(hDC,i,&pfd); // no errors hRC = wglCreateContext(hDC); // no errors wglMakeCurrent(hDC,hRC); // no errors // EDIT: Turn on alpha testing (which actually won't // fix the clear color problem below) glEnable(GL_BLEND); glBlendFunc(GL_SRC_ALPHA,GL_ONE_MINUS_SRC_ALPHA); // EDIT: Regardless of whether or not GL_BLEND is enabled, // a clear color with an alpha of 0 should (or did at one time) // make this window transparent glClearColor( 0,0,0, // if this is (1,1,1), the window renders // solid white regardless of the alpha 0 // changing the alpha here has some effect ); DWM_BLURBEHIND bb = {0}; bb.dwFlags = DWM_BB_ENABLE|DWM_BB_TRANSITIONONMAXIMIZED; bb.fEnable = TRUE; bb.fTransitionOnMaximized = TRUE; DwmEnableBlurBehindWindow(hWnd,&bb); // no errors ShowWindow(hWnd, SW_SHOWNORMAL); UpdateWindow(hWnd); // no errors MSG msg = {0}; while(true){ GetMessage(&msg,NULL,NULL,NULL); if(msg.message == WM_QUIT){ return (int)msg.wParam; } TranslateMessage(&msg); DispatchMessage(&msg); glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT); glBegin(GL_TRIANGLES); // this vertex should be transparent, // as it was when I last built this test // // it renders as white glColor4f(1,1,1,0); glVertex2f(0,0); glColor4f(0,1,1,1); glVertex2f(1,0); glColor4f(1,0,1,1); glVertex2f(0,1); glEnd(); SwapBuffers(hDC); } return (int)msg.wParam; } LRESULT CALLBACK WndProc(HWND hWnd, UINT message, WPARAM wParam, LPARAM lParam) { switch (message) { case WM_DESTROY: { PostQuitMessage(0); }return 0; default: break; } return DefWindowProc(hWnd, message, wParam, lParam); }
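
    One cheap diagnostic, offered as a sketch rather than a fix: SetPixelFormat does not fail just because fewer alpha bits were available, and ChoosePixelFormat is free to return a format without destination alpha, so asking the driver what was actually selected tells you whether the missing channel is a pixel-format problem at all. Right after SetPixelFormat:

        PIXELFORMATDESCRIPTOR got = {0};
        got.nSize = sizeof(got);
        DescribePixelFormat(hDC, i, sizeof(got), &got);    // i is the index from ChoosePixelFormat
        char msg[64];
        wsprintfA(msg, "color %d alpha %d depth %d\n",
                  got.cColorBits, got.cAlphaBits, got.cDepthBits);
        OutputDebugStringA(msg);                           // expect cAlphaBits == 8 for blur-behind to work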

    Read the article

  • Rotating in OpenGL relative to the viewport

    - by Nick
    I'm trying to display an object in the view which can be rotated naturally by dragging the cursor/touchscreen. At the moment I've got X and Y rotation of an object like this glRotatef(rotateX, 0f, 1f, 0f); // Dragging along X, so spin around Y axis glRotatef(rotateY, 1f, 0f, 0f); I understand why this doesn't do what I want it to do (e.g. if you spin it right 180 degrees, up and down spinning gets reversed). I just can't figure out a way for both directions to stay left-right and up-down relative to the viewer. I can assume that the camera is fixed and looking along the Z axis. Any ideas?
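
    One common fix, sketched with the legacy matrix stack: keep the accumulated orientation in a matrix and apply each frame's drag delta as a new rotation multiplied onto the front of it, so "left-right" and "up-down" always mean the screen's axes rather than the object's:

        static GLfloat orientation[16] = { 1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1 }; // starts as identity

        // once per drag event, with dx/dy being the cursor delta converted to degrees:
        glMatrixMode(GL_MODELVIEW);
        glPushMatrix();
        glLoadIdentity();
        glRotatef(dx, 0.0f, 1.0f, 0.0f);                 // horizontal drag spins about the screen's Y
        glRotatef(dy, 1.0f, 0.0f, 0.0f);                 // vertical drag spins about the screen's X
        glMultMatrixf(orientation);                      // then whatever orientation was already there
        glGetFloatv(GL_MODELVIEW_MATRIX, orientation);   // store the combined result
        glPopMatrix();

        // when rendering, use glMultMatrixf(orientation); in place of the two glRotatef calls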

    Read the article

  • OpenGL circle rotation

    - by user350632
    I'm using following code to draw my circles: double theta = 2 * 3.1415926 / num_segments; double c = Math.Cos(theta);//precalculate the sine and cosine double s = Math.Sin(theta); double t; double x = r;//we start at angle = 0 double y = 0; GL.glBegin(GL.GL_LINE_LOOP); for(int ii = 0; ii < num_segments; ii++) { float first = (float)(x * scaleX + cx) / xyFactor; float second = (float)(y * scaleY + cy) / xyFactor; GL.glVertex2f(first, second); // output Vertex //apply the rotation matrix t = x; x = c * x - s * y; y = s * t + c * y; } GL.glEnd(); The problem is that when scaleX is different from scaleY then circles are transformed in the right way except for the rotation. In my code sequence looks like this: circle.Scale(tmp_p.scaleX, tmp_p.scaleY); circle.Rotate(tmp_p.rotateAngle); My question is what other calculations should i perform for circle to rotate properly when scaleX and scaleY are not equal?
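
    One explanation of the symptom: rotating the parametric angle only moves points along the ellipse, it never tilts the ellipse itself, which is why everything looks right until scaleX and scaleY differ. The usual order is to scale the unit-circle point first, then rotate the scaled point by the ellipse's own angle, then translate. A per-vertex sketch in plain C-style GL calls, with rotateAngle assumed to be in radians and cx, cy, scaleX, scaleY, xyFactor as in the posted code:

        double sx = x * scaleX;                        // 1. non-uniform scale -> axis-aligned ellipse
        double sy = y * scaleY;
        double ca = cos(rotateAngle), sa = sin(rotateAngle);
        double rx = ca * sx - sa * sy;                 // 2. rotate the scaled point by the ellipse's angle
        double ry = sa * sx + ca * sy;
        float first  = (float)((rx + cx) / xyFactor);  // 3. translate to the centre, as before
        float second = (float)((ry + cy) / xyFactor);
        glVertex2f(first, second);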

    Read the article

  • OpenGL Shader Compile Error

    - by Tomas Cokis
    I'm having a bit of a problem with my code for compiling shaders, namely they both register as failed compiles and no log is received. This is the shader compiling code: /* Make the shader */ Uint size; GLchar* file; loadFileRaw(filePath, file, &size); const char * pFile = file; const GLint pSize = size; newCashe.shader = glCreateShader(shaderType); glShaderSource(newCashe.shader, 1, &pFile, &pSize); glCompileShader(newCashe.shader); GLint shaderCompiled; glGetShaderiv(newCashe.shader, GL_COMPILE_STATUS, &shaderCompiled); if(shaderCompiled == GL_FALSE) { ReportFiler->makeReport("ShaderCasher.cpp", "loadShader()", "Shader did not compile", "The shader " + filePath + " failed to compile, reporting the error - " + OpenGLServices::getShaderLog(newCashe.shader)); } And these are the support functions: bool loadFileRaw(string fileName, char* data, Uint* size) { if (fileName != "") { FILE *file = fopen(fileName.c_str(), "rt"); if (file != NULL) { fseek(file, 0, SEEK_END); *size = ftell(file); rewind(file); if (*size > 0) { data = (char*)malloc(sizeof(char) * (*size + 1)); *size = fread(data, sizeof(char), *size, file); data[*size] = '\0'; } fclose(file); } } return data; } string OpenGLServices::getShaderLog(GLuint obj) { int infologLength = 0; int charsWritten = 0; char *infoLog; glGetShaderiv(obj, GL_INFO_LOG_LENGTH,&infologLength); if (infologLength > 0) { infoLog = (char *)malloc(infologLength); glGetShaderInfoLog(obj, infologLength, &charsWritten, infoLog); string log = infoLog; free(infoLog); return log; } return "<Blank Log>"; } and the shaders I'm loading: void main(void) { gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); } void main(void) { gl_Position = ftransform(); } In short I get From: ShaderCasher.cpp, In: loadShader(), Subject: Shader did not compile Message: The shader Data/Shaders/Standard/standard.vs failed to compile, reporting the error - <Blank Log> for every shader I compile I've tried replacing the file reading with just a hard coded string but I get the same error so there must be something wrong with how I'm compiling them. I have run and compiled example programs with shaders, so I doubt my drivers are the issue, but in any case I'm on a Nvidia 8600m GT. Can anyone help?
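
    One diagnosis before blaming the GL side, hedged because only these pieces are shown: in loadFileRaw the data parameter is passed by value, so the malloc'd buffer is assigned to a local copy and the caller's file pointer is never written; glShaderSource is then handed an uninitialized pointer, which would make every shader fail the same way with an empty log. A sketch of the signature fix, keeping the Uint type from the posted code:

        // pass the output pointer by reference so the caller actually receives the buffer
        bool loadFileRaw(const std::string& fileName, char*& data, Uint* size)
        {
            data = NULL;
            FILE* f = fopen(fileName.c_str(), "rt");
            if (!f) return false;
            fseek(f, 0, SEEK_END);
            *size = ftell(f);
            rewind(f);
            data = (char*)malloc(*size + 1);
            *size = fread(data, 1, *size, f);   // "rt" can shrink the byte count on Windows
            data[*size] = '\0';
            fclose(f);
            return data != NULL;
        }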

    Read the article
