Search Results

Search found 2513 results on 101 pages for 'opengl'.


  • OpenGL - Frustum not culling polygons beyond far plane

    - by Pladnius Brooks
    I have implemented frustum culling and am checking the bounding box for its intersection with the frustum planes. I added the ability to pause frustum updates, which lets me see if the frustum culling has been working correctly. When I turn around after I have paused it, nothing renders behind me, and to the left and right sides things taper off as well, just as you would expect. Beyond the clip distance (far plane), however, objects still render, and I am not sure whether it is a problem with my frustum updating code, my bounding box checking code, the matrix I am using, or what. Even though I set the far distance in the projection matrix to 3000.0f, bounding boxes well past that are still reported as inside the frustum, which isn't the case. Here is where I create my modelview matrix:

        projectionMatrix = glm::perspective(newFOV, 4.0f / 3.0f, 0.1f, 3000.0f);
        viewMatrix = glm::mat4(1.0);
        viewMatrix = glm::scale(viewMatrix, glm::vec3(1.0, 1.0, -1.0));
        viewMatrix = glm::rotate(viewMatrix, anglePitch, glm::vec3(1.0, 0.0, 0.0));
        viewMatrix = glm::rotate(viewMatrix, angleYaw, glm::vec3(0.0, 1.0, 0.0));
        viewMatrix = glm::translate(viewMatrix, glm::vec3(-x, -y, -z));
        modelViewProjectionMatrix = projectionMatrix * viewMatrix;

    The reason I scale by -1 in the Z direction is that the levels were designed to be rendered with DirectX, so I reverse the Z direction. Here is where I update my frustum:

        void CFrustum::calculateFrustum()
        {
            glm::mat4 mat = camera.getModelViewProjectionMatrix();

            // Calculate the LEFT side
            m_Frustum[LEFT][A] = mat[0][3] + mat[0][0];
            m_Frustum[LEFT][B] = mat[1][3] + mat[1][0];
            m_Frustum[LEFT][C] = mat[2][3] + mat[2][0];
            m_Frustum[LEFT][D] = mat[3][3] + mat[3][0];

            // Calculate the RIGHT side
            m_Frustum[RIGHT][A] = mat[0][3] - mat[0][0];
            m_Frustum[RIGHT][B] = mat[1][3] - mat[1][0];
            m_Frustum[RIGHT][C] = mat[2][3] - mat[2][0];
            m_Frustum[RIGHT][D] = mat[3][3] - mat[3][0];

            // Calculate the TOP side
            m_Frustum[TOP][A] = mat[0][3] - mat[0][1];
            m_Frustum[TOP][B] = mat[1][3] - mat[1][1];
            m_Frustum[TOP][C] = mat[2][3] - mat[2][1];
            m_Frustum[TOP][D] = mat[3][3] - mat[3][1];

            // Calculate the BOTTOM side
            m_Frustum[BOTTOM][A] = mat[0][3] + mat[0][1];
            m_Frustum[BOTTOM][B] = mat[1][3] + mat[1][1];
            m_Frustum[BOTTOM][C] = mat[2][3] + mat[2][1];
            m_Frustum[BOTTOM][D] = mat[3][3] + mat[3][1];

            // Calculate the FRONT side
            m_Frustum[FRONT][A] = mat[0][3] + mat[0][2];
            m_Frustum[FRONT][B] = mat[1][3] + mat[1][2];
            m_Frustum[FRONT][C] = mat[2][3] + mat[2][2];
            m_Frustum[FRONT][D] = mat[3][3] + mat[3][2];

            // Calculate the BACK side
            m_Frustum[BACK][A] = mat[0][3] - mat[0][2];
            m_Frustum[BACK][B] = mat[1][3] - mat[1][2];
            m_Frustum[BACK][C] = mat[2][3] - mat[2][2];
            m_Frustum[BACK][D] = mat[3][3] - mat[3][2];

            // Normalize all the sides
            NormalizePlane(m_Frustum, LEFT);
            NormalizePlane(m_Frustum, RIGHT);
            NormalizePlane(m_Frustum, TOP);
            NormalizePlane(m_Frustum, BOTTOM);
            NormalizePlane(m_Frustum, FRONT);
            NormalizePlane(m_Frustum, BACK);
        }

    And finally, where I check the bounding box:

        bool CFrustum::BoxInFrustum(float x, float y, float z, float x2, float y2, float z2)
        {
            // Go through all of the corners of the box and check them against each
            // plane in the frustum. If all of them are behind one of the planes,
            // then it most likely is not in the frustum.
            for(int i = 0; i < 6; i++)
            {
                if(m_Frustum[i][A] * x  + m_Frustum[i][B] * y  + m_Frustum[i][C] * z  + m_Frustum[i][D] > 0) continue;
                if(m_Frustum[i][A] * x2 + m_Frustum[i][B] * y  + m_Frustum[i][C] * z  + m_Frustum[i][D] > 0) continue;
                if(m_Frustum[i][A] * x  + m_Frustum[i][B] * y2 + m_Frustum[i][C] * z  + m_Frustum[i][D] > 0) continue;
                if(m_Frustum[i][A] * x2 + m_Frustum[i][B] * y2 + m_Frustum[i][C] * z  + m_Frustum[i][D] > 0) continue;
                if(m_Frustum[i][A] * x  + m_Frustum[i][B] * y  + m_Frustum[i][C] * z2 + m_Frustum[i][D] > 0) continue;
                if(m_Frustum[i][A] * x2 + m_Frustum[i][B] * y  + m_Frustum[i][C] * z2 + m_Frustum[i][D] > 0) continue;
                if(m_Frustum[i][A] * x  + m_Frustum[i][B] * y2 + m_Frustum[i][C] * z2 + m_Frustum[i][D] > 0) continue;
                if(m_Frustum[i][A] * x2 + m_Frustum[i][B] * y2 + m_Frustum[i][C] * z2 + m_Frustum[i][D] > 0) continue;

                // If we get here, it isn't in the frustum
                return false;
            }

            // Return true for the box being inside of the frustum
            return true;
        }
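
    Two general points that may help when debugging this kind of extraction: glm indexes matrices as mat[column][row], so the code above is the column-major form of the Gribb/Hartmann plane extraction; and the eight-corner test can be reduced to one corner per plane. A minimal sketch of that tighter test, reusing the question's A/B/C/D plane layout (the min/max parameters and function name are illustrative, not the poster's code):

        // Test only the "positive vertex": the box corner farthest along each
        // plane's normal. If even that corner is behind a plane, the whole box
        // is outside the frustum.
        bool CFrustum::BoxInFrustumPVertex(float minX, float minY, float minZ,
                                           float maxX, float maxY, float maxZ)
        {
            for (int i = 0; i < 6; i++)
            {
                float px = (m_Frustum[i][A] >= 0.0f) ? maxX : minX;
                float py = (m_Frustum[i][B] >= 0.0f) ? maxY : minY;
                float pz = (m_Frustum[i][C] >= 0.0f) ? maxZ : minZ;
                if (m_Frustum[i][A] * px + m_Frustum[i][B] * py +
                    m_Frustum[i][C] * pz + m_Frustum[i][D] <= 0.0f)
                    return false;
            }
            return true;
        }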

    Read the article

  • Opengl-es draw an .obj file, but how?

    - by lacas
    I'd like to parse an .obj file. My parser is working well, but my display is not. The .obj file is here, and my code is:

        public ObjModelParser parse() {
            long startTime = System.currentTimeMillis();
            InputStream fileIn = resources.openRawResource(resourceID);
            BufferedReader buffer = new BufferedReader(new InputStreamReader(fileIn));
            String line = "";
            Log.e("model loader", "Start parsing object " + resourceID);
            try {
                while ((line = buffer.readLine()) != null) {
                    StringTokenizer parts = new StringTokenizer(line, " ");
                    int numTokens = parts.countTokens();
                    if (numTokens == 0) continue;
                    String part = parts.nextToken();
                    if (part.equals(VERTEX)) {
                        Log.e("v ", line);
                        vertices.add(Float.parseFloat(parts.nextToken()));
                        vertices.add(Float.parseFloat(parts.nextToken()));
                        vertices.add(Float.parseFloat(parts.nextToken()));
        ....

    And my display code draws the model with TRIANGLE_STRIP and:

        gl.glDrawArrays(rendermode, 0, coords.length/dimension);

    What is the mistake here? Edited: file here to show what my good coords from my program look like for a cube, and what comes from the .obj file, which never shows. Thanks, Leslie
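
    One thing worth checking in any .obj loader: the "v" lines are only a vertex pool, and the "f" lines say which pool entries form each triangle, so the positions cannot be handed to glDrawArrays in file order. A minimal C++-style sketch of the expansion step, since the idea is language-independent (the container names are illustrative):

        // vertices:    floats collected from "v" lines, 3 per vertex
        // faceIndices: 1-based indices collected from "f" lines, 3 per triangle
        std::vector<float> drawList; // what actually goes to glDrawArrays
        for (int idx : faceIndices) {
            int v = (idx - 1) * 3;   // .obj indices start at 1, not 0
            drawList.push_back(vertices[v + 0]);
            drawList.push_back(vertices[v + 1]);
            drawList.push_back(vertices[v + 2]);
        }
        // then draw with GL_TRIANGLES, vertex count = drawList.size() / 3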

    Read the article

  • OpenGL - drawing 2D polygons shapes with texture

    - by plonkplonk
    I am trying to make a few effects in a C + GL game. So far I draw all my sprites as quads, and it works. However, I am trying to make a large ring appear at times, with a texture following that ring, as it takes less memory than a quad with the ring texture inside. The type of ring I want to make is not a round-shaped GL mesh ring (the "tube" type) but a "paper" 2D ring. That way I can modify the "width" of the ring, getting more out of the effect than a simple quad + ring texture. So far all my attempts have been... kind of ridiculous, as I don't understand GL's coordinates too well (and I can't really understand the available documentation... I am just a designer with no coder help or background. A n00b, basically).

        glBegin(GL_POLYGON);
        for(i = 0; i < 360; i += 10){
            glTexCoord2f(0, 0);   glVertex2f(Cos(i)*(H-10), Sin(i)*H);
            glTexCoord2f(0, HP);  glVertex2f(Sin(i)*(H-10), Cos(i)*(H-10));
            glTexCoord2f(WP, HP); glVertex2f(Cos(i)*H, Sin(i)*(H-10));
            glTexCoord2f(WP, 0);  glVertex2f(Sin(i)*H, Cos(i)*H);
        }
        glEnd();

    This is my last attempt, and it seems to generate a "sunburst" from the right edge of the circle instead of a ring. It's an amusing effect but definitely not what I want. Other results included the circle looking exactly the same as the textured quad (i.e. literally drawing the sprite) or something that looked like a pop-art filter while working along this train of thought. It seems my logic here is entirely flawed, so what would be the easiest way to obtain such a ring? No need to reply in code, just some guidance for a non-math-skilled user...
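
    For reference, a flat ring is usually built as a triangle strip that alternates between an inner and an outer radius at each angle step. A minimal sketch (inner, outer, and the texture wrapping are assumptions; note the loop closes at 360 so the strip meets itself):

        glBegin(GL_TRIANGLE_STRIP);
        for (int i = 0; i <= 360; i += 10) {
            float a = i * 3.14159f / 180.0f;  // degrees to radians
            float u = i / 360.0f;             // texture runs once around the ring
            glTexCoord2f(u, 0.0f); glVertex2f(cosf(a) * inner, sinf(a) * inner);
            glTexCoord2f(u, 1.0f); glVertex2f(cosf(a) * outer, sinf(a) * outer);
        }
        glEnd();

    Changing inner relative to outer is what varies the "width" of the paper ring.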

    Read the article

  • OpenGL Shading Language backwards compatibility

    - by Luca
    I've noticed that my GLSL shaders fail to compile when the GLSL version is lower than 130. What are the most critical elements for having a backward-compatible shader source? I don't need full backward compatibility, but I'd like to understand the main guidelines for having simple (forward-compatible) shaders running on GPUs with a GLSL version lower than 130. Of course the problem could be solved with the preprocessor:

        #if __VERSION__ < 130
        #define VERTEX_IN attribute
        #else
        #define VERTEX_IN in
        #endif

    But there are probably many issues that I'm not aware of. Thank you
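
    A related approach, sketched below, is to prepend a version-specific preamble from the application side instead of relying on __VERSION__ inside the source; glShaderSource accepts multiple strings and concatenates them. The glslVersion variable and macro names here are illustrative:

        // Pick a preamble based on the GLSL version detected at runtime.
        const char* preamble = (glslVersion < 130)
            ? "#version 120\n#define VERTEX_IN attribute\n"
            : "#version 130\n#define VERTEX_IN in\n";
        const char* sources[2] = { preamble, shaderBody };
        glShaderSource(shader, 2, sources, NULL); // concatenated before compiling
        glCompileShader(shader);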

    Read the article

  • OpenGL/Carbon/Cocoa Memory Management Autorelease issue

    - by Stephen Furlani
    Hoooboy, I've got another doozy of a memory problem. I'm creating a Carbon (AGL) window in C++, and it's telling me that I'm autorelease-ing it without a pool in place. Uh... what? I thought Carbon existed outside of the NSAutoreleasePool... When I call glEnable(GL_TEXTURE_2D) to do some stuff, it gives me an EXC_BAD_ACCESS warning - but if the AGL window is never getting release'd, then shouldn't it exist? Setting "set objc-non-blocking-mode" at (gdb) doesn't make the problem go away. So I guess my question is: WHAT IS UP WITH CARBON/COCOA/NSAutoreleasePool? And... are there any resources for Objective-C++? Because crap like this keeps happening to me. Thanks, -Stephen

    --- CODE ---

    Test draw function:

        void Channel::frameDraw(const uint32_t frameID)
        {
            eq::Channel::frameDraw(frameID);
            getWindow()->makeCurrent(false);
            glEnable(GL_TEXTURE_2D); // Throws error here
        }

    Make current (Equalizer API from Eyescale):

        void Window::makeCurrent(const bool useCache) const
        {
            if (useCache && getPipe()->isCurrent(this))
                return;
            _osWindow->makeCurrent();
        }

        void AGLWindow::makeCurrent() const
        {
            aglSetCurrentContext(_aglContext);
            AGLWindowIF::makeCurrent();
            if (_aglContext)
            {
                EQ_GL_ERROR("After aglSetCurrentContext");
            }
        }

    _aglContext is a valid memory location (i.e. not NULL) when I step through. -S!

    Read the article

  • opengl invert framebuffer pixels

    - by ToxIk
    I was wondering what the best way to invert the color pixels in the frame buffer is. I know it's possible to do with glReadPixels() and glDrawPixels(), but the performance hit of those calls is pretty big. Basically, what I'm trying to do is have an inverted-color crosshair which is always visible no matter what's behind it. For instance, I'd have an arbitrary alpha mask bitmap or texture, have it render without depth test after the scene is drawn, and all the framebuffer pixels under the masked (full-alpha) pixels of the texture would be inverted. I've been trying to do this with a texture, but I'm getting some strange results, and I still find all the blending options confusing.
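
    For what it's worth, classic GL has a blend-free path for exactly this effect: color logic ops. A minimal sketch (drawCrosshairQuad is a hypothetical function standing in for whatever draws the crosshair pixels):

        glEnable(GL_COLOR_LOGIC_OP);
        glLogicOp(GL_INVERT);          // dst = ~dst wherever fragments land
        drawCrosshairQuad();           // hypothetical: emits only the crosshair pixels
        glDisable(GL_COLOR_LOGIC_OP);

    Logic ops bypass blending entirely, so the mask has to come from the geometry, the alpha test, or the stencil buffer rather than from blend factors. An alternative that does go through blending is glBlendFunc(GL_ONE_MINUS_DST_COLOR, GL_ZERO) with a white source color, which computes 1 - dst per channel.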

    Read the article

  • In OpenGL vertex shader, gl_Position doesn't get homogenized..

    - by KJ
    Hi everyone, I was expecting gl_Position to automatically get homogenized (divided by w), but it seems it's not working. Why do the following produce different results?

    1)

        void main()
        {
            vec4 p;
            ... omitted ...
            gl_Position = projectionMatrix * p;
        }

    2)

        ... same as above ...
        p = projectionMatrix * p;
        gl_Position = p / p.w;

    I think the two are supposed to generate the same results, but that doesn't seem to be the case. 1 doesn't work, while 2 works as expected. Could it possibly be a precision problem? Am I missing something? This is driving me almost crazy... help needed. Many thanks in advance!

    Read the article

  • Polygonal gradients with OpenGL

    - by user146780
    I'm wondering how I could create a gradient with multiple stops and a direction if I'm making polygons. Right now I'm creating gradients by changing the color of the vertices, but this is limiting. Is there another way to do this? Thanks
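
    One common technique, sketched below, is to bake the stops into a small 1D texture and map each vertex's position along the gradient direction to a texture coordinate; the hardware then interpolates between stops for free. The stop colors and the power-of-two width are assumptions:

        // Four stops baked into a 4-texel 1D texture (RGB, left to right).
        GLubyte stops[] = { 255,0,0,  255,255,0,  0,255,0,  0,0,255 };
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_1D, tex);
        glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexImage1D(GL_TEXTURE_1D, 0, GL_RGB, 4, 0, GL_RGB, GL_UNSIGNED_BYTE, stops);
        glEnable(GL_TEXTURE_1D);
        // Per vertex: glTexCoord1f(t) where t is the vertex's projection onto
        // the gradient direction, rescaled into [0, 1].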

    Read the article

  • Use only alpha channel of texture in OpenGL?

    - by Chris
    Hey, I'm trying to draw a constant color to the framebuffer and blend it using the alpha channel from an RGBA texture. I've been looking at glBlendFunc and glBlendColor, but can't seem to figure out a way to ignore the RGB values from the texture. I'm thinking I'll have to pull out the alpha values myself and make a second texture with GL_ALPHA. Is there a better way to do this? Thanks!
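
    For reference, fixed-function GL can do this without building a second texture, via the texture environment: take RGB from the current color and alpha from the texture. A minimal sketch (the constant color is an assumption):

        glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
        glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_REPLACE);
        glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB, GL_PRIMARY_COLOR); // RGB from glColor
        glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_ALPHA, GL_REPLACE);
        glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_ALPHA, GL_TEXTURE);     // alpha from texture
        glColor4f(1.0f, 0.5f, 0.0f, 1.0f); // the constant color to draw
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);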

    Read the article

  • Problem cube mapping in OpenGL using DDS compressed images

    - by Paul Jones
    Hi all, I am having trouble cube mapping when using a DDS cube map. I'm just getting a black cube, which leads me to believe I am missing something simple. Here's the code so far:

        DDS_IMAGE_DATA *pDDSImageData = LoadDDSFile(filename);
        //compressedTexture = -1;
        if(pDDSImageData != NULL)
        {
            int height     = pDDSImageData->height;
            int width      = pDDSImageData->width;
            int numMipMaps = pDDSImageData->numMipMaps;
            int blockSize;

            GLenum cubefaces[6] =
            {
                GL_TEXTURE_CUBE_MAP_POSITIVE_X,
                GL_TEXTURE_CUBE_MAP_NEGATIVE_X,
                GL_TEXTURE_CUBE_MAP_POSITIVE_Y,
                GL_TEXTURE_CUBE_MAP_NEGATIVE_Y,
                GL_TEXTURE_CUBE_MAP_POSITIVE_Z,
                GL_TEXTURE_CUBE_MAP_NEGATIVE_Z,
            };

            if(pDDSImageData->format == GL_COMPRESSED_RGBA_S3TC_DXT1_EXT)
                blockSize = 8;
            else
                blockSize = 16;

            glGenTextures(1, &textureId);

            int nSize;
            int nOffset = 0;

            glEnable(GL_TEXTURE_CUBE_MAP);
            glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
            glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
            glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
            glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);

            for(int face = 0; face < 6; face++)
            {
                for(int i = 0; i < numMipMaps; i++)
                {
                    if(width  == 0) width  = 1;
                    if(height == 0) height = 1;

                    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_GENERATE_MIPMAP, GL_TRUE);
                    nSize = ((width+3)>>2) * ((height+3)>>2) * blockSize;
                    glCompressedTexImage2D(cubefaces[face], i, pDDSImageData->format,
                                           width, height, 0, nSize,
                                           pDDSImageData->pixels + nOffset);
                    nOffset += nSize;

                    // Half the image size for the next mip-map level...
                    width  = (width / 2);
                    height = (height / 2);
                }
            }
        }

    Once this code is called I bind the texture using glBindTexture and draw a cube using GL_QUADS and glTexCoord3f. Thank you for reading; all comments are welcome. Sorry if any of the formatting comes out wrong.
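
    Two things stand out, offered as guesses rather than a confirmed fix: the texture is generated but never bound before the glTexParameteri calls and uploads, and width/height are halved across the whole loop without being reset, so every face after the first is uploaded at 1x1. A sketch of the corrected loop, assuming each face's mip chain in the DDS file starts at the full size:

        glBindTexture(GL_TEXTURE_CUBE_MAP, textureId); // bind before setting state
        for (int face = 0; face < 6; face++)
        {
            int w = pDDSImageData->width;  // restart the mip chain for each face
            int h = pDDSImageData->height;
            for (int i = 0; i < numMipMaps; i++)
            {
                if (w == 0) w = 1;
                if (h == 0) h = 1;
                nSize = ((w + 3) >> 2) * ((h + 3) >> 2) * blockSize;
                glCompressedTexImage2D(cubefaces[face], i, pDDSImageData->format,
                                       w, h, 0, nSize, pDDSImageData->pixels + nOffset);
                nOffset += nSize;
                w /= 2;
                h /= 2;
            }
        }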

    Read the article

  • Why use multiple OpenGL context

    - by Luca
    For rendering I have a current GL context associated with a window. In the case where the application renders multiple scenes (for example using accumulation or different viewports), I think it is OK to reuse the same context. My question, indeed, is: why should I use multiple GL contexts? I read in the ARB_framebuffer_object extension spec that a MakeCurrent call can be expensive, and when the ARB_framebuffer_object extension is present I can render to a generic buffer without using MakeCurrent. Apparently the only reasons to use multiple GL contexts are to avoid having to set up context state (pixel store, transfer, point size, polygon stipple...) or to have multiple render buffer configurations available (one context with accumulation, another without). How do I determine when an alternative context is better than setting context state? Thank you all!

    Read the article

  • Using OpenGL Mathematics (GLM) in an Objective-C program

    - by user1621592
    I am trying to use GLM to load a .obj object in my Objective-C program (Xcode 4.4, Mac OS X). I have added the glm folder to my project. I try to import it using #import "glm/glm.hpp", but the program doesn't build. Some of the errors are the following (these errors are produced in the GLM files):

        namespace glm{  // Unknown type name 'namespace'
        namespace detail
        {
        .....

    It also doesn't find cstdlib, cmath, and other libraries. Does this happen because my program is in Objective-C and GLM doesn't work with this language? How can I resolve this problem? Thanks for your help.

    Read the article

  • How to write to the OpenGL Depth Buffer

    - by Mikepote
    I'm trying to implement an old-school technique where a rendered background image AND preset depth information are used to occlude other objects in the scene. For instance, if you have a picture of a room with some wires hanging from the ceiling in the foreground, these are given a shallow depth value in the depth map, and when rendered correctly this allows the character to walk "behind" the wires but in front of other objects in the room. So far I've tried creating a depth texture using:

        glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, Image.GetWidth(), Image.GetHeight(),
                     0, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, pixels);

    Then just binding it to a quad and rendering that over the screen, but it doesn't write the depth values from the texture. I've also tried:

        glDrawPixels(Image.GetWidth(), Image.GetHeight(), GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, pixels);

    But this slows down my framerate to about 0.25 fps... I know that you can do this in a pixel shader by setting gl_FragDepth to a value from the texture, but I wanted to know if I could achieve this on non-pixel-shader-enabled hardware?
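
    For completeness, the fixed-function depth upload needs the surrounding state set up so the depth actually lands in the buffer; a sketch of the usual arrangement follows. Whether this avoids the slow path depends entirely on the driver, and the GL_FLOAT type is an assumption worth profiling (byte-to-depth conversion is sometimes the expensive part):

        glEnable(GL_DEPTH_TEST);
        glDepthFunc(GL_ALWAYS);      // overwrite whatever depth is there
        glDepthMask(GL_TRUE);        // allow depth writes
        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE); // keep the color image
        glRasterPos2f(-1.0f, -1.0f); // lower-left corner with identity matrices
        glDrawPixels(Image.GetWidth(), Image.GetHeight(),
                     GL_DEPTH_COMPONENT, GL_FLOAT, depthPixels);
        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
        glDepthFunc(GL_LESS);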

    Read the article

  • texture on cube-side with opengl

    - by Tyzak
    Hello, I want to use a texture on a cube (created by glutSolidCube()). How can I define where the texture is pictured? (For example, on the "front side" of the cube.)

        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, texture[0]);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, filterMode);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, filterMode);
        glColor4f(0.8, 0.7, 0.11, 1.0);

        glPushMatrix();
        glScalef(4, 1.2, 1.5);
        glTranslatef(0, 0.025, 0);
        glutSolidCube(0.1);
        glPopMatrix();

        glDisable(GL_TEXTURE_2D);

    Thanks
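
    A general note: glutSolidCube emits no texture coordinates, so a texture cannot be placed on it directly; the usual route is to draw the faces yourself. A minimal sketch for one face (s is the half-size of the cube, an illustrative name):

        glBindTexture(GL_TEXTURE_2D, texture[0]);
        glBegin(GL_QUADS); // front face (+Z), texture covers it once
        glTexCoord2f(0, 0); glVertex3f(-s, -s,  s);
        glTexCoord2f(1, 0); glVertex3f( s, -s,  s);
        glTexCoord2f(1, 1); glVertex3f( s,  s,  s);
        glTexCoord2f(0, 1); glVertex3f(-s,  s,  s);
        glEnd();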

    Read the article

  • problems with openGl on eclipse

    - by lego69
    I'm working on Windows XP. I have a portable version of Eclipse Galileo, but I didn't find glut there, so I decided to add it using this link. I followed all the steps, and now I'm trying to compile this code:

        #include "GL/glut.h"
        #include "GL/gl.h"
        #include "GL/glu.h"

        ///////////////////////////////////////////////////////////
        // Called to draw scene
        void RenderScene(void)
        {
            // Clear the window with current clearing color
            glClear(GL_COLOR_BUFFER_BIT);

            // Flush drawing commands
            glFlush();
        }

        ///////////////////////////////////////////////////////////
        // Setup the rendering state
        void SetupRC(void)
        {
            glClearColor(0.0f, 0.0f, 1.0f, 1.0f);
        }

        ///////////////////////////////////////////////////////////
        // Main program entry point
        void main(int argc, char* argv[])
        {
            glutInit(&argc, argv);
            glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
            glutInitWindowSize(800,600);
            glutCreateWindow("Simple");
            glutDisplayFunc(RenderScene);
            SetupRC();
            glutMainLoop();
        }

    and I get these errors:

        Simple.o: In function `RenderScene':
        C:/Documents and Settings/Administrator/Desktop/workspace/open/Debug/../Simple.c:16: undefined reference to `_imp__glClear'
        C:/Documents and Settings/Administrator/Desktop/workspace/open/Debug/../Simple.c:20: undefined reference to `_imp__glFlush'
        Simple.o: In function `SetupRC':
        C:/Documents and Settings/Administrator/Desktop/workspace/open/Debug/../Simple.c:27: undefined reference to `_imp__glClearColor'
        Simple.o: In function `main':
        C:/Documents and Settings/Administrator/Desktop/workspace/open/Debug/../Simple.c:34: undefined reference to `glutInit'
        C:/Documents and Settings/Administrator/Desktop/workspace/open/Debug/../Simple.c:35: undefined reference to `glutInitDisplayMode'
        C:/Documents and Settings/Administrator/Desktop/workspace/open/Debug/../Simple.c:36: undefined reference to `glutInitWindowSize'
        C:/Documents and Settings/Administrator/Desktop/workspace/open/Debug/../Simple.c:37: undefined reference to `glutCreateWindow'
        C:/Documents and Settings/Administrator/Desktop/workspace/open/Debug/../Simple.c:38: undefined reference to `glutDisplayFunc'
        C:/Documents and Settings/Administrator/Desktop/workspace/open/Debug/../Simple.c:42: undefined reference to `glutMainLoop'
        collect2: ld returned 1 exit status

    Please, can somebody help me? Thanks in advance.
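
    These are linker errors rather than compiler errors: the headers were found, but the GL import libraries are not on the link line. On a MinGW toolchain the usual fix is adding them to the linker's libraries setting in the project properties; the equivalent command line looks roughly like this (library names assume the standard Windows GL/GLUT setup, and the exact glut library name depends on the package installed):

        gcc Simple.c -o simple -lglut32 -lglu32 -lopengl32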

    Read the article

  • Accuracy of OpenGL ES Instrument

    - by Rob Jones
    I'm developing a game for the iPhone. I've decided that 30 FPS is plenty, so I've written some code that only allows the app to present the render buffer every 1/30 of a second. When I tried to verify this with Instruments, I got varying information. On an iPod Touch (2009 edition, 32 GB) it reports 30 FPS for Core Animation Frames Per Second. On an iPhone 3G I get wildly varying results, and not just less than 30 FPS: I see 30 FPS on a regular basis, but it actually seems to hang closer to 36-39. To investigate this anomaly I added my own FPS counter to the app, updated once per second. It stays right at 29 FPS on both devices. So, does anyone have any suggestions as to what might be going on? I expect Instruments to be accurate, so it really concerns me that it appears inaccurate. It makes me think I have a bug somewhere, but I sure can't find it.

    Read the article

  • Using the FreeType lib to create text bitmaps to draw in OpenGL 3.x

    - by Andy
    At the moment I'm not too sure where my problem is. I can draw loaded images as textures no problem; however, when I try to generate a bitmap with a char on it, I just get a black box. I am confident that the problem is where I generate and upload the texture. Here is the method for that. The top section of the if statement just draws a texture of an image loaded from file (res/texture.jpg), and that draws perfectly. The else part of the if statement tries to generate and upload a texture with the char (variable char enter) on it. I will add the shaders and more of the C++ if needed, but they work fine for the image.

        void uploadTexture()
        {
            if(enter=='/'){
                // Draw the image.
                GLenum imageFormat;
                glimg::SingleImage image = glimg::loaders::stb::LoadFromFile("res/texture.jpg")->GetImage(0,0,0);
                glimg::OpenGLPixelTransferParams params = glimg::GetUploadFormatType(image.GetFormat(), 0);
                imageFormat = glimg::GetInternalFormat(image.GetFormat(),0);
                glGenTextures(1,&textureBufferObject);
                glBindTexture(GL_TEXTURE_2D, textureBufferObject);
                glimg::Dimensions dimensions = image.GetDimensions();
                cout << "Texture dimensions w " << dimensions.width << endl;
                glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, dimensions.width, dimensions.height, 0,
                             params.format, params.type, image.GetImageData());
            }else{
                // Draw the char using the FreeType lib.
                FT_Init_FreeType(&ft);
                FT_New_Face(ft, "arial.ttf", 0, &face);
                FT_Set_Pixel_Sizes(face, 0, 48);
                FT_GlyphSlot g = face->glyph;

                glGenTextures(1,&textureBufferObject);
                glBindTexture(GL_TEXTURE_2D, textureBufferObject);
                glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

                FT_Load_Char(face, enter, FT_LOAD_RENDER);
                FT_Bitmap theBitmap = g->bitmap;
                int BitmapWidth = g->bitmap.width;
                int BitmapHeight = g->bitmap.rows;
                cout << "draw char - " << enter << endl;
                cout << "g->bitmap.width - " << g->bitmap.width << endl;
                cout << "g->bitmap.rows - " << g->bitmap.rows << endl;

                int TextureWidth = roundUpToNextPowerOfTwo(g->bitmap.width);
                int TextureHeight = roundUpToNextPowerOfTwo(g->bitmap.rows);
                cout << "texture width x height - " << TextureWidth << " x " << TextureHeight << endl;

                GLubyte* TextureBuffer = new GLubyte[TextureWidth * TextureWidth];
                for(int j = 0; j < TextureHeight; ++j)
                {
                    for(int i = 0; i < TextureWidth; ++i)
                    {
                        TextureBuffer[j*TextureWidth + i] =
                            (j >= BitmapHeight || i >= BitmapWidth
                                ? 0 : g->bitmap.buffer[j*BitmapWidth + i]);
                    }
                }
                glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, TextureWidth, TextureHeight, 0,
                             GL_RGB8, GL_UNSIGNED_BYTE, TextureBuffer);
            }
        }
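
    Two things stand out in that last upload, for what it's worth: GL_RGB8 is an internal format and is not valid as the seventh (pixel format) argument of glTexImage2D, and the glyph bitmap produced by FT_LOAD_RENDER is one byte per pixel anyway (the buffer allocation also uses TextureWidth twice, which only works for square textures). A guess at the corrected call for a GL 3.x core context:

        // Single-channel upload: FreeType renders one byte per pixel.
        glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, TextureWidth, TextureHeight, 0,
                     GL_RED, GL_UNSIGNED_BYTE, TextureBuffer);
        // Without mipmaps, the default MIN_FILTER would sample as incomplete (black).
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);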

    Read the article

  • OpenGL Video RAM Limits

    - by Tamir
    I have been trying to make a cross-platform 2D online game, and my maps are made of tiles. My tileset, which I render the tiles from, is quite huge. I wanted to know how I can disable hardware rendering, or at least make it more capable. Hence, I wanted to know what the basic limits of video RAM are. As far as I know, Direct3D has texture size limits (and by that I don't mean the power-of-two texture sizes).
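
    OpenGL exposes its equivalent limit directly, and proxy textures let you pre-flight a specific allocation. A minimal sketch (the 4096x4096 RGBA size is just an example):

        GLint maxSize = 0;
        glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxSize); // per-dimension texture limit

        // A proxy texture tests whether a given size/format would succeed
        // without actually allocating anything.
        glTexImage2D(GL_PROXY_TEXTURE_2D, 0, GL_RGBA8, 4096, 4096, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL);
        GLint gotWidth = 0;
        glGetTexLevelParameteriv(GL_PROXY_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &gotWidth);
        // gotWidth == 0 means the allocation would fail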

    Read the article

  • Help understanding some OpenGL stuff

    - by shinjuo
    I am working with some code to create a triangle that moves with the arrow keys. I want to create a second object that moves independently. This is where I am having trouble: I have created the second actor, but cannot get it to move. There is too much code to post it all, so I will just post a little and see if anyone can help at all.

    ogl_test.cpp:

        #include "platform.h"
        #include "srt/scheduler.h"
        #include "model.h"
        #include "controller.h"
        #include "model_module.h"
        #include "graphics_module.h"

        class blob : public actor
        {
        public:
            blob(float x, float y) : actor(math::vector2f(x, y)) { }

            void render()
            {
                transform();
                glBegin(GL_TRIANGLES);
                glVertex3f(0.25f, 0.0f, -5.0f);
                glVertex3f(-.5f, 0.25f, -5.0f);
                glVertex3f(-.5f, -0.25f, -5.0f);
                glEnd();
                end_transform();
            }

            void update(controller& c, float dt)
            {
                if (c.left_key)  { rho += pi / 9.0f * dt; c.left_key = false; }
                if (c.right_key) { rho -= pi / 9.0f * dt; c.right_key = false; }
                if (c.up_key)    { v += .1f * dt; c.up_key = false; }
                if (c.down_key)  { v -= .1f * dt; if (v < 0.0) { v = 0.0; } c.down_key = false; }
                actor::update(c, dt);
            }
        };

        class enemyOne : public actor
        {
        public:
            enemyOne(float x, float y) : actor(math::vector2f(x, y)) { }

            void render()
            {
                transform();
                glBegin(GL_TRIANGLES);
                glVertex3f(0.25f, 0.0f, -5.0f);
                glVertex3f(-.5f, 0.25f, -5.0f);
                glVertex3f(-.5f, -0.25f, -5.0f);
                glEnd();
                end_transform();
            }

            void update(controller& c, float dt)
            {
                if (c.left_key)  { rho += pi / 9.0f * dt; c.left_key = false; }
                if (c.right_key) { rho -= pi / 9.0f * dt; c.right_key = false; }
                if (c.up_key)    { v += .1f * dt; c.up_key = false; }
                if (c.down_key)  { v -= .1f * dt; if (v < 0.0) { v = 0.0; } c.down_key = false; }
                actor::update(c, dt);
            }
        };

        int APIENTRY WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance,
                             char* lpCmdLine, int nCmdShow)
        {
            model m;
            controller control(m);
            srt::scheduler scheduler(33);
            srt::frame* model_frame = new srt::frame(scheduler.timer(), 0, 1, 2);
            srt::frame* render_frame = new srt::frame(scheduler.timer(), 1, 1, 2);
            model_frame->add(new model_module(m, control));
            render_frame->add(new graphics_module(m));
            scheduler.add(model_frame);
            scheduler.add(render_frame);

            blob* prime = new blob(0.0f, 0.0f);
            m.add(prime);
            m.set_prime(prime);

            enemyOne* primeTwo = new enemyOne(2.0f, 0.0f);
            m.add(primeTwo);
            m.set_prime(primeTwo);

            scheduler.start();
            control.start();
            return 0;
        }

    model.h:

        #include <vector>
        #include "vec.h"

        const double pi = 3.14159265358979323;

        class controller;
        using math::vector2f;

        class actor
        {
        public:
            vector2f P;
            float theta;
            float v;
            float rho;

            actor(const vector2f& init_location)
                : P(init_location), rho(0.0), v(0.0), theta(0.0) { }

            virtual void render() = 0;

            virtual void update(controller&, float dt)
            {
                float v1 = v;
                float theta1 = theta + rho * dt;
                vector2f P1 = P + v1 * vector2f(cos(theta1), sin(theta1));
                if (P1.x < -4.5f || P1.x > 4.5f) { P1.x = -P1.x; }
                if (P1.y < -4.5f || P1.y > 4.5f) { P1.y = -P1.y; }
                v = v1;
                theta = theta1;
                P = P1;
            }

        protected:
            void transform()
            {
                glPushMatrix();
                glTranslatef(P.x, P.y, 0.0f);
                glRotatef(theta * 180.0f / pi, 0.0f, 0.0f, 1.0f); // Rotate about the z-axis
            }

            void end_transform()
            {
                glPopMatrix();
            }
        };

        class model
        {
        private:
            typedef std::vector<actor*> actor_vector;
            actor_vector actors;

        public:
            actor* _prime;

            model() { }

            void add(actor* a) { actors.push_back(a); }
            void set_prime(actor* a) { _prime = a; }

            void update(controller& control, float dt)
            {
                for (actor_vector::iterator i = actors.begin(); i != actors.end(); ++i)
                {
                    (*i)->update(control, dt);
                }
            }

            void render()
            {
                for (actor_vector::iterator i = actors.begin(); i != actors.end(); ++i)
                {
                    (*i)->render();
                }
            }
        };

    Read the article

  • OpenGL/GLSL checking if shader compiled fine on intel cards

    - by clamp
    Hello, I am using this code to check if my GLSL shader compiled fine:

        glGetObjectParameterivARB(obj, GL_OBJECT_INFO_LOG_LENGTH_ARB, &infologLength);
        if (infologLength > 1)
        {
            int charsWritten = 0;
            char * const infoLog = new char[infologLength];
            glGetInfoLogARB(obj, infologLength, &charsWritten, infoLog);
            tError(infoLog, false);
            delete infoLog;
        }

    The returned string is empty on NVIDIA and ATI cards, but on Intel cards it returns the string "no errors." Now, what is the best way to find out if there are really no errors? Should I just check for this string? Or is there a convention for what glGetInfoLogARB should return?
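
    For reference, the conventional check is the compile status flag rather than the log text; the log is only informational and its content is vendor-specific. A minimal sketch with the same ARB-era API:

        GLint compiled = 0;
        glGetObjectParameterivARB(obj, GL_OBJECT_COMPILE_STATUS_ARB, &compiled);
        if (compiled != GL_TRUE)
        {
            // Compilation really failed; now read the info log to find out why.
        }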

    Read the article

  • background colour in opengl

    - by lego69
    I want to change the background color of the window after pressing a button, but my program doesn't work. Can somebody tell me why? Thanks in advance.

        int main(int argc, char* argv[])
        {
            glutInit(&argc, argv);
            glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
            glutInitWindowSize(800, 600);
            glutInitWindowPosition(300,50);
            glutCreateWindow("GLRect");
            glClearColor(1.0f, 0.0f, 0.0f, 1.0f);    // <---
            glutDisplayFunc(RenderScene);
            glutReshapeFunc(ChangeSize);
            glutMainLoop();
            system("pause");
            glClearColor(0.0f, 1.0f, 0.0f, 1.0f);    // <---
            return 0;
        }
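
    For reference: glutMainLoop never returns, so nothing after it runs, and glClearColor only sets state that the next glClear uses. A minimal sketch of changing the clear color from a GLUT keyboard callback instead (the key choice is arbitrary):

        void Keyboard(unsigned char key, int x, int y)
        {
            if (key == ' ') // spacebar swaps the background to green
            {
                glClearColor(0.0f, 1.0f, 0.0f, 1.0f);
                glutPostRedisplay(); // ask GLUT to redraw, which runs glClear again
            }
        }

        // in main, before glutMainLoop():
        //     glutKeyboardFunc(Keyboard);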

    Read the article

  • Problem displaying Vertex Buffer Object (OpenGL and Obj-C)

    - by seaworthy
    Hey, I am having a problem displaying or loading a buffer with an array of vertices. I know that the array works fine because I am able to render it using a loop and a glVertex command. I can't figure out what's wrong. Your insight is highly appreciated.

        GLuint vboId;
        glGenBuffers(1, &vboId);
        glBindBuffer(GL_ARRAY_BUFFER, vboId);
        glBufferData(GL_ARRAY_BUFFER, count*sizeof(GLfloat), array, GL_STATIC_DRAW_ARB);
        glBindBuffer(GL_ARRAY_BUFFER, 0);
        printf("%d\n", count);

        glEnableClientState(GL_VERTEX_ARRAY);
        glBindBuffer(GL_ARRAY_BUFFER, vboId);
        glVertexPointer(3, GL_FLOAT, 0, 0);
        glDisableClientState(GL_VERTEX_ARRAY);

        printf("vboId: [%hd]", vboId);
        glDeleteBuffers(1, &vboId);

    Help?
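
    One observation, offered as a guess since the draw call may simply not be shown: this snippet sets the vertex pointer but never issues a draw, then disables the client state and deletes the buffer immediately. A minimal working order looks like:

        glBindBuffer(GL_ARRAY_BUFFER, vboId);
        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(3, GL_FLOAT, 0, 0);
        glDrawArrays(GL_TRIANGLES, 0, count / 3); // draw while the state is enabled
        glDisableClientState(GL_VERTEX_ARRAY);
        glBindBuffer(GL_ARRAY_BUFFER, 0);
        // delete the buffer only when it is no longer needed for rendering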

    Read the article

  • Texture allocations being doubled in iPhone OpenGL ES

    - by Kyle
    The couple of lines below are called 15 times during initialization. The tx-size is reported at 512 every time, so this will allocate a 1 MB image in memory 15 times, for a total of 15 MB used. However, I noticed Instruments is reporting a total of 31 allocations! (15*2)+1

        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, tx-size, tx-size, 0, GL_RGBA, GL_UNSIGNED_BYTE, spriteData);
        free(spriteData);

    Likewise, in another area of my program that allocates 6 256x256x4 (256 kB) textures, I see 13 sitting there. (6*2)+1. Anyone know what's going on here? It seems like awful memory management, and I really hope it's my fault. Just to let everyone know, I'm on the simulator.

    Read the article
