Search Results

Search found 3627 results on 146 pages for 'opengl es 2 0'.

Page 56/146 | < Previous Page | 52 53 54 55 56 57 58 59 60 61 62 63  | Next Page >

  • Could someone explain why my world position reconstructed from depth is incorrect?

    - by yuumei
    I am attempting to reconstruct the world position in the fragment shader from a depth texture. I pass in the 8 frustum points in world space and interpolate them across fragments and then interpolate from near to far by the depth: highp float depth = (2.0 * CameraPlanes.x) / (CameraPlanes.y + CameraPlanes.x - texture( depthTexture, textureCoord ).x * (CameraPlanes.y - CameraPlanes.x)); // Reconstruct the world position from the linear depth highp vec3 world = mix( nearWorldPos, farWorldPos, depth ); CameraPlanes.x is the near plane, CameraPlanes.y is the far. Assuming that my frustum positions are correct, and my depth looks correct, why is my world position wrong? (My depth texture is of format GL_DEPTH_COMPONENT32F, if that matters.) Thanks! :D Update: Screenshot of world position: http://imgur.com/sSlHd So you can see it looks nearly correct. However, as the camera moves, the colours (positions) change, which they shouldn't. I can get this to work if I do the following: Write this into the depth attachment in the previous pass: gl_FragDepth = gl_FragCoord.z / gl_FragCoord.w / CameraPlanes.y; and then read the depth texture like so: depth = texture( depthTexture, textureCoord ).x However, this will kill the hardware z-buffer optimizations.
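
    One thing worth double-checking (a general note about linearising depth, not a diagnosis of this particular code): a GL_DEPTH_COMPONENT* sample is in [0,1], while the usual linearisation formula expects NDC depth in [-1,1]. A minimal C-style sketch of the remap, where nearPlane/farPlane stand in for CameraPlanes.x/CameraPlanes.y and the function name is hypothetical:

        // Converts a [0,1] depth-buffer sample into a 0..1 factor along the view ray
        // (0 at the near plane, 1 at the far plane), suitable as the mix() factor.
        float linearizeDepth(float depthSample, float nearPlane, float farPlane)
        {
            float zNdc = 2.0f * depthSample - 1.0f;          // remap [0,1] -> NDC [-1,1]
            float zEye = (2.0f * nearPlane * farPlane) /
                         (farPlane + nearPlane - zNdc * (farPlane - nearPlane)); // eye-space distance
            return (zEye - nearPlane) / (farPlane - nearPlane);
        }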

    Read the article

  • Deferred rendering order?

    - by Nick Wiggill
    There are some effects for which I must do multi-pass rendering. I've got the basics set up (FBO rendering etc.), but I'm trying to get my head around the most suitable setup. Here's what I'm thinking... The framebuffer objects: FBO 1 has a color attachment and a depth attachment. FBO 2 has a color attachment. The render passes: Render g-buffer: normals and depth (used by outline & DoF blur shaders); output to FBO no. 1. Render solid geometry, bold outlines (as in toon shader), and fog; output to FBO no. 2. (can all render via a single fragment shader -- I think.) (optional) DoF blur the scene; output to the default frame buffer OR ELSE render FBO2 directly to default frame buffer. (optional) Mesh wireframes; composite over what's already in the default framebuffer. Does this order seem viable? Any obvious mistakes?

    Read the article

  • How can I get my meshes to work with Bullet Physics?

    - by Molmasepic
    The problem is that I'm trying to use my meshes with Bullet Physics for the collision part of my game. When I attempted this method with my GLM (model-loading library by Nate Robins) model, I got a segmentation fault in the debugger, so I figured it doesn't like the model's coordinate variables. If I use Blender to export my model as a collision file, what type of file should I use? I have heard of a .bullet exporter, but I don't know how to integrate this Python script into my Blender 2.5 program.
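
    For the collision side, one common route (a sketch only, independent of GLM or the .bullet exporter) is to feed whatever triangle data your model loader already produces into a btTriangleMesh and wrap that in a btBvhTriangleMeshShape for static geometry; vertexData and indexData below are placeholders for your loader's output:

        #include <btBulletDynamicsCommon.h>

        // Build a static, concave collision shape from a triangle soup.
        btBvhTriangleMeshShape* makeMeshShape(const float* vertexData,        // x,y,z per vertex
                                              const unsigned int* indexData,  // 3 indices per triangle
                                              size_t triangleCount)
        {
            btTriangleMesh* triMesh = new btTriangleMesh();   // must outlive the shape
            for (size_t t = 0; t < triangleCount; ++t) {
                const unsigned int i0 = indexData[3*t], i1 = indexData[3*t+1], i2 = indexData[3*t+2];
                triMesh->addTriangle(btVector3(vertexData[3*i0], vertexData[3*i0+1], vertexData[3*i0+2]),
                                     btVector3(vertexData[3*i1], vertexData[3*i1+1], vertexData[3*i1+2]),
                                     btVector3(vertexData[3*i2], vertexData[3*i2+1], vertexData[3*i2+2]));
            }
            return new btBvhTriangleMeshShape(triMesh, true); // true = quantized AABB compression
        }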

    Read the article

  • fragment shader directional light positioning with camera

    - by meWantToLearn
    I'm trying to set up directional lighting in the fragment shader. So the direction of my light moves with the camera position. #version 150 core uniform sampler2D diffuseTex; uniform vec4 lightColour; uniform vec3 lightDirection; vec3 LNorm = normalize(lightDirection); vec3 normal = normalize(IN.normal); vec3 calColour = lightColour[i].rgb * intensity; gl_FragColor = vec4(diffuse.rbg * calColour, diffuse.a); It lights the entire scene.
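
    If the cause turns out to be a space mismatch (a world-space direction compared against a view-space normal, or vice versa), one sketch of a fix is to rotate the direction into the shader's working space on the CPU before uploading the uniform. The GLM/GLEW names and the helper below are assumptions for illustration, not part of the original code:

        #include <GL/glew.h>
        #include <glm/glm.hpp>
        #include <glm/gtc/type_ptr.hpp>

        // Upload a world-space light direction rotated into view space, so it matches
        // normals that were transformed by the view matrix.
        void setLightDirection(GLint lightDirectionLocation, const glm::mat4& viewMatrix,
                               const glm::vec3& lightDirWorld)
        {
            glm::vec3 lightDirView = glm::normalize(glm::mat3(viewMatrix) * lightDirWorld); // rotation part only
            glUniform3fv(lightDirectionLocation, 1, glm::value_ptr(lightDirView));
        }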

    Read the article

  • PNG file loading error in ImageMagick

    - by khanhhh89
    I'm trying to understand tutorial 16 at http://ogldev.atspace.co.uk, which requires the image-processing library ImageMagick. But when I run the tutorial, I encounter the following error: freeglut: failed to change screen settings Error loading textures 'test.png': no decode delegates for this image format 'C:/../appdata/magick-6024a_cIJcw90t-j'@error/constitute.c/ReadImage/552 I searched Google and found that my ImageMagick library does not have PNG delegates, but when I checked the ImageMagick configuration, PNG appears in its delegate list. Command line: convert -configure Result: LIB_VERSION 0x687 DELEGATES: bzlib, freetype, jpeg, jp2, lcms, png, tiff, x11, xml, wmf, zlib Could you explain this error to me? Thanks so much!

    Read the article

  • Trying to install Proprietary Nvidia Graphics Drivers

    - by Peter Snow
    After reading and trying many different suggestions for some hours, I returned to this how-to: https://help.ubuntu.com/community/BinaryDriverHowto/Nvidia The first problem I encounter is how to identify which of the listed drivers support my Nvidia GeForce 630M graphics card. Following the links doesn't really help, since it is not stated there either (except where support for a new driver was added later, which is explicitly stated, but the original devices covered are not). However, even if I knew, if it doesn't appear in the 'Additional Drivers' dialogue (see below), how will I install it? Second issue: the article goes on to say that available drivers for my hardware are usually listed in 'Additional Drivers'. In my case, they aren't. Unfortunately, it doesn't tell me how to correct that or work around it. I've checked the BIOS and there is no way offered there to disable the integrated graphics, only the Nvidia graphics. I've also tried each available option in this: $ sudo update-alternatives --config i386-linux-gnu_gl_conf My system is an Acer Aspire 4752G bought in May 2012. I'm running Ubuntu 12.04 LTS. uname -a: 3.2.0-38-generic-pae #61-Ubuntu SMP Tue Feb 19 12:39:51 UTC 2013 i686 i686 i386 GNU/Linux It's 64-bit hardware but I installed a 32-bit OS for greater software compatibility. Running $ sudo tail -fn 500 /var/log/Xorg.0.log | grep '(EE)' returns: (WW) warning, (EE) error, (NI) not implemented, (??) unknown. [ 28.886] (EE) Failed to initialize GLX extension (Compatible NVIDIA X driver not found) The reason for wanting the proprietary drivers is that my laptop comes with a 3D-accelerated graphics adaptor, and rather than confining myself to struggling with the on-board graphics, I would rather use it. I also want to experiment with using it for bitmining (which uses the GPU for computing power).

    Read the article

  • Depth buffer values reset on shader change?

    - by bobobobo
    I have 2 different shaders, and when I change the shader (glUseProgram), it seems that the depth information is lost, because everything drawn with the 2nd shader appears completely on top of anything drawn by the first shader. If I switch the order of shader use/drawing, then it's the same (the last drawn object always appears on top of the first drawn object if there is a shader change between the 2 objects, even if the last drawn object is further away).
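
    Switching programs with glUseProgram does not clear or reset the depth buffer by itself, so one thing worth ruling out (a checklist-style sketch, not a diagnosis; programA/programB are placeholder names) is that the depth state is still what you expect around both draws:

        glEnable(GL_DEPTH_TEST);   // must stay enabled for both programs
        glDepthMask(GL_TRUE);      // make sure an earlier pass did not disable depth writes
        glDepthFunc(GL_LESS);      // the typical default comparison

        glUseProgram(programA);
        // ... draw the first object ...
        glUseProgram(programB);
        // ... draw the second object; the same depth state still applies ...

    If one of the two shaders writes gl_FragDepth and the other does not, the values they produce also need to be in the same range, or the depth comparison between the two draws becomes meaningless.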

    Read the article

  • 3d Picking under reticle

    - by Wolftousen
    I'm currently trying to work out some 3D picking code that I started years ago but lost interest in once the assignment was completed (this part wasn't actually part of the assignment). I am not using the mouse coords for picking; I'm just using the position in 3D space and a ray directly out from there. A small hitch, though, is that I want to use a cone and not a ray. Here are the variables I'm using: float iReticleSlope = 95/3000; //inverse reticle slope float baseReticle = 1; //radius of the reticle at z = 0 float maxRange = 3000; //max range to target Quaternion orientation; //the cameras orientation Vector3d position; //the cameras position Then I loop through each object in the world: Vector3d transformed; //object position after transformations float d, r; //holder variables for(i = 0; i < objects.length; i++) { transformed = position - objects[i].position; //transform the position relative to camera orientation.multiply(transformed); //orient the object relative to the camera if(transformed.z < 0) { d = sqrt(transformed[0] * transformed[0] + transformed[1] * transformed[1]); r = -transformed[2] * iReticleSlope + objects[i].radius; if(d < r && -transformed[2] - objects[i].radius <= maxRange) { //the object is under the reticle } else { //the object is not under the reticle } } else { //the object is not under the reticle } } Now this all works fine and dandy until the window ratio doesn't match the resolution ratio. Is there any simple way to account for that?

    Read the article

  • Bad texture on model with different GPU

    - by Pacha
    I have some kind of distortion on the texture of my 3D model. It works perfectly well on an AMD GPU, but when testing on an integrated Intel HD graphics card it has a weird issue. I don't have a problem with the rest of my entities, as they are not scaled. The models with the problems are scaled, as my engine supports different sizes for the platforms. I am using Ogre3D as the rendering engine, and GLSL as the shader language. Vertex shader: #version 120 varying vec2 UV; void main() { UV = gl_MultiTexCoord0; gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex; } Fragment shader: #version 120 varying vec2 UV; uniform sampler2D diffuseMap; void main(void) { gl_FragColor = texture(diffuseMap, UV); } Screenshot (the error is on the right and left sides; the top and bottom parts are rendered perfectly well):

    Read the article

  • Infinite terrain shadows

    - by user35399
    I'm creating an infinite terrain engine, which generates the terrain either with fractals or noise. How can I make dynamic shadows for the sun on this terrain if I don't know in advance what will be rendered in front of the sun? My terrain: the sun is the only light and it is directional; my terrain is generated on a plane positioned in front of the camera, frustum culled, and sized to fit the viewing frustum. It is height-mapped with a generated noise texture, using tessellation shaders. Video: http://www.youtube.com/watch?v=tk6yFwYusOs Dynamic shadows with the infinite terrain.

    Read the article

  • HIB Games (Aquaria & Penumbra) cannot find libGL.so.1 even though it exists

    - by aberration
    I'm trying to play some Humble Indie Bundle (HIB) games, but I'm getting errors with Aquaria and Penumbra: Overture that are related to the libGL.so.1 file. Aquaria gives this error on launch: Message: SDL_GL_LoadLibrary Error: Failed loading libGL.so.1 And Penumbra: Overture gives this error on launch: ./penumbra.bin: error while loading shared libraries: libGL.so.1: cannot open shared object file: No such file or directory I know that the file libGL.so.1 does exist (in /usr/lib/x86_64-linux-gnu/mesa/libGL.so.1). From past errors like this, I'm guessing that you need to symlink the library to another directory, but I can't figure out which one.

    Read the article

  • I get GL_INVALID_VALUE after calling glTexSubImage2D

    - by user892644
    I am trying to figure out why my texture allocation does not work. Here is the code: glTexStorage2D(GL_TEXTURE_2D, 2, GL_RGBA8, 2048, 2048); glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 2048, 2048, GL_RGB, GL_UNSIGNED_SHORT_5_6_5_REV, &BitMap[0]); glTexSubImage2D returns GL_INVALID_VALUE, but the maximum texture size allowed is 16384x16384 on my card. The source image is 16-bit (Red 5, Green 6, Blue 5).

    Read the article

  • Creating a voxel world with 3D arrays using threads

    - by Sean M.
    I am making a voxel game (a bit like Minecraft) in C++(11), and I've come across an issue with creating a world efficiently. In my program, I have a World class, which holds a 3D array of Region class pointers. When I initialize the world, I give it a width, height, and depth so it knows how large of a world to create. Each Region is split up into a 32x32x32 area of blocks, so as you may guess, it takes a while to initialize the world once the world gets to be above 8x4x8 Regions. In order to alleviate this issue, I thought that using threads to generate different levels of the world concurrently would make it go faster. Having not used threads much before this, and being still relatively new to C++, I'm not entirely sure how to go about implementing one thread per level (a level being an xz plane with a height of 1) when there is a variable number of levels. I tried this: for(int i = 0; i < height; i++) { std::thread th(std::bind(&World::load, this, width, height, depth)); th.join(); } Where load() just loads all Regions at height "height". But that executes the threads one at a time (which makes sense, looking back), and that of course takes as long as generating all Regions in one loop. I then tried: std::thread t1(std::bind(&World::load, this, w, h1, h2 - 1, d)); std::thread t2(std::bind(&World::load, this, w, h2, h3 - 1, d)); std::thread t3(std::bind(&World::load, this, w, h3, h4 - 1, d)); std::thread t4(std::bind(&World::load, this, w, h4, h - 1, d)); t1.join(); t2.join(); t3.join(); t4.join(); This works in that the world loads about 3-3.5 times faster, but it forces the height to be a multiple of 4, and it also gives the same exact VAO object to every single Region, which each need individual VAOs in order to render properly. The VAO of each Region is set in the constructor, so I'm assuming that somehow the VAO number is not thread safe or something (again, unfamiliar with threads). So basically, my question is two-part: How do I implement a variable number of threads that all execute at the same time, and force the main thread to wait for them using join() without stopping the other threads? How do I make the VAO objects thread safe, so when a bunch of Regions are being created at the same time across multiple threads, they don't all get the exact same VAO? Turns out it has to do with GL contexts not working across multiple threads. I moved the VAO/VBO creation back to the main thread. Fixed! Here is the code for block.h/.cpp, region.h/.cpp, and CVBObject.h/.cpp which controls VBOs and VAOs, in case you need it. If you need to see anything else just ask. EDIT: Also, I'd prefer not to have answers that are like "you should have used boost". I'm trying to do this without boost to get used to threads before moving onto other libraries.
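
    For the first part (a variable number of workers that all run concurrently and are joined afterwards), a minimal C++11 sketch is to keep the std::thread handles in a vector. The loadAllLevels wrapper is hypothetical, and World::load(width, level, depth) generating exactly one level is an assumption about the existing signature:

        #include <thread>
        #include <vector>

        void World::loadAllLevels(int width, int height, int depth)
        {
            std::vector<std::thread> workers;
            workers.reserve(height);
            for (int level = 0; level < height; ++level) {
                // Each worker generates one xz-slice; nothing is joined until all are launched.
                workers.emplace_back(&World::load, this, width, level, depth);
            }
            for (std::thread& t : workers) {
                t.join();   // the main thread blocks here until every level is finished
            }
        }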

    Read the article

  • How to pass one float as four unsigned chars to a shader with glVertexAttribPointer?

    - by Kog
    For each vertex I use two floats as position and four unsigned bytes as color. I want to store all of them in one table, so I tried casting those four unsigned bytes to one float, but I am unable to do that correctly... All in all, my tests came to one point: GLfloat vertices[] = { 1.0f, 0.5f, 0, 1.0f, 0, 0 }; glEnableVertexAttribArray(0); glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 2 * sizeof(float), vertices); // VER1 - draws red triangle // unsigned char colors[] = { 0xff, 0, 0, 0xff, 0xff, 0, 0, 0xff, 0xff, 0, 0, // 0xff }; // glEnableVertexAttribArray(1); // glVertexAttribPointer(1, 4, GL_UNSIGNED_BYTE, GL_TRUE, 4 * sizeof(GLubyte), // colors); // VER2 - draws greenish triangle (not "pure" green) // float f = 255 << 24 | 255; //Hex:0xff0000ff // float colors2[] = { f, f, f }; // glEnableVertexAttribArray(1); // glVertexAttribPointer(1, 4, GL_UNSIGNED_BYTE, GL_TRUE, 4 * sizeof(GLubyte), // colors2); // VER3 - draws red triangle int i = 255 << 24 | 255; //Hex:0xff0000ff int colors3[] = { i, i, i }; glEnableVertexAttribArray(1); glVertexAttribPointer(1, 4, GL_UNSIGNED_BYTE, GL_TRUE, 4 * sizeof(GLubyte), colors3); glDrawArrays(GL_TRIANGLES, 0, 3); The above code is used to draw one simple red triangle. My question is: why do versions 1 and 3 work correctly, while version 2 draws some greenish triangle? The hex values are the ones I read by inspecting the variables in the debugger. They are equal for versions 2 and 3 - so what causes the difference?
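
    The difference between versions 2 and 3 (a sketch of the underlying issue, reusing the 0xff0000ff value from the question): assigning the int 255 << 24 | 255 to a float converts its numeric value, so the bit pattern handed to the GPU is no longer the four original bytes. To keep the exact bytes inside a float-typed array, the bits have to be reinterpreted rather than converted, for example:

        #include <cstdint>
        #include <cstring>

        uint32_t packed = 0xff0000ffu;   // bytes 0xff,0x00,0x00,0xff in memory on a little-endian machine
        float reinterpreted;
        std::memcpy(&reinterpreted, &packed, sizeof(reinterpreted));  // same 32 bits, now typed as float
        float colors2[] = { reinterpreted, reinterpreted, reinterpreted };
        // Handing 'colors2' to glVertexAttribPointer(..., GL_UNSIGNED_BYTE, GL_TRUE, ...)
        // now delivers the same bytes as version 3, so the triangle comes out red.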

    Read the article

  • Easy road from DisplayObject to Molehill?

    - by Bart van Heukelom
    I have a finished Flash game which is rendered using the built-in display tree, i.e. Bitmaps contained in Sprites (and some text here and there, a few vector graphics, and one bitmap-filled shape). For extra performance, I'd like it to use Molehill for rendering, but that's not possible out of the box. What's the easiest way to make this game use Molehill when available, but fall back to the current method if it's not available?

    Read the article

  • Edge flicker when moving Camera (2D)

    - by Matthias Reisner
    I have an orthographic camera. I have a fixed landscape texture and a texture for a moveable object. If the object moves to the right, the camera will also move with the object. When I also draw a score text that should have a fixed position on the screen, that score text's position gets updated too whenever the camera's position is updated, so that it looks like it is fixed on the screen. But if I do that, I have some edge flickering at the text object. I'm using SpriteBatch! Is there another approach to implement a fixed-position object on the screen?

    Read the article

  • Efficient skeletal animation

    - by Will
    I am looking at adopting a skeletal animation format (as prompted here) for an RTS game. The individual representation of each model on-screen will be small, but there will be lots of them! In skeletal animation formats such as MD5, each individual vertex can be attached to an arbitrary number of joints. How can you efficiently support this whilst doing the interpolation in GLSL? Or do engines do their animation on the CPU? Or do engines set arbitrary limits on the maximum joints per vertex and invoke nop multiplies for those joints that don't use the maximum number? Are there games that use skeletal animation in an RTS-like setting, thus proving that on integrated graphics cards I have nothing to worry about in going the bones route?
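
    A common compromise (a sketch of the usual trade-off, not something any particular engine mandates) is to cap influences at 4 per vertex, pad unused slots with weight 0, and let the vertex shader always blend 4 matrices; the wasted multiplies for padded slots are cheap compared with per-vertex branching. A hypothetical vertex layout for that approach:

        #include <cstdint>

        struct SkinnedVertex {
            float   position[3];
            float   normal[3];
            uint8_t boneIndices[4];   // indices into the per-draw array of bone matrices
            float   boneWeights[4];   // normalised so they sum to 1; unused slots hold 0
        };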

    Read the article

  • NPOT texture and video memory usage

    - by Eonil
    I read in this Q&A that an NPOT texture will take as much memory as the next POT-sized texture. That means it doesn't give any benefit over a POT texture with proper management (maybe it's even worse, because NPOT should be slower!). Is this true? Does an NPOT texture take and waste the same memory as a POT texture? I am considering an NPOT texture for post-processing, so if it doesn't give a memory-space benefit, using an NPOT texture is meaningless to me. Maybe the answer is different for each platform. I am targeting mobile devices, such as iPhone or Android. Does an NPOT texture take the same amount of memory on mobile GPUs?

    Read the article

  • What library should I use for 2D Geometry? [closed]

    - by Luka
    I've been working on a 2D game in Java, but found that Java just didn't cut it for me and had forced me into a lot of bad design choices, so I've decided to port all my work to C++. The main reason I've decided to change to C++ is that I had reached a point where I had 3 geometry libraries (the native one, one from the game engine and one to handle "complex" polygons), none of which worked very well together, and I couldn't keep track of them. I'm new to C++, but I know all the basics. My question is: what would be a good geometry library to use? Ideally it should handle integer and decimal data types and have point, line, and polygon classes which can check for intersection and containment. Thanks in advance, Luka

    Read the article

  • Rendering only a part of the screen in high detail

    - by Bart van Heukelom
    If graphics are rendered for a large viewing angle (e.g. a very large TV or a VR headset), the viewer can't actually focus on the entire image, just a part of it. (Actually, this is the case for regular sized screens as well.) Combined with a way to track the viewer's eyes, you could theoretically exploit this and render the graphics away from the viewer's focus with progressively less details and resolution, gaining performance, without losing perceived quality. Are there any techniques for this available or under development today?

    Read the article

  • Android OpenGL ES 2.0: rendering and input at different resolutions

    - by kkan
    I'm currently developing a sprite-based 2D game for Android using OpenGL ES 2.0. I've got some basic rendering done that mimics the SpriteBatch functionality of XNA (draw sprite, rotation, color). All of this works for a fixed projection matrix, but Android has a lot of screen sizes. Q1) Would this be an okay method to scale the drawing up/down? 1) Draw the whole screen to a texture. 2) Draw the above texture as a quad to the device. I found the above through some searching; not sure if it's the best one - are there any alternatives? Q2) How do you handle input for different resolutions? I currently get the position of a touch and use it raw. Would it be okay to get the position, scale it to the size of the texture used for rendering, and then perform calculations on it? Thanks.

    Read the article

  • I don't understand why one of my VBOs is overwritten by another

    - by Alays
    To create a VBO I use this function: public void loadVBO(){ vboID = GL15.glGenBuffers(); GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, vboID); GL15.glBufferData(GL15.GL_ARRAY_BUFFER, buf, GL15.GL_STATIC_DRAW); // Put the position coordinates in attribute list 0 GL20.glVertexAttribPointer(0, 4, GL11.GL_FLOAT, false,4*4+4*4+4*4+2*4 , 0); // Put the color components in attribute list 1 GL20.glVertexAttribPointer(1, 4, GL11.GL_FLOAT, false,4*4+4*4+4*4+2*4 , 4*4); GL20.glVertexAttribPointer(2, 4, GL11.GL_FLOAT, false,4*4+4*4+4*4+2*4 , 4*4+4*4); // Put the texture coordinates in attribute list 2 GL20.glVertexAttribPointer(3, 4, GL11.GL_FLOAT, false,4*4+4*4+4*4+2*4 , 4*4+4*4+4*4); GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, 0); } To display a VBO I use this function: public void displayVBO(){ GL15.glBindBuffer(GL15.GL_ELEMENT_ARRAY_BUFFER, vboID); GL20.glEnableVertexAttribArray(0); GL20.glEnableVertexAttribArray(1); GL20.glEnableVertexAttribArray(2); GL20.glEnableVertexAttribArray(3); GL11.glDrawArrays(GL_TRIANGLES, 0, buf.capacity()); GL20.glDisableVertexAttribArray(0); GL20.glDisableVertexAttribArray(1); GL20.glDisableVertexAttribArray(2); GL20.glDisableVertexAttribArray(3); GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, 0); } So when I call map.loadVBO() and then ocean.loadVBO(), I think the second call overwrites the first VBO, though I don't know how... When I call map.display() and ocean.display(), I get the ocean drawn 2 times... Thanks.

    Read the article

  • How to tell what part of a 3D cube was touched

    - by user2539517
    I am writing a rather simple Android game and I am using OpenGL to draw a 3D cube that spins about the X, Y and Z axes, and I need to know where the user has touched on the texture of the cube. The texture is a simple square bitmap (100x100) that has a smaller square in the center. I need to know if the user touches the inner square, as well as tell which face of the cube the user touches. Does anyone know how this can be accomplished? If not, can anyone give some pseudocode on how to tell where the ray correlates to the texture, or at least point me in the right direction? The textures of each face are like this: The code I am using is from http://www3.ntu.edu.sg/home/ehchua/programming/android/Android_3D.html (section 2.9, Example 6a: Photo-Cube), which is a port to Android of NeHe Lesson 6.

    Read the article

  • Zooming to point of interest

    - by user1010005
    I have the following variables: Point of interest, which is the position (x,y) in pixels of the place to focus. Screen width/height, which are the dimensions of the window. Zoom level, which sets the zoom level of the camera. And this is the code I have so far: void Zoom(int pointOfInterestX,int pointOfInterestY,int screenWidth, int screenHeight,int zoomLevel) { glTranslatef( (pointOfInterestX/2 - screenWidth/2), (pointOfInterestY/2 - screenHeight/2),0); glScalef(zoomLevel,zoomLevel,zoomLevel); } I want to zoom in/out but keep the point of interest in the middle of the screen, but so far all of my attempts have failed and I would like to ask for some help.
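
    A sketch of one way to keep the point of interest centred while zooming, assuming a 2D orthographic projection in pixel units (move the origin to the screen centre, scale there, then bring the point of interest under that centre); the function name is hypothetical:

        #include <GL/gl.h>

        void zoomAboutPoint(float poiX, float poiY, int screenWidth, int screenHeight, float zoomLevel)
        {
            glTranslatef(screenWidth * 0.5f, screenHeight * 0.5f, 0.0f);  // origin -> screen centre
            glScalef(zoomLevel, zoomLevel, 1.0f);                         // zoom about that centre
            glTranslatef(-poiX, -poiY, 0.0f);                             // point of interest -> centre
        }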

    Read the article

  • How do I use unpackHalf2x16?

    - by user1032861
    I'm trying to use (un)packHalf2x16, without success so far. I'm drawing with: glVertexAttribIPointer(0, 2, GL_UNSIGNED_INT, 0, 0); glEnableVertexAttribArray(0); glBindBuffer(GL_ARRAY_BUFFER, vbo); glDrawArrays(GL_POINTS, 0, n_points); glDisableVertexAttribArray(0); and in the shader: #version 330 core #extension GL_ARB_shading_language_packing : require in uvec2 A0; // (...) vec4 t = vec4(unpackHalf2x16(A0.x), unpackHalf2x16(A0.y)); But nothing gets drawn. I'm pretty sure the buffer's content is right, and if I use vec4 t = vec4(0); I can see that the rest is working properly. How is this packing / unpacking thing supposed to work? I can't find any example.
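
    On the CPU side that has to match unpackHalf2x16, each uint in the uvec2 is expected to carry two half-floats. A sketch using GLM's packing helper (glm::packHalf2x16 from <glm/gtc/packing.hpp>); the packPoint helper is hypothetical:

        #include <glm/glm.hpp>
        #include <glm/gtc/packing.hpp>

        // Packs one vec4 point into the uvec2 layout the shader's 'in uvec2 A0' expects.
        glm::uvec2 packPoint(const glm::vec4& p)
        {
            return glm::uvec2(glm::packHalf2x16(glm::vec2(p.x, p.y)),   // becomes A0.x
                              glm::packHalf2x16(glm::vec2(p.z, p.w)));  // becomes A0.y
        }
        // Upload consecutive packed pairs into the VBO; glVertexAttribIPointer(0, 2, GL_UNSIGNED_INT, 0, 0)
        // then hands them to the shader without any float conversion.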

    Read the article
