Search Results

Search found 698 results on 28 pages for 'shader'.

Page 21/28 | < Previous Page | 17 18 19 20 21 22 23 24 25 26 27 28  | Next Page >

  • how to add water effect to an image

    - by brainydexter
    This is what I am trying to achieve: a given image occupies roughly the top 3/4 of the screen, and the remaining 1/4 is a reflection of it with some waves (a water effect) on it. I'm not sure how to do this, but here's my approach:
        1. Render the given texture to another texture, the "mirror" texture (maybe FBOs can help me here?).
        2. Invert the mirror texture (scale it by -1 along Y).
        3. Render the mirror texture starting at 3/4 of the screen height.
        4. Add some noise to it, OR, using a pixel shader and time, set something like pixel.z = sin(time) to make it wavy.
    (Tech: C++/OpenGL/GLSL) Is my approach correct? Is there a better way to do this? Also, can someone tell me whether Framebuffer Objects would be the right tool here? Thanks.
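
    For illustration, a minimal sketch of the kind of fragment shader that could draw the wavy reflection strip, assuming the scene has already been rendered into a texture via an FBO (uniform and varying names are invented):

        // Hypothetical fragment shader for the mirrored strip: sample the
        // scene texture (rendered into an FBO) with a time-animated offset.
        #version 120
        uniform sampler2D u_sceneTex;  // colour attachment of the reflection FBO (assumed)
        uniform float u_time;          // seconds, updated each frame (assumed)
        varying vec2 v_texCoord;       // 0..1 across the reflection strip

        void main()
        {
            // Flip vertically so the strip mirrors the scene, then wobble the
            // x coordinate with a sine to fake ripples.
            vec2 uv = vec2(v_texCoord.x, 1.0 - v_texCoord.y);
            uv.x += 0.01 * sin(u_time * 3.0 + v_texCoord.y * 40.0);
            gl_FragColor = texture2D(u_sceneTex, uv);
        }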

    Read the article

  • Order independent transparency in particle system

    - by Stepan Zastupov
    I'm writing a particle system and would like to find a trick to achieve proper alpha blending without sorting particles, because:
        • Each particle is a point sprite in a single mesh, so I can't use the scene graph's ability to sort transparent nodes. The system's node should be properly sorted, though.
        • Particle positions are computed in the shader from the initial velocity, acceleration and time. In order to sort the system I would have to perform all of these computations on the CPU, which is something I want to avoid.
        • Sorting hundreds of particles against the camera position and uploading them to the GPU each frame seems to be quite a heavy operation.
    Alpha testing seems to be fast enough on GLES 2.0 and works fine for non-transparent but "masked" textures. Still, it's not enough for semi-transparent particles. How would you handle this?
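
    For what it's worth, one common workaround is to pick a commutative blend mode so draw order stops mattering, e.g. additive blending (glBlendFunc(GL_ONE, GL_ONE)) with depth writes disabled, which suits glowing particles; a GLES 2.0 fragment-shader sketch under that assumption (names invented):

        // Fragment shader sketch for additively blended particles (GLES 2.0).
        // With glBlendFunc(GL_ONE, GL_ONE) and glDepthMask(GL_FALSE) set on the
        // CPU side, the result is independent of particle order, so no sorting.
        precision mediump float;
        uniform sampler2D u_texture;   // particle sprite (assumed name)
        uniform float u_intensity;     // per-system brightness (assumed)
        varying vec2 v_texCoord;

        void main()
        {
            vec4 c = texture2D(u_texture, v_texCoord);
            // Pre-multiply by alpha so transparent texels add nothing.
            gl_FragColor = vec4(c.rgb * c.a * u_intensity, 1.0);
        }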

    Read the article

  • Is a Single Texture Cube Map Possible?

    - by smoth190
    I'm currently developing a test project to explore OpenGL 3's texturing abilities. I have a simple cube, made of 8 vertices and 36 indices. I want each of the cube's faces to have a different texture, so I devised a texture where I made it obvious which sections I want visible (I hope...). In Direct3D, I once made a skybox, and I used a cube map; however, I had to split it into 6 different textures. This is annoying and hard to manage; it would be nice to have just one texture. Is this even possible? I read somewhere that I could do this by duplicating vertices. Is that a good idea? Someone else said I could do it in the shader, but that also baffles me...
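
    If the aim is a single texture object with one image per face, a cube map sampled with the cube's object-space position works without duplicating vertices or adding UVs; a rough GLSL 3.30 fragment-shader sketch (names invented):

        #version 330 core
        // Sample a single cube-map texture using the interpolated object-space
        // position of the unit cube as the lookup direction, so the 8 shared
        // vertices need no per-face UVs.
        uniform samplerCube u_cubeTex;   // assumed: one GL_TEXTURE_CUBE_MAP with six faces
        in vec3 v_objectPos;             // object-space position passed from the vertex shader
        out vec4 fragColor;

        void main()
        {
            fragColor = texture(u_cubeTex, normalize(v_objectPos));
        }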

    Read the article

  • Impact of variable-length loops on GPU shaders

    - by Will
    It's popular to render procedural content on the GPU, e.g. in the demoscene (drawing a single quad to fill the screen and letting the GPU compute the pixels). Ray marching is popular: this means the GPU executes some unknown number of loop iterations per pixel (although you can have an upper bound like maxIterations). How does having a variable-length loop affect shader performance? Imagine this simple ray-marching pseudocode:
        t = 0.f;
        while (t < maxDist) {
            p = rayStart + rayDir * t;
            d = DistanceFunc(p);
            t += d;
            if (d < epsilon) { ... emit p; return; }
        }
    How are the various mainstream GPU families (Nvidia, ATI, PowerVR, Mali, Intel, etc.) affected? Vertex shaders, but particularly fragment shaders? How can it be optimised?
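
    For what it's worth, a common way to express this in GLSL is a bounded for loop with an early break, which gives the compiler a hard upper iteration count while still letting converged rays exit early; a sketch (the distance function is a placeholder):

        // Sketch of a bounded ray-march in GLSL: the constant iteration cap gives
        // the compiler a static upper bound, the break lets converged rays exit early.
        const int   MAX_STEPS = 128;
        const float EPSILON   = 0.001;
        const float MAX_DIST  = 100.0;

        // Example distance function: a unit sphere at the origin (placeholder).
        float sceneDistance(vec3 p)
        {
            return length(p) - 1.0;
        }

        // Returns the distance travelled along the ray; >= MAX_DIST means a miss.
        float rayMarch(vec3 rayStart, vec3 rayDir)
        {
            float t = 0.0;
            for (int i = 0; i < MAX_STEPS; ++i) {
                float d = sceneDistance(rayStart + rayDir * t);
                if (d < EPSILON || t > MAX_DIST) break;
                t += d;
            }
            return t;
        }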

    Read the article

  • Skip the first RenderTarget when writing to MRT with Opaque blending

    - by cubrman
    I am writing to three render targets and want to know how to tell the GPU not to write to the first RT. When you write a shader you can simply output less data than you have RTs (like outputting a single float4 when writing to three RTs) and only the first RT will be affected, but you cannot direct that output anywhere other than COLOR0, then 1, etc. Is there a way to write to several RTs but skip the first target? If I output zeroes, the data in the target becomes zeroes, but I need it to remain untouched in the first target and only change in the specified ones. The reason I need this is to prevent data loss when calling SetRenderTarget() with DiscardContents RTs. I write to all the RTs at one point, and afterwards I need to write only to the specified ones. It must be the first texture because I have a depth buffer linked to it (XNA 4.0). Thanks.

    Read the article

  • scaling point sprites with distance

    - by Will
    How can you scale a point sprite by its distance from the camera? In my GLSL vertex shader,
        gl_PointSize = size / gl_Position.w;
    seems to be along the right track; for any given scene all sprites look nicely scaled by distance. Is this correct? How do you compute the proper scaling for my vertex attribute size? I want each sprite to be scaled by the modelview matrix. I had played with arbitrary values, and it seems that size is the radius in pixels at the camera, and is not in modelview scale. I've also tried:
        gl_Position = pMatrix * mvMatrix * vec4(vertex, 1.0);
        vec4 v2 = pMatrix * mvMatrix * vec4(vertex.x, vertex.y + 0.5 * size, vertex.z, 1.0);
        gl_PointSize = length(gl_Position.xyz - v2.xyz) * gl_Position.w;
    But this makes the sprites bigger in the distance, rather than smaller.
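
    For illustration, one way to compute a pixel size from a world-space diameter is to push it through the projection's y-scale and the viewport height (a sketch, not the only approach; viewportHeight is an assumed uniform, and desktop GL also needs GL_PROGRAM_POINT_SIZE enabled):

        // Vertex shader sketch: derive gl_PointSize (in pixels) from a
        // model-space diameter so sprites shrink with distance.
        #version 120
        uniform mat4  pMatrix;          // projection
        uniform mat4  mvMatrix;         // modelview
        uniform float viewportHeight;   // assumed: framebuffer height in pixels
        attribute vec3 vertex;
        attribute float size;           // sprite diameter in model units

        void main()
        {
            gl_Position = pMatrix * mvMatrix * vec4(vertex, 1.0);
            // projected height in pixels = viewportHeight * P[1][1] * size / (2 * w)
            gl_PointSize = viewportHeight * pMatrix[1][1] * size / (2.0 * gl_Position.w);
        }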

    Read the article

  • How should I interpret these DirectX Caps Viewer values?

    - by tobi
    Briefly asking: what do the nodes in DirectX Caps Viewer mean, and what is the difference between them?
        1. DXGI Devices
        2. Direct3D9 Devices
        3. DirectDraw Devices
    The most interesting for me is 1 vs 2. In the Direct3D9 Devices, under the HAL node, I can see that my GeForce 8800GT supports PixelShaderVersion 3.0. However, under DXGI Devices I have DX 10, DX 10.1 and DX 11 listed with Shader Model 4.0 (actually, why DX 11? My card is not compatible with DX 11). I am implementing a DX 11 application (including d3d11.h) with shaders compiled for version 4.0, so I can clearly see that 4.0 is supported. What is the difference between 1 and 2? Could you give me some theory behind the nodes?

    Read the article

  • glsl demo suggestions?

    - by brainydexter
    In a lot of places I interviewed recently, I have been asked many times whether I have worked with shaders. Even though I have read about and understand the pipeline, the answer to that question has been no. Recently, one of the places asked me if I could send them a sample of 'something' that is "visually polished". So, I decided to take the plunge and wrote some simple shaders in GLSL (with OpenGL). I now have a basic setup where I can use VBOs with GLSL shaders. I have a very short window left to send something to them, and I was wondering if someone with experience could suggest an idea that is interesting enough to grab someone's attention. Thanks

    Read the article

  • What functionality should I use in OpenGL 2.0?

    - by Jeffrey
    Considering OpenGL 2.1, we all know that glBegin and glEnd are the devil. Should I use only VBOs to render 3D primitives (I can't find VAOs in that version; weren't they there already?)? Should I still use the matrix stack (why not?)? Should I still use glFrustum? Can I take advantage of shaders in GLSL 1.20? Where can I find a tutorial for VBOs in OpenGL 2.1 and the "correct" way of programming in it? Also, how am I supposed to animate something, like a cube moving around an object or a player moving in the scene (static VBO data + shader?)? Note: take your time to answer this question, I'll accept an answer tomorrow.
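
    For reference, GLSL 1.20 is the version OpenGL 2.1 guarantees; a minimal vertex/fragment pair that skips the fixed-function matrix stack might look roughly like this (the u_modelViewProjection uniform is an assumption, built on the CPU per object):

        // Vertex shader (GLSL 1.20): uses an application-supplied matrix uniform
        // instead of the fixed-function matrix stack; positions/colours come from a VBO.
        #version 120
        uniform mat4 u_modelViewProjection;   // assumed: computed on the CPU per object
        attribute vec3 a_position;
        attribute vec3 a_color;
        varying vec3 v_color;

        void main()
        {
            v_color = a_color;
            gl_Position = u_modelViewProjection * vec4(a_position, 1.0);
        }

        // ---- fragment shader (GLSL 1.20) ----
        #version 120
        varying vec3 v_color;

        void main()
        {
            gl_FragColor = vec4(v_color, 1.0);
        }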

    Read the article

  • What is the difference between Constant Vertex Attributes and Uniforms?

    - by Samaursa
    According to the OpenGL ES 2.0 Programming Guide: "A constant vertex attribute is the same for all vertices of a primitive, and therefore only one value needs to be specified for all the vertices of a primitive." For uniforms the book states: "...any parameter to a shader that is constant across either all vertices or fragments (but that is not known at compile time) should be passed in as a uniform." I've always used uniforms for data that is constant for a primitive, but now it appears that attributes can also be used in the same way. Is there more to constant vertex attributes than simply "they are the same as uniforms"?

    Read the article

  • Calculating vertex normals on the GPU

    - by Etan
    I have a height map sampled on a regular grid, stored in an array. Now I want to use the normals at the sampled vertices for a smoothing algorithm. The way I'm currently doing it is as follows. For each vertex, generate triangles to all its neighbours; this results in eight neighbours when using the 1-neighbourhood, for all vertices except those at the borders:
        +---+---+
        ¦ \ ¦ / ¦
        +---o---+
        ¦ / ¦ \ ¦
        +---+---+
    For each adjacent triangle, calculate its normal by taking the cross product of the two edge vectors. As the triangles all have the same size when projected onto the xy-plane, I simply average over all eight normals and store the result for the vertex. However, as my data grows larger, this approach takes too much time, and I would prefer doing it on the GPU in shader code. Is there an easy method, maybe if I store my height map as a texture?
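
    If the height map is uploaded as a texture, one possible GPU-side approach is to take the normal from central differences of neighbouring texels instead of averaging eight triangle normals; a sketch under that assumption (uniform names invented, usable per vertex or per fragment):

        // Sketch: normal from a height-map texture via central differences.
        // u_texelSize = 1.0 / texture resolution, u_cellSize = grid spacing in world units.
        uniform sampler2D u_heightMap;   // assumed: one height value per texel
        uniform vec2  u_texelSize;
        uniform float u_cellSize;

        vec3 normalAt(vec2 uv)
        {
            float hL = texture2D(u_heightMap, uv - vec2(u_texelSize.x, 0.0)).r;
            float hR = texture2D(u_heightMap, uv + vec2(u_texelSize.x, 0.0)).r;
            float hD = texture2D(u_heightMap, uv - vec2(0.0, u_texelSize.y)).r;
            float hU = texture2D(u_heightMap, uv + vec2(0.0, u_texelSize.y)).r;
            // Gradient of the height field; 2*cellSize is the distance between samples.
            return normalize(vec3(hL - hR, 2.0 * u_cellSize, hD - hU));
        }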

    Read the article

  • How to achieve a smooth 2D lighting effect?

    - by Cyral
    I'm making a tile-based game in XNA. Currently my lighting looks like this: How can I get it to look like this instead? Rather than each block having its own tint, it would have a smooth overlay. I'm assuming some sort of shader, telling it the lighting and blurring it somehow, but I'm not an expert with shaders. My current lighting calculates the light and then passes it to a SpriteBatch, which draws with a color parameter. EDIT: It no longer uses the SpriteBatch tint; I was testing, and I now pass parameters to set the light values. But I'm still looking for a way to smooth it.
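
    One approach that could fit here is to write the per-tile light values into a small texture (one texel per tile) and let bilinear filtering smooth them in a final pass that multiplies the scene by that light map. The sketch below is GLSL for brevity; the same idea maps to an HLSL pixel shader in XNA (all names are invented):

        // Full-screen pass sketch: multiply the scene by a low-resolution light
        // map; GL_LINEAR filtering on the light map gives the smooth gradient.
        uniform sampler2D u_scene;      // assumed: the rendered tile map
        uniform sampler2D u_lightMap;   // assumed: one texel per tile, linear-filtered
        varying vec2 v_texCoord;

        void main()
        {
            vec3 scene = texture2D(u_scene, v_texCoord).rgb;
            vec3 light = texture2D(u_lightMap, v_texCoord).rgb;
            gl_FragColor = vec4(scene * light, 1.0);
        }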

    Read the article

  • Confusion on HLSL Samplers. Can I Set Samplers Inside Functions?

    - by Kyle Connors
    I'm trying to create a system where I can instance a quad to the screen, but I've run into a problem. Like I said, I'm trying to instance the quad, so I'm using the same geometry several times and trying to do it in one draw call. The issue is that I want some quads to use different textures, but I can't figure out how to get the data into a sampler so I can use it in the pixel shader. I figured that, since we can simply pass in the 4 bytes of our IDirect3DTexture9* to set the global texture, I could do the same when passing in my dynamic buffer (which also stores each object's world matrix and UV data). Now that I'm sending the data, I can't figure out how to get it into the sampler, and I'm starting to assume it's simply not possible. Is there any way I could achieve this?

    Read the article

  • Google Earth: Unsupported graphics card

    - by VIPaul
    I've just installed Google Earth on my PC, which runs Ubuntu 12.04 LTS. When I open Google Earth, a window pops up and says: "Unsupported Graphics Card. Your graphics card does not meet the minimum spec required to run Google Earth, which is a 3D accelerated card with shader support. It is strongly recommended that you try running Google Earth on a different machine or in a different rendering mode, or upgrade to a newer graphics card. You may continue, but the application is unlikely to work." Maybe you'll say "Buy a better graphics card!", but I used Google Earth on this machine a year ago, when I had Windows 7, and everything worked well, so my graphics card is good enough. Does the Linux version have higher requirements than the Windows one, or what?

    Read the article

  • How to achieve a smoother lighting effect

    - by Cyral
    I'm making a tile-based game in XNA. Currently my lighting looks like this: How can I get it to look like this instead? Rather than each block having its own tint, it would have a smooth overlay. I'm assuming some sort of shader, telling it the lighting and blurring it somehow, but I'm not an expert with shaders. My current lighting calculates the light and then passes it to a SpriteBatch, which draws with a color parameter. EDIT: It no longer uses the SpriteBatch tint; I was testing, and I now pass parameters to set the light values. But I'm still looking for a way to smooth it.

    Read the article

  • Artifacts when drawing particles with some alpha

    - by piotrek
    I want to draw some particles in my game, but when I draw one particle on top of another, the top one's alpha channel "clears" the previously drawn particle. I set up blending in OpenGL this way:
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    My fragment shader for particles is very simple:
        precision highp float;
        precision highp int;
        uniform sampler2D u_maps[2];
        varying vec2 v_texture;
        uniform float opaque;
        uniform vec3 colorize;
        void main() {
            vec4 texColor = texture2D(u_maps[0], v_texture);
            gl_FragColor.rgb = texColor.rgb * colorize.rgb;
            gl_FragColor.a = texColor.a * opaque;
        }
    I attach a screenshot of this. Do you know what I did wrong? I use OpenGL ES 2.0.
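
    For what it's worth, this kind of "clearing" artifact often comes from depth writes being left on while the particles are drawn, so the first sprite's transparent texels occlude later ones; disabling depth writes for the particle pass (glDepthMask(GL_FALSE)) is the usual remedy, optionally combined with discarding nearly invisible fragments, as in this hedged variant (uniform names simplified from the original):

        // Variant of the particle fragment shader that throws away nearly
        // invisible fragments; the main fix, though, is to draw particles with
        // glDepthMask(GL_FALSE) so transparent texels never occlude later sprites.
        precision highp float;
        uniform sampler2D u_texture;
        uniform float u_opaque;
        uniform vec3  u_colorize;
        varying vec2  v_texture;

        void main()
        {
            vec4 texColor = texture2D(u_texture, v_texture);
            float a = texColor.a * u_opaque;
            if (a < 0.01) discard;   // skip fully transparent texels
            gl_FragColor = vec4(texColor.rgb * u_colorize, a);
        }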

    Read the article

  • Good resources for learning about graphics hardware

    - by Ken
    I'm looking for some good learning resources on graphics hardware (and the associated low-level software). Basically, I want to learn more about what goes on underneath the OpenGL/DirectX API layers in terms of how things are implemented. I'm familiar with what happens in principle during the various stages of the rendering pipeline (viewing, projection, clipping, rasterization, etc.). My goal is to be able to make better and more informed decisions about trade-offs and potential optimisations when doing graphics/shader programming, with respect to issues such as: batching, view culling, occlusion, draw order, avoiding state changes, triangles vs point sprites, texture sampling, etc. Basically, whatever a graphics programmer needs to know about modern graphics hardware in order to become more effective. I'm not really looking for specific optimisation techniques; rather, I need more general knowledge so that I will naturally write more efficient code.

    Read the article

  • How do I use unpackHalf2x16?

    - by user1032861
    I'm trying to use (un)packHalf2x16, without success so far. I'm drawing with:
        glVertexAttribIPointer(0, 2, GL_UNSIGNED_INT, 0, 0);
        glEnableVertexAttribArray(0);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glDrawArrays(GL_POINTS, 0, n_points);
        glDisableVertexAttribArray(0);
    and in the shader:
        #version 330 core
        #extension GL_ARB_shading_language_packing : require
        in uvec2 A0;
        // (...)
        vec4 t = vec4(unpackHalf2x16(A0.x), unpackHalf2x16(A0.y));
    But nothing gets drawn. I'm pretty sure the buffer's contents are right, and if I use vec4 t = vec4(0); I can see it's working properly. How is this packing/unpacking supposed to work? I can't find any example.

    Read the article

  • Phone complains that identical GLSL struct definition differs in vert/frag programs

    - by stephelton
    When I provide the following struct definition in linked fragment and vertex shaders, my phone (Samsung Vibrant / Android 2.2) complains that the definition differs:
        struct Light {
            mediump vec3 _position;
            lowp vec4 _ambient;
            lowp vec4 _diffuse;
            lowp vec4 _specular;
            bool _isDirectional;
            mediump vec3 _attenuation; // constant, linear, and quadratic components
        };
        uniform Light u_light;
    I know the struct is identical because it's included from another file. These shaders work on a Linux implementation and on my Android 3.0 tablet. Both shaders declare "precision mediump float;". The exact error is:
        Uniform variable u_light type/precision does not match in vertex and fragment shader
    Am I doing anything wrong here, or is my phone's implementation broken? Any advice (other than "file a bug report")?
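
    If the driver keeps rejecting the shared struct, one workaround to try is flattening it into plain uniforms with explicit precision, declared identically in both shaders; a sketch (names invented):

        // Flattened replacement for the Light struct, declared identically
        // (including precision qualifiers) in the vertex and fragment shaders.
        uniform mediump vec3 u_lightPosition;
        uniform lowp    vec4 u_lightAmbient;
        uniform lowp    vec4 u_lightDiffuse;
        uniform lowp    vec4 u_lightSpecular;
        uniform bool         u_lightIsDirectional;
        uniform mediump vec3 u_lightAttenuation;   // constant, linear, quadratic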

    Read the article

  • Transparent parts of texture are opaque black instead

    - by Aaron
    I render a sprite twice, one on top of the other. The sprites have transparent parts, so I should be able to see the bottom sprite under the top one. Instead, the transparent parts are opaque black (the clear colour) and the topmost sprite blocks the bottom sprite. My fragment shader is trivial:
        uniform sampler2D texture;
        varying vec2 f_texcoord;
        void main() {
            gl_FragColor = texture2D(texture, f_texcoord);
        }
    I have glEnable(GL_BLEND) and glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) in my initialization code. My texture comes from a PNG file that I load with libpng. I make sure to use GL_RGBA when initializing the texture with glTexImage2D (otherwise the sprites look like noise).

    Read the article

  • Is it only possible to display 64k vertices on the monitor with 16-bit indices?

    - by Aufziehvogel
    I did the first 3D tutorial over at riemers.net and stumbled upon the fact that my graphics card only supports Shader Model 2.0 (the Reach profile in XNA), which means I can only use Int16 to store indices (triangle to vertex). This means that I can only address 2^16 = 65536 vertices. I also read on the internet that you should prefer 16-bit over 32-bit indices because not all hardware (like mine) supports 32-bit. Yet I am wondering: do all game scenes really get along with so few vertices? I thought faces of people alone used a lot of polygons (which are made up of vertices?). It's not relevant for me yet, but I am interested:
        • Do game scenes use only 65536 vertices?
        • Do you use some trade-off to display more (e.g. 64k in the GPU buffer, the rest in RAM)?
        • Is there some method to get more into the GPU buffer?
    I already read in some other posts that there seems to be a limit of 64k per mesh too, so maybe you can compact stuff into meshes?

    Read the article

  • Per-fragment lighting with an OpenGL 4.x tessellated model

    - by Finlaybob
    I'm experienced with OpenGL 3+. I'm dabbling with tessellation shaders and have now got to the point where I have a nicely tessellated teapot/plane demo (quick look here). As can be seen from the screenshots, the lighting is broken (though admittedly it doesn't look too bad in the image). I've tried to add a normal map to the equation, but it still doesn't come out right; I can calculate the normals, tangents and binormals per triangle in the geometry shader, but it still looks wrong. I think the question is: how do I add per-fragment lighting to a tessellated model? The teapot is 32 sixteen-point patches; the plane is one single 16-point patch. The shaders are here, but they are a complete mess, so I don't blame anyone who can't make sense of them. But peruse them at your leisure if you like. Also, if this question is better suited to somewhere else, i.e. Stack Overflow or the Programming stack, please let me know.
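
    Whatever the tessellation setup, the fragment-side half of per-fragment lighting stays conventional: the tessellation evaluation shader is assumed to output a position and normal for every vertex it generates, and the fragment shader does the shading. A minimal Blinn-Phong sketch under that assumption (all names invented):

        #version 400 core
        // Per-fragment Blinn-Phong sketch: v_worldPos and v_worldNormal are assumed
        // to be written per generated vertex by the tessellation evaluation shader.
        in vec3 v_worldPos;
        in vec3 v_worldNormal;
        uniform vec3 u_lightPos;    // assumed point-light position
        uniform vec3 u_viewPos;     // assumed camera position
        uniform vec3 u_albedo;
        out vec4 fragColor;

        void main()
        {
            vec3 n = normalize(v_worldNormal);          // re-normalize after interpolation
            vec3 l = normalize(u_lightPos - v_worldPos);
            vec3 v = normalize(u_viewPos  - v_worldPos);
            vec3 h = normalize(l + v);
            float diff = max(dot(n, l), 0.0);
            float spec = pow(max(dot(n, h), 0.0), 32.0);
            fragColor = vec4(u_albedo * diff + vec3(spec), 1.0);
        }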

    Read the article

  • How many textures can I usually bind at once?

    - by Avi
    I'm developing a game engine, and it's only going to work on modern (Shader model 4+) hardware. I figure that, by the time I'm done with it, that won't be such an unreasonable requirement. My question is: how many textures can I bind at once on a modern graphics card? 16 would be sufficient. Can I expect most modern graphics cards to support that amount? My GTX 460 appears to support 32, but I have no idea if that's representative of most modern video cards.

    Read the article

  • rotate opengl mesh relative to camera

    - by shuall
    I have a cube in OpenGL. Its position is determined by multiplying its model matrix, the view matrix, and the projection matrix, and then passing the result to the shader, as per this tutorial (http://www.opengl-tutorial.org/beginners-tutorials/tutorial-3-matrices/). I want to rotate it relative to the camera. The only way I can think of to get the correct axis is to multiply the inverse of the model matrix (because that's where all the previous rotations and transforms are stored) by the view matrix and by the axis of rotation (x or y). I feel like there has to be a better way to do this, like using something other than the model, view and projection matrices, or maybe I'm doing something wrong; that's what all the tutorials I've seen use. PS: I'm also trying to stick with OpenGL 4 core stuff. Edit: if quaternions would fix my problems, could someone point me to a good tutorial/example for switching from 4x4 matrices to quaternions? I'm a little daunted by the task.

    Read the article

  • Microsoft XNA code sample won't work with Blender model

    - by FreakinaBox
    I downloaded this code sample and integrated it into my game: http://xbox.create.msdn.com/en-US/education/catalog/sample/mesh_instancing It works with the model that they supplied, but throws an exception whenever I use one of my own models: "The current vertex declaration does not include all the elements required by the current vertex shader. TextureCoordinate0 is missing." I tried plugging my model into their original source code and got the same thing. My model is an FBX from Blender and has a texture. This is the function that throws the error:
        GraphicsDevice.DrawInstancedPrimitives(
            PrimitiveType.TriangleList, 0, 0,
            meshPart.NumVertices, meshPart.StartIndex,
            meshPart.PrimitiveCount, instances.Length);

    Read the article
