Search Results

Search found 2086 results on 84 pages for 'pixel shader'.


  • What is the correct way to reset and load new data into GL_ARRAY_BUFFER?

    - by Geto
    I am using an array buffer for color data. If I want to load different colors for the current mesh in real time, what is the correct way to do it? At the moment I am doing: glBindVertexArray(vao); glBindBuffer(GL_ARRAY_BUFFER, colorBuffer); glBufferData(GL_ARRAY_BUFFER, SIZE, colorsData, GL_STATIC_DRAW); glEnableVertexAttribArray(shader->attrib("color")); glVertexAttribPointer(shader->attrib("color"), 3, GL_FLOAT, GL_TRUE, 0, NULL); glBindBuffer(GL_ARRAY_BUFFER, 0); It works, but I am not sure whether this is a good and efficient way to do it. What happens to the previous data? Does it get overwritten? Do I need to call glDeleteBuffers(1, &colorBuffer); glGenBuffers(1, &colorBuffer); before transferring the new data into the buffer?
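
    For data that changes every frame, the usual pattern is simply to call glBufferData again on the same buffer object (the driver detaches and eventually frees the old storage, so no glDeleteBuffers/glGenBuffers pair is needed), or glBufferSubData when the size stays the same. A minimal sketch, assuming colorBuffer is a GLuint created once with glGenBuffers and that a loader such as GLEW provides the prototypes:

      // Re-upload new colors into an existing VBO each frame.
      void updateColors(GLuint colorBuffer, const GLfloat* colorsData, GLsizeiptr sizeBytes)
      {
          glBindBuffer(GL_ARRAY_BUFFER, colorBuffer);
          // Option 1: re-specify the data store. The driver "orphans" the previous storage and
          // releases it once the GPU is finished with it; nothing is written on top of data
          // still in use. GL_DYNAMIC_DRAW hints that the contents change frequently.
          glBufferData(GL_ARRAY_BUFFER, sizeBytes, colorsData, GL_DYNAMIC_DRAW);
          // Option 2 (only if the size is unchanged): overwrite the existing store in place.
          // glBufferSubData(GL_ARRAY_BUFFER, 0, sizeBytes, colorsData);
          glBindBuffer(GL_ARRAY_BUFFER, 0);
      }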

    Read the article

  • Creating a WARP device in managed DirectX

    - by arex
    I have a very old graphics card that only supports shader model 2, but I need shader model 3 or higher for the app I am developing. I tried to use a reference device, but it runs very slowly. Then I found some C++ samples that switch to a WARP device, and the performance is good. I am using C# and I don't know how to create that type of device. So the question is: how do I create a WARP device in C#? Thanks in advance.
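
    For reference, WARP is selected at device-creation time in native Direct3D 10.1/11 by passing the WARP driver type; a hedged C++ sketch of the call the samples use is below. In C#, managed wrappers such as SlimDX or SharpDX expose an equivalent DriverType.Warp on their Direct3D 11 device constructors (worth verifying against the wrapper version in use, since the legacy Managed DirectX library predates WARP).

      #include <d3d11.h>

      ID3D11Device*        device  = nullptr;
      ID3D11DeviceContext* context = nullptr;
      D3D_FEATURE_LEVEL    level;

      // Ask for the WARP software rasterizer instead of the hardware adapter.
      HRESULT hr = D3D11CreateDevice(
          nullptr,                 // default adapter
          D3D_DRIVER_TYPE_WARP,    // fully featured software device, much faster than REF
          nullptr, 0,              // no software module, no creation flags
          nullptr, 0,              // default feature levels
          D3D11_SDK_VERSION,
          &device, &level, &context);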

    Read the article

  • Problem when trying to use simple Shaders + VBOs

    - by Mr.Gando
    Hello I'm trying to convert the following functions to a VBO based function for learning purposes, it displays a static texture on screen. I'm using OpenGL ES 2.0 with shaders on the iPhone (should be almost the same than regular OpenGL in this case), this is what I got working: //Works! - (void) drawAtPoint:(CGPoint)point depth:(CGFloat)depth { GLfloat coordinates[] = { 0, 1, 1, 1, 0, 0, 1, 0 }; GLfloat width = (GLfloat)_width * _maxS, height = (GLfloat)_height * _maxT; GLfloat vertices[] = { -width / 2 + point.x, -height / 2 + point.y, width / 2 + point.x, -height / 2 + point.y, -width / 2 + point.x, height / 2 + point.y, width / 2 + point.x, height / 2 + point.y, }; glBindTexture(GL_TEXTURE_2D, _name); //Attrib position and attrib_tex coord are handles for the shader attributes glVertexAttribPointer(ATTRIB_POSITION, 2, GL_FLOAT, GL_FALSE, 0, vertices); glEnableVertexAttribArray(ATTRIB_POSITION); glVertexAttribPointer(ATTRIB_TEXCOORD, 2, GL_FLOAT, GL_FALSE, 0, coordinates); glEnableVertexAttribArray(ATTRIB_TEXCOORD); glDrawArrays(GL_TRIANGLE_STRIP, 0, 4); } I tried to do this to convert to a VBO however I don't see anything displaying on-screen with this version: //Doesn't display anything - (void) drawAtPoint:(CGPoint)point depth:(CGFloat)depth { GLfloat width = (GLfloat)_width * _maxS, height = (GLfloat)_height * _maxT; GLfloat position[] = { -width / 2 + point.x, -height / 2 + point.y, width / 2 + point.x, -height / 2 + point.y, -width / 2 + point.x, height / 2 + point.y, width / 2 + point.x, height / 2 + point.y, }; //Texture on-screen position ( each vertex is x,y in on-screen coords ) GLfloat coordinates[] = { 0, 1, 1, 1, 0, 0, 1, 0 }; // Texture coords from 0 to 1 glBindVertexArrayOES(vao); glGenVertexArraysOES(1, &vao); glGenBuffers(2, vbo); //Buffer 1 glBindBuffer(GL_ARRAY_BUFFER, vbo[0]); glBufferData(GL_ARRAY_BUFFER, 8 * sizeof(GLfloat), position, GL_STATIC_DRAW); glEnableVertexAttribArray(ATTRIB_POSITION); glVertexAttribPointer(ATTRIB_POSITION, 2, GL_FLOAT, GL_FALSE, 0, position); //Buffer 2 glBindBuffer(GL_ARRAY_BUFFER, vbo[1]); glBufferData(GL_ARRAY_BUFFER, 8 * sizeof(GLfloat), coordinates, GL_DYNAMIC_DRAW); glEnableVertexAttribArray(ATTRIB_TEXCOORD); glVertexAttribPointer(ATTRIB_TEXCOORD, 2, GL_FLOAT, GL_FALSE, 0, coordinates); //Draw glBindVertexArrayOES(vao); glBindTexture(GL_TEXTURE_2D, _name); glDrawArrays(GL_TRIANGLE_STRIP, 0, 4); } In both cases I'm using this simple Vertex Shader //Vertex Shader attribute vec2 position;//Bound to ATTRIB_POSITION attribute vec4 color; attribute vec2 texcoord;//Bound to ATTRIB_TEXCOORD varying vec2 texcoordVarying; uniform mat4 mvp; void main() { //You CAN'T use transpose before in glUniformMatrix4fv so... here it goes. gl_Position = mvp * vec4(position.x, position.y, 0.0, 1.0); texcoordVarying = texcoord; } The gl_Position is equal to product of mvp * vec4 because I'm simulating glOrthof in 2D with that mvp And this Fragment Shader //Fragment Shader uniform sampler2D sampler; varying mediump vec2 texcoordVarying; void main() { gl_FragColor = texture2D(sampler, texcoordVarying); } I really need help with this, maybe my shaders are wrong for the second case ? thanks in advance.
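
    Two details in the VBO version look suspect and would explain a blank screen: the VAO is bound before it is generated, and once a VBO is bound the last argument of glVertexAttribPointer must be a byte offset into that buffer, not the CPU array again. A hedged sketch reusing the names from the question:

      // One-time setup: generate the VAO first, then record the buffer bindings into it.
      glGenVertexArraysOES(1, &vao);
      glBindVertexArrayOES(vao);
      glGenBuffers(2, vbo);

      glBindBuffer(GL_ARRAY_BUFFER, vbo[0]);
      glBufferData(GL_ARRAY_BUFFER, 8 * sizeof(GLfloat), position, GL_STATIC_DRAW);
      glEnableVertexAttribArray(ATTRIB_POSITION);
      glVertexAttribPointer(ATTRIB_POSITION, 2, GL_FLOAT, GL_FALSE, 0, (const GLvoid*)0);  // offset, not pointer

      glBindBuffer(GL_ARRAY_BUFFER, vbo[1]);
      glBufferData(GL_ARRAY_BUFFER, 8 * sizeof(GLfloat), coordinates, GL_STATIC_DRAW);
      glEnableVertexAttribArray(ATTRIB_TEXCOORD);
      glVertexAttribPointer(ATTRIB_TEXCOORD, 2, GL_FLOAT, GL_FALSE, 0, (const GLvoid*)0);

      glBindVertexArrayOES(0);

      // Per frame: everything needed to draw is captured in the VAO.
      glBindVertexArrayOES(vao);
      glBindTexture(GL_TEXTURE_2D, _name);
      glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

    If the quad position changes on every call, as in drawAtPoint:depth:, re-upload the position buffer with glBufferSubData before drawing instead of treating it as static.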

    Read the article

  • OpenGL 3D texture issue

    - by user1478217
    Hi i'm trying to use a 3d texture in opengl to implement volume rendering. Each voxel has an rgba colour value and is currently rendered as a screen facing quad.(for testing purposes). I just can't seem to get the sampler to give me a colour value in the shader. The quads always end up black. When I change the shader to generate a colour (based on xyz coords) then it works fine. I'm loading the texture with the following code: glGenTextures(1, &tex3D); glBindTexture(GL_TEXTURE_3D, tex3D); unsigned int colours[8]; colours[0] = Colour::AsBytes<unsigned int>(Colour::Blue); colours[1] = Colour::AsBytes<unsigned int>(Colour::Red); colours[2] = Colour::AsBytes<unsigned int>(Colour::Green); colours[3] = Colour::AsBytes<unsigned int>(Colour::Magenta); colours[4] = Colour::AsBytes<unsigned int>(Colour::Cyan); colours[5] = Colour::AsBytes<unsigned int>(Colour::Yellow); colours[6] = Colour::AsBytes<unsigned int>(Colour::White); colours[7] = Colour::AsBytes<unsigned int>(Colour::Black); glTexImage3D(GL_TEXTURE_3D, 0, GL_RGBA, 2, 2, 2, 0, GL_RGBA, GL_UNSIGNED_BYTE, colours); The colours array contains the correct data, i.e. the first four bytes have values 0, 0, 255, 255 for blue. Before rendering I bind the texture to the 2nd texture unit like so: glActiveTexture(GL_TEXTURE1); glBindTexture(GL_TEXTURE_3D, tex3D); And render with the following code: shaders["DVR"]->Use(); shaders["DVR"]->Uniforms["volTex"].SetValue(1); shaders["DVR"]->Uniforms["World"].SetValue(Mat4(vl_one)); shaders["DVR"]->Uniforms["viewProj"].SetValue(cam->GetViewTransform() * cam->GetProjectionMatrix()); QuadDrawer::DrawQuads(8); I have used these classes for setting shader params before and they work fine. The quaddrawer draws eight instanced quads. The vertex shader code looks like this: #version 330 layout(location = 0) in vec2 position; layout(location = 1) in vec2 texCoord; uniform sampler3D volTex; ivec3 size = ivec3(2, 2, 2); uniform mat4 World; uniform mat4 viewProj; smooth out vec4 colour; void main() { vec3 texCoord3D; int num = gl_InstanceID; texCoord3D.x = num % size.x; texCoord3D.y = (num / size.x) % size.y; texCoord3D.z = (num / (size.x * size.y)); texCoord3D /= size; texCoord3D *= 2.0; texCoord3D -= 1.0; colour = texture(volTex, texCoord3D); //colour = vec4(texCoord3D, 1.0); gl_Position = viewProj * World * vec4(texCoord3D, 1.0) + (vec4(position.x, position.y, 0.0, 0.0) * 0.05); } uncommenting the line where I set the colour value equal to the texcoord works fine, and makes the quads coloured. The fragment shader is simply: #version 330 smooth in vec4 colour; out vec4 outColour; void main() { outColour = colour; } So my question is, what am I doing wrong, why is the sampler not getting any colour values from the 3d texture? [EDIT] Figured it out but can't self answer (new user): As soon as I posted this I figured it out, I'll put the answer up to help anyone else (it's not specifically a 3d texture issue, and i've also fallen afoul of it before, D'oh!). I didn't generate mipmaps for the texture, and the default magnification/minification filters weren't set to either GL_LINEAR, or GL_NEAREST. Boom! no textures. Same thing happens with 2d textures.
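
    The self-answer boils down to texture completeness: the default minifying filter is GL_NEAREST_MIPMAP_LINEAR, so without mipmaps the texture is incomplete and sampling returns black. A minimal sketch of the missing state, set while the 3D texture is bound:

      glBindTexture(GL_TEXTURE_3D, tex3D);
      // No mipmaps are uploaded, so the filters must not reference mipmap levels.
      glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
      glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
      // Optional, but sensible for a 2x2x2 volume: clamp instead of wrapping at the edges.
      glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
      glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
      glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);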

    Read the article

  • ffmpeg: Could not find codec parameters for stream 0 (Video: h264) unspecified size

    - by dempap
    I am trying to convert a video from .raw to .mp4. For this purpose I downloaded, built and installed both x264 and ffmpeg. However, the command ffmpeg -f h264 -i output.raw -vcodec copy output.mp4 fails with the error shown in the picture below. Is there any way to fix this? Other commands I ran: 1) root@beagleboard:/# v4l2-ctl --list-formats ioctl: VIDIOC_ENUM_FMT Index : 0 Type : Video Capture Pixel Format: 'YUYV' Name : YUV 4:2:2 (YUYV) Index : 1 Type : Video Capture Pixel Format: 'MJPG' (compressed) Name : MJPEG 2) root@beagleboard:/dev# v4l2-ctl --set-fmt-video=pixelformat=0

    Read the article

  • How to change the X-Windows default border width for all window frames in Ubuntu using Gnome 2.28

    - by Heston T. Holtmann
    From the Windows 3.x days up to the latest 64-bit Windows 7 (classic/standard theme), there has been a way to make the window edge border wider than 1 pixel. I often use 3 to 5 pixels to make windows easy to grab on high-resolution displays and high-DPI monitors. There doesn't seem to be an easy or obvious way to do this with the Gnome windowing system. Does anyone know how?

    Read the article

  • How to crop a UIImage?

    - by Rajendra Bhole
    Hi, I am developing an application in which I process an image using its pixels, but that image processing takes a lot of time. Therefore I want to crop the UIImage (keep only the middle part of the image, i.e. remove/crop the border of the image). The code I have developed so far is: - (NSInteger) processImage1: (UIImage*) image { CGFloat width = image.size.width; CGFloat height = image.size.height; struct pixel* pixels = (struct pixel*) calloc(1, image.size.width * image.size.height * sizeof(struct pixel)); if (pixels != nil) { // Create a new bitmap CGContextRef context = CGBitmapContextCreate( (void*) pixels, image.size.width, image.size.height, 8, image.size.width * 4, CGImageGetColorSpace(image.CGImage), kCGImageAlphaPremultipliedLast ); if (context != NULL) { // Draw the image in the bitmap CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, image.size.width, image.size.height), image.CGImage); NSUInteger numberOfPixels = image.size.width * image.size.height; NSMutableArray *numberOfPixelsArray = [[[NSMutableArray alloc] initWithCapacity:numberOfPixels] autorelease]; } How do I take (crop away the border and keep) the middle part of the UIImage?
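
    The usual route is to avoid per-pixel work entirely and crop with CGImageCreateWithImageInRect on image.CGImage, wrapping the result back into a UIImage with [UIImage imageWithCGImage:]. A hedged sketch at the Core Graphics (C) level, assuming "middle part" means the central half in each dimension:

      // Crop a CGImage to its central region; the caller releases the result with CGImageRelease.
      CGImageRef CropToMiddle(CGImageRef source)
      {
          size_t w = CGImageGetWidth(source);
          size_t h = CGImageGetHeight(source);
          CGRect middle = CGRectMake(w / 4.0, h / 4.0, w / 2.0, h / 2.0);  // keep the central 50%
          return CGImageCreateWithImageInRect(source, middle);
      }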

    Read the article

  • iPhone: Changing CGImageAlphaInfo of CGImage

    - by TechZen
    I have a PNG image that has an unsupported bitmap graphics context pixel format. Whenever I attempt to resize the image, CGBitmapContextCreate() chokes on the unsupported format (error formatted for easy reading): CGBitmapContextCreate: unsupported parameter combination: 8 integer bits/component; 32 bits/pixel; 3-component colorspace; kCGImageAlphaLast; 1344 bytes/row. The list of supported pixel formats definitely does not include this combination. It appears I need to redraw the image and move the alpha channel information to kCGImageAlphaPremultipliedFirst or kCGImageAlphaPremultipliedLast, but I have no idea how to go about doing this. There is nothing unusual about the PNG file and it isn't corrupted; it works just fine in every other context. I encountered this error just by chance, but obviously my users might have similarly formatted files, so I will have to check my app's imported images and correct for this problem.
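
    The standard fix is exactly the redraw described: create a bitmap context in one of the supported 32-bit premultiplied formats, draw the original CGImage into it, and take a new image from the context. A hedged sketch using the Core Graphics C API:

      // Redraw an image into an 8-bit/component, premultiplied-alpha-last RGB context.
      CGImageRef CopyWithPremultipliedAlpha(CGImageRef source)
      {
          size_t width  = CGImageGetWidth(source);
          size_t height = CGImageGetHeight(source);
          CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();
          CGContextRef ctx = CGBitmapContextCreate(NULL, width, height, 8, width * 4,
                                                   rgb, kCGImageAlphaPremultipliedLast);
          CGColorSpaceRelease(rgb);
          if (ctx == NULL) return NULL;
          CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), source);
          CGImageRef converted = CGBitmapContextCreateImage(ctx);  // caller releases
          CGContextRelease(ctx);
          return converted;
      }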

    Read the article

  • How to access pixels of an NSBitmapImageRep?

    - by Paperflyer
    I have an NSBitmapImageRep that is created like this: NSBitmapImageRep *imageRep = [[NSBitmapImageRep alloc] initWithBitmapDataPlanes:NULL pixelsWide:waveformSize.width pixelsHigh:waveformSize.height bitsPerSample:8 samplesPerPixel:4 hasAlpha:YES isPlanar:YES colorSpaceName:NSCalibratedRGBColorSpace bytesPerRow:0 bitsPerPixel:0]; Now I want to access the pixel data, so I get a pointer to the pixel planes using unsigned char *bitmapData; [imageRep getBitmapDataPlanes:&bitmapData]; According to the documentation this returns a C array of five character pointers. But how can it do that, given that the type of the argument is unsigned char **? It can only return an array of chars, not an array of char pointers. So this leaves me wondering how to access the individual pixels. Do you have an idea how to do that? (I know there is the method -setColor:atX:y:, but it seems to be pretty slow if invoked for every single pixel of a big bitmap.)

    Read the article

  • Unreachable code detected when using const variables

    - by Anton Roth
    I have following code: private const FlyCapture2Managed.PixelFormat f7PF = FlyCapture2Managed.PixelFormat.PixelFormatMono16; public PGRCamera(ExamForm input, bool red, int flags, int drawWidth, int drawHeight) { if (f7PF == FlyCapture2Managed.PixelFormat.PixelFormatMono8) { bpp = 8; // unreachable warning } else if (f7PF == FlyCapture2Managed.PixelFormat.PixelFormatMono16){ bpp = 16; } else { MessageBox.Show("Camera misconfigured"); // unreachable warning } } I understand that this code is unreachable, but I don't want that message to appear, since it's a configuration on compilation which just needs a change in the constant to test different settings, and the bits per pixel (bpp) change depending on the pixel format. Is there a good way to have just one variable being constant, deriving the other from it, but not resulting in an unreachable code warning? Note that I need both values, on start of the camera it needs to be configured to the proper Pixel Format, and my image understanding code needs to know how many bits the image is in. So, is there a good workaround, or do I just live with this warning?

    Read the article

  • Resizing a binarized image in C#

    - by mouthpiec
    Hi, I have the following code to resize a binarized image (hence each pixel value is 0 [black] or 255 [white]): Bitmap ResizedCharImage = new Bitmap(newwidth, newheight); using (Graphics g = Graphics.FromImage((Image)ResizedCharImage)) { g.CompositingQuality = CompositingQuality.HighQuality; g.InterpolationMode = InterpolationMode.HighQualityBilinear; g.SmoothingMode = SmoothingMode.HighQuality; g.PixelOffsetMode = PixelOffsetMode.HighQuality; g.DrawImage(CharBitmap, new Rectangle(0, 0, newwidth, newheight), new Rectangle(0, 0, CharBitmap.Width, CharBitmap.Height), GraphicsUnit.Pixel); } The problem I am having is that when resizing (I am enlarging the image) some of the pixel values become 254, 253, 1, 2, etc. I need this not to occur. Is this possible, maybe by changing one of the Graphics properties?
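
    For a binary image the usual answer is to turn interpolation off: with GDI+ that means InterpolationMode.NearestNeighbor (and, reportedly, PixelOffsetMode.Half to avoid a half-pixel shift; both are worth verifying), because any smoothing mode will invent intermediate grey values. The underlying idea, sketched in C++ on a raw 8-bit buffer:

      #include <cstddef>
      #include <cstdint>
      #include <vector>

      // Nearest-neighbour enlargement: every output pixel is a copy of exactly one input
      // pixel, so a 0/255 image stays strictly 0/255.
      std::vector<std::uint8_t> ResizeNearest(const std::vector<std::uint8_t>& src,
                                              int srcW, int srcH, int dstW, int dstH)
      {
          std::vector<std::uint8_t> dst(static_cast<std::size_t>(dstW) * dstH);
          for (int y = 0; y < dstH; ++y) {
              const int sy = y * srcH / dstH;          // source row for this output row
              for (int x = 0; x < dstW; ++x) {
                  const int sx = x * srcW / dstW;      // source column for this output column
                  dst[static_cast<std::size_t>(y) * dstW + x] =
                      src[static_cast<std::size_t>(sy) * srcW + sx];
              }
          }
          return dst;
      }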

    Read the article

  • Find the closest vector

    - by Alexey Lebedev
    Hello! Recently I wrote the algorithm to quantize an RGB image. Every pixel is represented by an (R,G,B) vector, and quantization codebook is a couple of 3-dimensional vectors. Every pixel of the image needs to be mapped to (say, "replaced by") the codebook pixel closest in terms of euclidean distance (more exactly, squared euclidean). I did it as follows: class EuclideanMetric(DistanceMetric): def __call__(self, x, y): d = x - y return sqrt(sum(d * d, -1)) class Quantizer(object): def __init__(self, codebook, distanceMetric = EuclideanMetric()): self._codebook = codebook self._distMetric = distanceMetric def quantize(self, imageArray): quantizedRaster = zeros(imageArray.shape) X = quantizedRaster.shape[0] Y = quantizedRaster.shape[1] for i in xrange(0, X): print i for j in xrange(0, Y): dist = self._distMetric(imageArray[i,j], self._codebook) code = argmin(dist) quantizedRaster[i,j] = self._codebook[code] return quantizedRaster ...and it works awfully, almost 800 seconds on my Pentium Core Duo 2.2 GHz, 4 Gigs of memory and an image of 2600*2700 pixels:( Is there a way to somewhat optimize this? Maybe the other algorithm or some Python-specific optimizations.

    Read the article

  • How to compare memory bits in C++?

    - by Trunet
    Hi, I need help with a memory bit comparison function. I bought a LED Matrix here with 4 x HT1632C chips and I'm using it on my arduino mega2560. There're no code available for this chipset(it's not the same as HT1632) and I'm writing on my own. I have a plot function that get x,y coordinates and a color and that pixel turn on. Only this is working perfectly. But I need more performance on my display so I tried to make a shadowRam variable that is a "copy" of my device memory. Before I plot anything on display it checks on shadowRam to see if it's really necessary to change that pixel. When I enabled this(getShadowRam) on plot function my display has some, just SOME(like 3 or 4 on entire display) ghost pixels(pixels that is not supposed to be turned on). If I just comment the prev_color if's on my plot function it works perfectly. Also, I'm cleaning my shadowRam array setting all matrix to zero. variables: #define BLACK 0 #define GREEN 1 #define RED 2 #define ORANGE 3 #define CHIP_MAX 8 byte shadowRam[63][CHIP_MAX-1] = {0}; getShadowRam function: byte HT1632C::getShadowRam(byte x, byte y) { byte addr, bitval, nChip; if (x>=32) { nChip = 3 + x/16 + (y>7?2:0); } else { nChip = 1 + x/16 + (y>7?2:0); } bitval = 8>>(y&3); x = x % 16; y = y % 8; addr = (x<<1) + (y>>2); if ((shadowRam[addr][nChip-1] & bitval) && (shadowRam[addr+32][nChip-1] & bitval)) { return ORANGE; } else if (shadowRam[addr][nChip-1] & bitval) { return GREEN; } else if (shadowRam[addr+32][nChip-1] & bitval) { return RED; } else { return BLACK; } } plot function: void HT1632C::plot (int x, int y, int color) { if (x<0 || x>X_MAX || y<0 || y>Y_MAX) return; if (color != BLACK && color != GREEN && color != RED && color != ORANGE) return; char addr, bitval; byte nChip; byte prev_color = HT1632C::getShadowRam(x,y); bitval = 8>>(y&3); if (x>=32) { nChip = 3 + x/16 + (y>7?2:0); } else { nChip = 1 + x/16 + (y>7?2:0); } x = x % 16; y = y % 8; addr = (x<<1) + (y>>2); switch(color) { case BLACK: if (prev_color != BLACK) { // compare with memory to only set if pixel is other color // clear the bit in both planes; shadowRam[addr][nChip-1] &= ~bitval; HT1632C::sendData(nChip, addr, shadowRam[addr][nChip-1]); shadowRam[addr+32][nChip-1] &= ~bitval; HT1632C::sendData(nChip, addr+32, shadowRam[addr+32][nChip-1]); } break; case GREEN: if (prev_color != GREEN) { // compare with memory to only set if pixel is other color // set the bit in the green plane and clear the bit in the red plane; shadowRam[addr][nChip-1] |= bitval; HT1632C::sendData(nChip, addr, shadowRam[addr][nChip-1]); shadowRam[addr+32][nChip-1] &= ~bitval; HT1632C::sendData(nChip, addr+32, shadowRam[addr+32][nChip-1]); } break; case RED: if (prev_color != RED) { // compare with memory to only set if pixel is other color // clear the bit in green plane and set the bit in the red plane; shadowRam[addr][nChip-1] &= ~bitval; HT1632C::sendData(nChip, addr, shadowRam[addr][nChip-1]); shadowRam[addr+32][nChip-1] |= bitval; HT1632C::sendData(nChip, addr+32, shadowRam[addr+32][nChip-1]); } break; case ORANGE: if (prev_color != ORANGE) { // compare with memory to only set if pixel is other color // set the bit in both the green and red planes; shadowRam[addr][nChip-1] |= bitval; HT1632C::sendData(nChip, addr, shadowRam[addr][nChip-1]); shadowRam[addr+32][nChip-1] |= bitval; HT1632C::sendData(nChip, addr+32, shadowRam[addr+32][nChip-1]); } break; } } If helps: The datasheet of board I'm using. On page 7 has the memory mapping I'm using. Also, I have a video of display working.

    Read the article

  • UITableViewCell and strange behaviour in grouped UITableView

    - by evangelion2100
    I'm working on a grouped UITableView with 4 sections and one row per section, and I'm seeing strange behaviour with the cells. The cells are plain UITableViewCells, but their height is around 60-80 pixels. The table view renders the cells correctly with rounded corners, but when I select a cell, it appears blue and rectangular. I don't know why the cells behave like this, because I have another grouped UITableView with custom cells 88 pixels high and those cells work as they should. If I change the height to the default 44 pixels, the cells behave as they should. Does anyone know about this behaviour and what causes it? Like I mentioned, I don't do anything fancy: I use default UITableViewCells in a static, grouped UITableView with 4 sections and 1 row in each section. evangelion2100

    Read the article

  • The easiest way to draw an image?

    - by Benno
    Assume you want to read an image file in a common file format from the hard drive, change the color of one pixel, and display the resulting image on the screen, in C++. Which (open-source) libraries would you recommend to accomplish the above with the least amount of code? Alternatively, which libraries would do the above in the most elegant way possible? A bit of background: I have been reading a lot of computer graphics literature recently, and there are lots of relatively easy, pixel-based algorithms which I'd like to implement. However, while the algorithm itself is usually straightforward to implement, the amount of framework needed to manipulate an image on a per-pixel basis and display the result has stopped me from doing it.
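
    Among libraries, stb_image/stb_image_write (single headers), SDL2, SFML, or OpenCV (cv::imread / cv::imshow) keep this to a handful of lines; which is "most elegant" is largely taste. As a floor for how little framework is really required, here is a minimal sketch with no library at all, using the binary PPM format (comments in the header are not handled) and writing the result to a file instead of a window:

      #include <cstddef>
      #include <cstdint>
      #include <fstream>
      #include <string>
      #include <vector>

      int main()
      {
          std::ifstream in("input.ppm", std::ios::binary);
          std::string magic;
          int w = 0, h = 0, maxval = 0;
          in >> magic >> w >> h >> maxval;
          in.get();                                    // consume the single whitespace after the header
          std::vector<std::uint8_t> px(static_cast<std::size_t>(w) * h * 3);
          in.read(reinterpret_cast<char*>(px.data()), static_cast<std::streamsize>(px.size()));

          std::size_t i = (static_cast<std::size_t>(h / 2) * w + w / 2) * 3;  // centre pixel
          px[i] = 255; px[i + 1] = 0; px[i + 2] = 0;                          // make it red

          std::ofstream out("output.ppm", std::ios::binary);
          out << "P6\n" << w << " " << h << "\n" << maxval << "\n";
          out.write(reinterpret_cast<const char*>(px.data()), static_cast<std::streamsize>(px.size()));
      }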

    Read the article

  • How can I modify an Android bitmap in C++ (JNI/NDK) so that I can use on the Java side?

    - by HardCoder
    I call a C++ function over JNI and pass it an RGBA_8888 bitmap, lock it, change the values, unlock it, return, and then display it in Java, using this C++ code: if (AndroidBitmap_getInfo(env, map, &info) < 0) return; AndroidBitmap_lockPixels(env, map, (void**)&pixel); for(i=info.width*info.height-1;i>=0;i--) { pixel[i] = 0xf1f1f1f1; } AndroidBitmap_unlockPixels(env, map); The problem is that the bitmap does not look as I expect, and the pixel values (verified with getPixel) I read back in Java are not the same as what I set in C++. When I set the bitmap values to 0xffffffff I get the correct value in Java, but for many other values I don't: 0xf1f1f1f1, for example, turns into 0xF1FFFFFF. What do I have to do to make this work? PS: I am using Android 2.3.4
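
    Two properties of RGBA_8888 explain the mismatch, and both are worth double-checking: the bytes are stored in R,G,B,A memory order (so a 32-bit store on a little-endian ARM device is laid out as 0xAABBGGRR, not Java's ARGB), and Android expects the colour channels premultiplied by alpha; Bitmap.getPixel un-premultiplies on read, which is exactly why colour 0xF1 under alpha 0xF1 comes back as 0xFF. A hedged sketch (the JNI function name is made up; the NDK calls are the standard <android/bitmap.h> ones):

      #include <jni.h>
      #include <android/bitmap.h>
      #include <cstdint>

      // Pack an ARGB colour into the in-memory RGBA_8888 layout, premultiplying by alpha.
      static uint32_t PackPremultiplied(uint8_t a, uint8_t r, uint8_t g, uint8_t b)
      {
          const auto pm = [a](uint8_t c) { return static_cast<uint32_t>((c * a + 127) / 255); };
          return (static_cast<uint32_t>(a) << 24) | (pm(b) << 16) | (pm(g) << 8) | pm(r);
      }

      extern "C" JNIEXPORT void JNICALL
      Java_com_example_Native_fill(JNIEnv* env, jclass, jobject bitmap)  // hypothetical name
      {
          AndroidBitmapInfo info;
          void* addr = nullptr;
          if (AndroidBitmap_getInfo(env, bitmap, &info) < 0) return;
          if (AndroidBitmap_lockPixels(env, bitmap, &addr) < 0) return;

          const uint32_t value = PackPremultiplied(0xF1, 0xF1, 0xF1, 0xF1);
          auto* base = static_cast<uint8_t*>(addr);
          for (uint32_t y = 0; y < info.height; ++y) {
              auto* row = reinterpret_cast<uint32_t*>(base + y * info.stride);  // rows may be padded
              for (uint32_t x = 0; x < info.width; ++x)
                  row[x] = value;
          }
          AndroidBitmap_unlockPixels(env, bitmap);
      }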

    Read the article

  • Image processing custom filter 7 by 7

    - by ladiesMan217
    Let's say I have a 7-by-7 neighborhood around a pixel that looks like this:
    1 2 3 4 5 6 7
    8 9 10 11 12 13 14
    15 16 17 18 19 20 21
    22 23 24 25 26 27 28
    29 30 31 32 33 34 35
    36 37 38 39 40 41 42
    43 44 45 46 47 48 49
    and I want to filter it by replacing the center pixel p with the average of those neighborhood pixels whose values lie in the range p-10 to p+10. I am new to image processing; I think in this case p is 25, and many of the surrounding pixel values fall in that range, but I don't know exactly how to construct a convolution filter out of this.
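
    Because the set of contributing neighbours depends on the centre value, this cannot be expressed as a fixed convolution kernel; it has to be computed per pixel (it is essentially the range term of a bilateral-style filter). A sketch, taking "the range" to mean values within plus or minus 10 of the centre pixel p:

      #include <cstdlib>
      #include <vector>

      // For each pixel p, average the 7x7 neighbours q with |q - p| <= range.
      std::vector<int> RangeAverage7x7(const std::vector<int>& img, int w, int h, int range = 10)
      {
          std::vector<int> out(img.size());
          for (int y = 0; y < h; ++y) {
              for (int x = 0; x < w; ++x) {
                  const int p = img[y * w + x];
                  int sum = 0, count = 0;
                  for (int dy = -3; dy <= 3; ++dy) {
                      for (int dx = -3; dx <= 3; ++dx) {
                          const int nx = x + dx, ny = y + dy;
                          if (nx < 0 || nx >= w || ny < 0 || ny >= h) continue;  // ignore pixels outside the image
                          const int q = img[ny * w + nx];
                          if (std::abs(q - p) <= range) { sum += q; ++count; }
                      }
                  }
                  out[y * w + x] = sum / count;  // count >= 1: the centre always qualifies
              }
          }
          return out;
      }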

    Read the article

  • How do I get an img element to render under a background-image in CSS?

    - by Nat Ryall
    Basically, I am trying to put a semi-transparent div over an image to serve as a background for text for a slideshow. Unfortunately, the browser seems intent on always rendering the img over the background-image. Is there any way to fix this? Here is my current CSS for the semi-transparent div: #slideshow .info { height: 80px; margin-top: -80px; background: url(../../images/slideshow-info-pixel.png) repeat; } ... with slideshow-info-pixel.png being a single pixel, 50% opacity, PNG 24. I have so far tried z-index and the CSS must be compatible with IE6.

    Read the article

  • How to make a magnifying glass on a picture?

    - by pengwang
    Hello, I want to make a magnifying glass on a picture so that part of the picture is enlarged. In other words, I want to find an easy method to create the illusion of a magnifying glass. I know ShapeDrawable and BitmapShader(Bitmap, Shader.TileMode.REPEAT, Shader.TileMode.REPEAT) are useful, but I don't know how to do it. An example: http://i3.6.cn/cvbnm/72/60/36/73dfcc8862020e9ed366a55e72e88883.jpg Could you give me some code or a sample? Thank you.

    Read the article

  • What's the best way to draw a fullscreen quad in OpenGL 3.2?

    - by Phineas
    I'm doing ray casting in the fragment shader. I can think of a couple ways to draw a fullscreen quad for this purpose. Either draw a quad in clip space with the projection matrix set to the identity matrix, or use the geometry shader to turn a point into a triangle strip. The former uses immediate mode, deprecated in OpenGL 3.2. The latter I use out of novelty, but it still uses immediate mode to draw a point.
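
    A common OpenGL 3.2 core approach is to keep four clip-space corners in a small VBO/VAO and draw them as a triangle strip; the vertex shader passes the positions straight through (gl_Position = vec4(pos, 0.0, 1.0)), so no projection matrix, geometry shader, or immediate mode is involved. Another popular trick is a single oversized triangle whose corners are derived from gl_VertexID, which needs no vertex buffer at all. A hedged sketch of the buffer path, assuming a function loader is already in place:

      // One-time setup: a fullscreen quad directly in normalized device coordinates.
      static const GLfloat corners[8] = {
          -1.0f, -1.0f,   1.0f, -1.0f,   -1.0f, 1.0f,   1.0f, 1.0f
      };
      GLuint vao = 0, vbo = 0;
      glGenVertexArrays(1, &vao);
      glBindVertexArray(vao);
      glGenBuffers(1, &vbo);
      glBindBuffer(GL_ARRAY_BUFFER, vbo);
      glBufferData(GL_ARRAY_BUFFER, sizeof(corners), corners, GL_STATIC_DRAW);
      glEnableVertexAttribArray(0);   // attribute 0; tie it to the shader input with glBindAttribLocation before linking
      glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, (const GLvoid*)0);

      // Per frame, with the ray-casting program bound:
      glBindVertexArray(vao);
      glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);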

    Read the article

  • 2 Shaders using the same vertex data

    - by Fonix
    So im having problems rendering using 2 different shaders. Im currently rendering shapes that represent dice, what i want is if the dice is selected by the user, it draws an outline by drawing the dice completely red and slightly scaled up, then render the proper dice over it. At the moment some of the dice, for some reason, render the wrong dice for the outline, but the right one for the proper foreground dice. Im wondering if they aren't getting their vertex data mixed up somehow. Im not sure if doing something like this is even allowed in openGL: glGenBuffers(1, &_vertexBuffer); glBindBuffer(GL_ARRAY_BUFFER, _vertexBuffer); glBufferData(GL_ARRAY_BUFFER, numVertices*sizeof(GLfloat), vertices, GL_STATIC_DRAW); glEnableVertexAttribArray(effect->vertCoord); glVertexAttribPointer(effect->vertCoord, 3, GL_FLOAT, GL_FALSE, 0, 0); glEnableVertexAttribArray(effect->toon_vertCoord); glVertexAttribPointer(effect->toon_vertCoord, 3, GL_FLOAT, GL_FALSE, 0, 0); im trying to bind the vertex data to 2 different shaders here when i load my first shader i have: vertCoord = glGetAttribLocation(TexAndLighting, "position"); and the other shader has: toon_vertCoord = glGetAttribLocation(Toon, "position"); if I use the shaders independently of each other they work fine, but when i try to render both one on top of the other they get the model mixed up some times. here is how my draw function looks: - (void) draw { [EAGLContext setCurrentContext:context]; glBindVertexArrayOES(_vertexArray); effect->modelViewMatrix = mvm; effect->numberColour = GLKVector4Make(numbers[colorSelected].r, numbers[colorSelected].g, numbers[colorSelected].b, 1); effect->faceColour = GLKVector4Make(faceColors[colorSelected].r, faceColors[colorSelected].g, faceColors[colorSelected].b, 1); if(selected){ [effect drawOutline]; //this function prepares the shader glDrawElements(GL_TRIANGLES, numIndices, GL_UNSIGNED_SHORT, 0); } [effect prepareToDraw]; //same with this one glDrawElements(GL_TRIANGLES, numIndices, GL_UNSIGNED_SHORT, 0); } this is what it looks like, as you can see most of the outlines are using the wrong dice, or none at all: links to full code: http://pastebin.com/yDKb3wrD Dice.mm //rendering stuff http://pastebin.com/eBK0pzrK Effects.mm //shader stuff http://pastebin.com/5LtDAk8J //my shaders, shouldn't be anything to do with them though TL;DR: trying to use 2 different shaders that use the same vertex data, but its getting the models mixed up when rendering using both at the same time, well thats what i think is going wrong, quite stumped actually.
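
    One plausible cause, given that each shader works alone: attribute locations are assigned per program at link time, so "position" may land on index 0 in one program and a different index in the other, while the glVertexAttribPointer state set up at buffer-creation time only matches one of them. Forcing both programs to use the same location with glBindAttribLocation (available in ES 2.0) before glLinkProgram lets a single attribute setup serve both passes. A hedged sketch with made-up program handle names:

      // At program-creation time, before linking either program:
      const GLuint kPositionLoc = 0;
      glBindAttribLocation(texAndLightingProgram, kPositionLoc, "position");
      glBindAttribLocation(toonProgram,           kPositionLoc, "position");
      glLinkProgram(texAndLightingProgram);
      glLinkProgram(toonProgram);

      // One shared attribute setup (index buffer binding as in the question's VAO):
      glBindBuffer(GL_ARRAY_BUFFER, _vertexBuffer);
      glEnableVertexAttribArray(kPositionLoc);
      glVertexAttribPointer(kPositionLoc, 3, GL_FLOAT, GL_FALSE, 0, 0);

      // Outline pass, then the regular pass, using the same vertex data:
      glUseProgram(toonProgram);
      glDrawElements(GL_TRIANGLES, numIndices, GL_UNSIGNED_SHORT, 0);
      glUseProgram(texAndLightingProgram);
      glDrawElements(GL_TRIANGLES, numIndices, GL_UNSIGNED_SHORT, 0);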

    Read the article

  • Unable to get Stencil Buffer to work in iOS 4+ (5.0 works fine). [OpenGL ES 2.0]

    - by MurderDev
    So I am trying to use a stencil buffer in iOS for masking/clipping purposes. Do you guys have any idea why this code may not work? This is everything I have associated with Stencils. On iOS 4 I get a black screen. On iOS 5 I get exactly what I expect. The transparent areas of the image I drew in the stencil are the only areas being drawn later. Code is below. This is where I setup the frameBuffer, depth and stencil. In iOS the depth and stencil are combined. -(void)setupDepthBuffer { glGenRenderbuffers(1, &depthRenderBuffer); glBindRenderbuffer(GL_RENDERBUFFER, depthRenderBuffer); glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8_OES, self.frame.size.width * [[UIScreen mainScreen] scale], self.frame.size.height * [[UIScreen mainScreen] scale]); } -(void)setupFrameBuffer { glGenFramebuffers(1, &frameBuffer); glBindFramebuffer(GL_FRAMEBUFFER, frameBuffer); glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colorRenderBuffer); glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRenderBuffer); glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_STENCIL_ATTACHMENT, GL_RENDERBUFFER, depthRenderBuffer); // Check the FBO. if(glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) { NSLog(@"Failure with framebuffer generation: %d", glCheckFramebufferStatus(GL_FRAMEBUFFER)); } } This is how I am setting up and drawing the stencil. (Shader code below.) glEnable(GL_STENCIL_TEST); glDisable(GL_DEPTH_TEST); glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE); glDepthMask(GL_FALSE); glStencilFunc(GL_ALWAYS, 1, -1); glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE); glColorMask(0, 0, 0, 0); glClear(GL_STENCIL_BUFFER_BIT); machineForeground.shader = [StencilEffect sharedInstance]; [machineForeground draw]; machineForeground.shader = [BasicEffect sharedInstance]; glDisable(GL_STENCIL_TEST); glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE); glDepthMask(GL_TRUE); Here is where I am using the stencil. glEnable(GL_STENCIL_TEST); glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP); glStencilFunc(GL_EQUAL, 1, -1); ...Draw Stuff here glDisable(GL_STENCIL_TEST); Finally here is my fragment shader. varying lowp vec2 TexCoordOut; uniform sampler2D Texture; void main(void) { lowp vec4 color = texture2D(Texture, TexCoordOut); if(color.a < 0.1) gl_FragColor = color; else discard; }

    Read the article

  • Expensive HDMI Cables Make No Difference

    - by Jason Fitzpatrick
    While we’re no strangers to spreading the news that expensive HDMI cables are a ripoff, we’re happy to share yet another study that shows there’s zero difference between a $5 cable and a $95 one. Over at the British hardware review site Expert Reviews, they subjected a wide selection of HDMI cables to extensive tests in a bid to produce the end-all examination of whether or not a premium HDMI cable could actually produce a better signal. They used capture cards, pixel-by-pixel comparison of output, and other techniques to pick over individual frames until they ultimately reached the same conclusion everyone outside of the Monster sales staff had already reached: you’re getting absolutely no benefit to spending $100 on cable that can be had for under five bucks. Hit up the link below to read over their methodology. Expensive Cables Make Absolutely No Difference [via Geek News Central]

    Read the article

  • GLSL Bokeh using Quads and Textures

    - by Notoriousaur
    I'm trying to create a depth of field effect with bokeh sprites in GLSL. Specifically, what I would like to do is, for each pixel: (1) see if the pixel is out of the focal range; (2) if it is, draw a quad and apply a texture to provide a bokeh sprite. This kind of implementation is seen in the Unreal Engine and in Matt Pettineo's work; however, both implementations are in DX11 and I'm using OpenGL. I'm a bit stuck on the drawing-a-quad-and-applying-a-texture part. Does anyone know how I can do this, or can you provide any relevant links? Thanks
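
    On the GL side, "draw a quad per pixel" is usually expressed as instanced rendering rather than a per-pixel branch: one textured quad instance per source pixel, blended additively, with the vertex shader using gl_InstanceID to fetch that pixel's colour and circle of confusion from textures and collapsing in-focus instances to zero size so they contribute nothing. A hedged sketch of the host side only (program, VAO, and variable names are assumptions):

      // Scatter pass: one textured sprite instance per source pixel, accumulated additively.
      glUseProgram(bokehProgram);          // samples the bokeh sprite texture in the fragment shader
      glDisable(GL_DEPTH_TEST);
      glEnable(GL_BLEND);
      glBlendFunc(GL_ONE, GL_ONE);

      glBindVertexArray(unitQuadVao);      // ordinary 4-vertex unit quad
      glDrawArraysInstanced(GL_TRIANGLE_STRIP, 0, 4, sourceWidth * sourceHeight);

      glDisable(GL_BLEND);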

    Read the article

  • How do I set up the nvidia graphics adapter to output 1080p? It seems to be using interlaced mode

    - by keepitsimpleengineer
    After upgrading to 12.04, my mythbuntu client/server seems to be running in 1080i, the clue comes from: [ 1176.117] (II) NVIDIA(0): Setting mode "1920x1080_60i" [ 1231.340] (II) NVIDIA(0): Setting mode "DFP-1:1920x1080_60@1920x1080+0+0" This is from Xorg.0.log. This whole thing started from video tearing when watching Mythtv recordings. It didn't happen in 10.10. Should I use "TVStandard" "HD1080p" in the screen section since this is a dedicated HTPC? It only connects to an HDTV (1080p) via hdmi. Here is the current xorg.conf file: # nvidia-settings: X configuration file generated by nvidia-settings # nvidia-settings: version 270.29 (buildd@allspice) Fri Feb 25 14:42:07 UTC 2011 Section "ServerLayout" Identifier "Layout0" Screen 0 "Screen0" 0 0 # commented out by update-manager, HAL is now used and auto-detects devices # Keyboard settings are now read from /etc/default/console-setup # InputDevice "Keyboard0" "CoreKeyboard" # commented out by update-manager, HAL is now used and auto-detects devices # Keyboard settings are now read from /etc/default/console-setup # InputDevice "Mouse0" "CorePointer" Option "Xinerama" "0" EndSection Section "Files" FontPath "unix/:7100" EndSection # commented out by update-manager, HAL is now used and auto-detects devices # Keyboard settings are now read from /etc/default/console-setup #Section "InputDevice" # # generated from default # Identifier "Mouse0" # Driver "mouse" # Option "Protocol" "auto" # Option "Device" "/dev/psaux" # Option "Emulate3Buttons" "no" # Option "ZAxisMapping" "4 5" #EndSection # commented out by update-manager, HAL is now used and auto-detects devices # Keyboard settings are now read from /etc/default/console-setup #Section "InputDevice" # # generated from default # Identifier "Keyboard0" # Driver "kbd" #EndSection Section "Monitor" # HorizSync source: edid, VertRefresh source: edid Identifier "Monitor0" VendorName "Unknown" ModelName "SAMSUNG" HorizSync 26.0 - 81.0 VertRefresh 24.0 - 75.0 Option "DPMS" EndSection Section "Device" Identifier "Device0" Driver "nvidia" VendorName "NVIDIA Corporation" BoardName "GeForce GT 240" Option "TripleBuffer" "1" EndSection Section "Screen" Identifier "Screen0" Device "Device0" Monitor "Monitor0" DefaultDepth 24 Option "TwinView" "0" Option "metamodes" "DFP: nvidia-auto-select +0+0" SubSection "Display" Depth 24 EndSubSection EndSection After a little digging, the question changes slightly, to wit... Per Chapter 19 of nvidia README... "If the EDID for the display device reported a preferred mode timing, and that mode timing is considered a valid mode, then that mode is used as the "nvidia-auto-select" mode." The EDID for my HDMI connected LCD monitor says use first device as preferred. 
Prefer first detailed timing : Yes Also: (--) NVIDIA(0): EDID maximum pixel clock : 230.0 MHz The list: (from startx -- -verbose 6 ) (--) NVIDIA(0): Detailed Timings: (--) NVIDIA(0): 1920 x 1080 @ 60 Hz (--) NVIDIA(0): Pixel Clock : 148.50 MHz (--) NVIDIA(0): HRes, HSyncStart : 1920, 2008 (--) NVIDIA(0): HSyncEnd, HTotal : 2052, 2200 (--) NVIDIA(0): VRes, VSyncStart : 1080, 1084 (--) NVIDIA(0): VSyncEnd, VTotal : 1089, 1125 (--) NVIDIA(0): H/V Polarity : +/+ This is the actual mode selected: (from xorg.0.log) (--) NVIDIA(0): 1920 x 1080 @ 60 Hz (--) NVIDIA(0): Pixel Clock : 74.18 MHz (--) NVIDIA(0): HRes, HSyncStart : 1920, 2008 (--) NVIDIA(0): HSyncEnd, HTotal : 2052, 2200 (--) NVIDIA(0): VRes, VSyncStart : 1080, 1084 (--) NVIDIA(0): VSyncEnd, VTotal : 1094, 1124 (--) NVIDIA(0): H/V Polarity : +/+ (--) NVIDIA(0): Extra : Interlaced (--) NVIDIA(0): CEA Format : 5 So my HTPC is down-converting to 1080i and then the Monitor is up-converting to 1080p How can I fix this, please?

    Read the article
