Search Results

Search found 1587 results on 64 pages for 'pixel reaper'.

Page 17/64

  • Error X3650 when compiling shader in XNA

    - by Saikai
    I'm attempting to convert the XBDEV.NET Mosaic Shader for use in my XNA project and having trouble. The compiler errors out because of the half globals. At first I tried replacing the globals and just writing the variables explicitly in the code, but that garbles the output. Next I tried replacing all the half variables with float, but that still garbles the resulting image. I call the effect file from SpriteBatch.Begin(). Is there a way to convert this shader to the new pixel shader conventions? Are there any good tutorials for this topic? Here is the shader file for reference:

        /*****************************************************************************/
        /*
            File:     tiles.fx
            Details:  Modified version of the NVIDIA Composer FX Demo Program 2004
                      Produces a tiled mosaic effect on the output.
            Requires: Vertex Shader 1.1, Pixel Shader 2.0
            Modified by: [email protected] (www.xbdev.net)
        */
        /*****************************************************************************/

        float4 ClearColor : DIFFUSE = { 0.0f, 0.0f, 0.0f, 1.0f };
        float ClearDepth = 1.0f;

        /******************************** TWEAKABLES ********************************/
        half NumTiles = 40.0;
        half Threshhold = 0.15;
        half3 EdgeColor = { 0.7f, 0.7f, 0.7f };

        /*****************************************************************************/
        texture SceneMap : RENDERCOLORTARGET
        <
            float2 ViewportRatio = { 1.0f, 1.0f };
            int MIPLEVELS = 1;
            string format = "X8R8G8B8";
            string UIWidget = "None";
        >;

        sampler SceneSampler = sampler_state
        {
            texture = <SceneMap>;
            AddressU = CLAMP;
            AddressV = CLAMP;
            MIPFILTER = NONE;
            MINFILTER = LINEAR;
            MAGFILTER = LINEAR;
        };

        /***************************** DATA STRUCTS *********************************/
        struct vertexInput
        {
            half3 Position : POSITION;
            half3 TexCoord : TEXCOORD0;
        };

        /* data passed from vertex shader to pixel shader */
        struct vertexOutput
        {
            half4 HPosition : POSITION;
            half2 UV : TEXCOORD0;
        };

        /******************************* Vertex shader ******************************/
        vertexOutput VS_Quad(vertexInput IN)
        {
            vertexOutput OUT = (vertexOutput)0;
            OUT.HPosition = half4(IN.Position, 1);
            OUT.UV = IN.TexCoord.xy;
            return OUT;
        }

        /********************************** pixel shader ****************************/
        half4 tilesPS(vertexOutput IN) : COLOR
        {
            half size = 1.0 / NumTiles;
            half2 Pbase = IN.UV - fmod(IN.UV, size.xx);
            half2 PCenter = Pbase + (size / 2.0).xx;
            half2 st = (IN.UV - Pbase) / size;
            half4 c1 = (half4)0;
            half4 c2 = (half4)0;
            half4 invOff = half4((1 - EdgeColor), 1);
            if (st.x > st.y) { c1 = invOff; }
            half threshholdB = 1.0 - Threshhold;
            if (st.x > threshholdB) { c2 = c1; }
            if (st.y > threshholdB) { c2 = c1; }
            half4 cBottom = c2;
            c1 = (half4)0;
            c2 = (half4)0;
            if (st.x > st.y) { c1 = invOff; }
            if (st.x < Threshhold) { c2 = c1; }
            if (st.y < Threshhold) { c2 = c1; }
            half4 cTop = c2;
            half4 tileColor = tex2D(SceneSampler, PCenter);
            half4 result = tileColor + cTop - cBottom;
            return result;
        }

        /*****************************************************************************/
        technique tiles
        {
            pass p0
            {
                VertexShader = compile vs_1_1 VS_Quad();
                ZEnable = false;
                ZWriteEnable = false;
                CullMode = None;
                PixelShader = compile ps_2_0 tilesPS();
            }
        }

    Read the article

  • How to initialize and implement a matrix inside a function in Objective-C?

    - by Rajendra Bhole
    Hi, I want to develop an application in which I need to initialize a matrix for manipulation. The code is as follows:

        struct pixel {
            Byte r, g, b, a;
            int count;
        };

        - (NSInteger) processImage1: (UIImage*) image {
            struct pixel* pixels = (struct pixel*) calloc(1, image.size.width * image.size.height * sizeof(struct pixel));
            if (pixels != nil) {
                // Create a new bitmap
                CGContextRef context = CGBitmapContextCreate(
                    (void*) pixels,
                    image.size.width,
                    image.size.height,
                    8,
                    image.size.width * 4,
                    CGImageGetColorSpace(image.CGImage),
                    kCGImageAlphaPremultipliedLast
                );
                if (context != NULL) {
                    // Draw the image in the bitmap
                    CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, image.size.width, image.size.height), image.CGImage);
                    NSUInteger numberOfPixels = image.size.width * image.size.height;
                    while (numberOfPixels > 0) {
                        if (pixels->r == 254 || pixels->g == 77 || pixels->b == 254) {
                            numberOfRedPixels++;
                        }
                        pixels++;
                        numberOfPixels--;
                    }
                    CGContextRelease(context);
                }
                free(pixels);
            }
            return 1;
        }

    I want to implement the matrix inside the - (NSInteger) processImage1: (UIImage*) image {} method. The matrix should have row = image.size.width and column = image.size.height.
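    For what it's worth, the calloc'd buffer above can already be addressed as a width-by-height matrix by computing a row-major index, so a separate matrix type is not strictly needed. A minimal Python sketch of the indexing idea (the names here are illustrative, not taken from the question's code):

        def make_matrix(width, height, fill=0):
            # A height x width grid stored as nested lists; rows are y, columns are x.
            return [[fill for _ in range(width)] for _ in range(height)]

        def flat_index(x, y, width):
            # Row-major index of pixel (x, y) in a flat buffer of width * height elements.
            return y * width + x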

    Read the article

  • How to Use Calculated Color Values with ColorMatrix?

    - by Otaku
    I am changing the color values of each pixel in an image based on a calculation. The problem is that this takes over 5 seconds on my machine with a 1000x1333 image, and I'm looking for a way to make it much faster. I think ColorMatrix may be an option, but I'm having a difficult time figuring out how I would get a pixel's RGB values, use them in a calculation, and then set the new pixel value. I can see how this could be done if I were just modifying (multiplying, subtracting, etc.) the original value with ColorMatrix, but not how I can use the pixel's returned value to calculate a new value. For example:

        Sub DarkenPicture()
            Dim clrTestFolderPath = "C:\Users\Me\Desktop\ColorTest\"
            Dim originalPicture = "original.jpg"
            Dim Luminance As Single
            Dim bitmapOriginal As Bitmap = Image.FromFile(clrTestFolderPath + originalPicture)
            Dim Clr As Color
            Dim newR As Byte
            Dim newG As Byte
            Dim newB As Byte
            For x = 0 To bitmapOriginal.Width - 1
                For y = 0 To bitmapOriginal.Height - 1
                    Clr = bitmapOriginal.GetPixel(x, y)
                    Luminance = ((0.21 * Clr.R) + (0.72 * Clr.G) + (0.07 * Clr.B)) / 255
                    newR = Clr.R * Luminance
                    newG = Clr.G * Luminance
                    newB = Clr.B * Luminance
                    bitmapOriginal.SetPixel(x, y, Color.FromArgb(newR, newG, newB))
                Next
            Next
            bitmapOriginal.Save(clrTestFolderPath + "colorized.jpg", ImageFormat.Jpeg)
        End Sub

    The Luminance value is the calculated one. I know I can set ColorMatrix's M00, M11, M22 to 0, 0, 0 respectively and then put a new value in M40, M41, M42, but that new value is calculated from a multiplication and addition of that pixel's own components (((0.21 * Clr.R) + (0.72 * Clr.G) + (0.07 * Clr.B)) / 255), and the result of that - Luminance - is multiplied by each color component. Is this even possible with ColorMatrix?
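    One observation, offered as a sketch rather than a definitive answer: because each output channel here is (channel x Luminance) and Luminance itself depends on R, G and B, the mapping is quadratic in the pixel's components, while a ColorMatrix can only apply one fixed linear (affine) transform to every pixel. The usual speed fix is therefore to avoid GetPixel/SetPixel and process the whole image as an array. The arithmetic itself, written as a whole-array operation in NumPy just to make it concrete (this is not GDI+ code):

        import numpy as np

        def darken(rgb):
            """rgb: (H, W, 3) uint8 array; returns channel * luminance for every pixel."""
            rgb = rgb.astype(np.float64)
            luminance = (0.21 * rgb[..., 0] + 0.72 * rgb[..., 1] + 0.07 * rgb[..., 2]) / 255.0
            return (rgb * luminance[..., np.newaxis]).astype(np.uint8)

    In GDI+ the equivalent would be a single pass over the pixel buffer obtained with Bitmap.LockBits rather than per-pixel GetPixel/SetPixel calls.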

    Read the article

  • OpenGL pixels drawn with each horizontal pair swapped

    - by Tim Kane
    I'm somewhat new to OpenGL, though I'm fairly sure my problem lies in the pixel format being used, or how my texture is being generated... I'm drawing a texture onto a flat 2D quad using a 16-bit RGB5_A1 pixel format, though I don't make use of any alpha at this stage. The problem I'm having is that each pair of horizontal pixel values has been swapped. That is... if the pixel positions should be in this order (assume an 8x2 image)

        0 1 2 3 4 5 6 7

    they are instead drawn as

        1 0 3 2 5 4 7 6

    Or, more clearly, from the comparison image I attached: left is what I get, right is what I should get. The question is... how have I ended up with this? Is there something wrong with the pixel format? Unlikely, since the colours all appear correct, and I would expect all kinds of nasty if it were down to endianness. Suggestions greatly appreciated. Update: turns out the problem was in my source renderer. Interestingly, I've avoided the problem entirely by using 32-bit textures (haven't tried 24-bit at this point).

    Read the article

  • "Find all tiles connected to this one" project

    - by Omega
    Remember MS Paint? The bucket tool? If you used it and clicked on a pixel, all pixels connected to that pixel that are the same are affected. The idea, I suppose, is to check whether any pixel adjacent to the selected one is of the same type as the selected one; if so, check for more adjacent pixels from that one, and so on. I want to implement something similar in VB.NET. Basically I have a 2D array which represents the map. Let's assume there are only two types of tile: 0 and 1. Now, I have pretty much everything ready: I have my 2D map, I can tell which tile was clicked, and I can tell which array indexes represent that tile. Now for the "painting" process. Whenever I think about it, I can't figure out a convenient way to execute such an iteration. Can someone help me choose a correct design/way/tip to achieve this?
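    The question asks for VB.NET, but the standard approach is language-agnostic: an iterative flood fill that keeps a queue (or stack) of tiles to visit, which also avoids the deep recursion a naive solution would hit on large regions. A minimal Python sketch of the idea:

        from collections import deque

        def flood_fill(grid, start_x, start_y, new_value):
            """Replace the 4-connected region containing grid[start_y][start_x] with new_value."""
            target = grid[start_y][start_x]
            if target == new_value:
                return
            height, width = len(grid), len(grid[0])
            queue = deque([(start_x, start_y)])
            while queue:
                x, y = queue.popleft()
                if 0 <= x < width and 0 <= y < height and grid[y][x] == target:
                    grid[y][x] = new_value
                    queue.extend([(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)])

    Translating this to VB.NET is mostly a matter of swapping the deque for a Queue(Of Point) and the list-of-lists for the existing 2D array.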

    Read the article

  • cocos2d - how to draw a bottle sprite with dynamically changing water level

    - by Oliver
    I am trying to draw a (2D) sprite in cocos2d showing a bottle. The bottle shall be able to have a dynamic water level (i.e. the amount of water in the bottle can change over the lifetime of the sprite). I am wondering how to do this. I currently have a PNG file of the empty bottle. I adjusted the alpha channel of that PNG so that when rendering the sprite I can draw a blue rectangle and render the bottle texture over it. That gives the impression of the water being inside the bottle. However, the bottle's shape is not a rectangle itself, of course, so the water can be seen outside the bounds of the bottle. I can change the bottle image so that only the bottle itself is transparent and the "outside world" gets an opaque color and alpha value, but that again prevents the "world background" from being visible in that area. I simply don't have a clue how to realize this in a sane manner. Do I really have to read every pixel of the bottle image, identify which pixels are "inside" the bottle, and then draw the water pixel by pixel? There must be an easier way, right? ;) Any best practices for these kinds of tasks? edit: see the picture below to make it somewhat clearer what I am talking about ;) http://i47.tinypic.com/10rqww0.png
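    One common answer, sketched here rather than prescribed: keep a second grayscale mask image of the bottle interior, clip the water rectangle to that mask, and then draw the bottle art on top, so no per-pixel reading is needed at run time (newer cocos2d versions expose stencil/clipping nodes for exactly this kind of masking). The compositing order is easiest to show offline with Python/Pillow; the file names and the 0.6 fill level below are made up for the sketch:

        from PIL import Image

        background = Image.open("scene.png").convert("RGBA")
        bottle = Image.open("bottle.png").convert("RGBA")        # bottle art, interior transparent
        inside = Image.open("bottle_inside.png").convert("L")    # white where the bottle interior is

        water_level = 0.6                                        # fraction of the bottle that is filled
        w, h = bottle.size
        water = Image.new("RGBA", (w, h), (40, 90, 200, 255))

        # Blank out the mask above the water line so only the filled part shows.
        level_mask = inside.copy()
        level_mask.paste(0, (0, 0, w, int(h * (1.0 - water_level))))

        frame = background.copy()
        frame.paste(water, (0, 0), level_mask)                   # water clipped to the bottle interior
        frame.paste(bottle, (0, 0), bottle)                      # bottle art (using its own alpha) on top
        frame.save("frame.png")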

    Read the article

  • ffmpeg: Could not find codec parameters for stream 0 (Video: h264) unspecified size

    - by dempap
    I am trying to convert a video from .raw to .mp4. For this reason I downloaded, built and installed both x264 and ffmpeg. However, the command

        ffmpeg -f h264 -i output.raw -vcodec copy output.mp4

    fails with the error quoted in the title. Is there any way to fix this? Commands I also ran:

        1. root@beagleboard:/# v4l2-ctl --list-formats
           ioctl: VIDIOC_ENUM_FMT
           Index        : 0
           Type         : Video Capture
           Pixel Format : 'YUYV'
           Name         : YUV 4:2:2 (YUYV)

           Index        : 1
           Type         : Video Capture
           Pixel Format : 'MJPG' (compressed)
           Name         : MJPEG

        2. root@beagleboard:/dev# v4l2-ctl --set-fmt-video=pixelformat=0

    Read the article

  • How to change the X-Windows default border width for all window frames in Ubuntu using Gnome 2.28

    - by Heston T. Holtmann
    Way back from the Windows 3.x days to the latest 64-bit Windows 7 (classic/standard theme), there has been a way to make the window edge border wider than 1 pixel... I often use 3 to 5 pixels to make the edges easy to grab on high-resolution displays and high-DPI monitors. There doesn't seem to be an easy or obvious way to do this with the Gnome X windowing system. Does anyone know how?

    Read the article

  • How to crop the UIImage?

    - by Rajendra Bhole
    Hi, I am developing an application in which I process an image using its pixels, but that image processing takes a lot of time. Therefore I want to crop the UIImage (keep only the middle part of the image, i.e. remove/crop the bordered part of the image). The code I have developed so far is:

        - (NSInteger) processImage1: (UIImage*) image {
            CGFloat width = image.size.width;
            CGFloat height = image.size.height;
            struct pixel* pixels = (struct pixel*) calloc(1, image.size.width * image.size.height * sizeof(struct pixel));
            if (pixels != nil) {
                // Create a new bitmap
                CGContextRef context = CGBitmapContextCreate(
                    (void*) pixels,
                    image.size.width,
                    image.size.height,
                    8,
                    image.size.width * 4,
                    CGImageGetColorSpace(image.CGImage),
                    kCGImageAlphaPremultipliedLast
                );
                if (context != NULL) {
                    // Draw the image in the bitmap
                    CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, image.size.width, image.size.height), image.CGImage);
                    NSUInteger numberOfPixels = image.size.width * image.size.height;
                    NSMutableArray *numberOfPixelsArray = [[[NSMutableArray alloc] initWithCapacity:numberOfPixelsArray] autorelease];
                }

    How do I take (cropping away the border) the middle part of the UIImage?
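    For reference, the crop-rectangle arithmetic (keep the centre, drop a fixed border on each side) is the main thing to get right; on iOS the actual crop is usually done with CGImageCreateWithImageInRect on image.CGImage. The geometry, illustrated with Python/Pillow and an arbitrary 25% border:

        from PIL import Image

        img = Image.open("photo.jpg")
        w, h = img.size
        border_x, border_y = w // 4, h // 4          # how much to cut from each edge
        middle = img.crop((border_x, border_y, w - border_x, h - border_y))
        middle.save("photo_middle.jpg")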

    Read the article

  • iPhone: Changing CGImageAlphaInfo of CGImage

    - by TechZen
    I have a PNG image that has an unsupported bitmap graphics context pixel format. Whenever I attempt to resize the image, CGBitmapContextCreate() chokes on the unsupported format (error formatted for easy reading):

        CGBitmapContextCreate: unsupported parameter combination:
        8 integer bits/component; 32 bits/pixel; 3-component colorspace;
        kCGImageAlphaLast; 1344 bytes/row.

    The list of supported pixel formats definitely does not support this combination. It appears I need to redraw the image and move the alpha channel information to kCGImageAlphaPremultipliedFirst or kCGImageAlphaPremultipliedLast. I have no idea how to go about doing this. There is nothing unusual about the PNG file and it isn't corrupted; it works in all other contexts just fine. I encountered this error just by chance, but obviously my users might have similarly formatted files, so I will have to check my app's imported images and correct for this problem.

    Read the article

  • How to access pixels of an NSBitmapImageRep?

    - by Paperflyer
    I have an NSBitmapImageRep that is created like this:

        NSBitmapImageRep *imageRep = [[NSBitmapImageRep alloc]
            initWithBitmapDataPlanes:NULL
                          pixelsWide:waveformSize.width
                          pixelsHigh:waveformSize.height
                       bitsPerSample:8
                     samplesPerPixel:4
                            hasAlpha:YES
                            isPlanar:YES
                      colorSpaceName:NSCalibratedRGBColorSpace
                         bytesPerRow:0
                        bitsPerPixel:0];

    Now I want to access the pixel data, so I get a pointer to the pixel planes using

        unsigned char *bitmapData;
        [imageRep getBitmapDataPlanes:&bitmapData];

    According to the documentation this returns a C array of five character pointers. But how can it do that? Since the type of the argument is unsigned char **, it can only return an array of chars, not an array of char pointers. So this leaves me wondering how to access the individual pixels. Do you have an idea how to do that? (I know there is the method -setColor:atX:y:, but it seems to be pretty slow if invoked for every single pixel of a big bitmap.)

    Read the article

  • Unreachable code detected by using const variables

    - by Anton Roth
    I have the following code:

        private const FlyCapture2Managed.PixelFormat f7PF = FlyCapture2Managed.PixelFormat.PixelFormatMono16;

        public PGRCamera(ExamForm input, bool red, int flags, int drawWidth, int drawHeight) {
            if (f7PF == FlyCapture2Managed.PixelFormat.PixelFormatMono8) {
                bpp = 8;                                 // unreachable warning
            } else if (f7PF == FlyCapture2Managed.PixelFormat.PixelFormatMono16) {
                bpp = 16;
            } else {
                MessageBox.Show("Camera misconfigured"); // unreachable warning
            }
        }

    I understand that this code is unreachable, but I don't want that message to appear, since it's a compile-time configuration that just needs a change in the constant to test different settings, and the bits per pixel (bpp) change depending on the pixel format. Is there a good way to have just one variable be constant, deriving the other from it, but without producing an unreachable-code warning? Note that I need both values: on start-up the camera needs to be configured to the proper pixel format, and my image-understanding code needs to know how many bits per pixel the image has. So, is there a good workaround, or do I just live with this warning?

    Read the article

  • Resizing a binarized image in C#

    - by mouthpiec
    Hi, I have the following code to resize a binarized image (hence every pixel value is either 0 [black] or 255 [white]):

        Bitmap ResizedCharImage = new Bitmap(newwidth, newheight);
        using (Graphics g = Graphics.FromImage((Image)ResizedCharImage))
        {
            g.CompositingQuality = CompositingQuality.HighQuality;
            g.InterpolationMode = InterpolationMode.HighQualityBilinear;
            g.SmoothingMode = SmoothingMode.HighQuality;
            g.PixelOffsetMode = PixelOffsetMode.HighQuality;
            g.DrawImage(CharBitmap, new Rectangle(0, 0, newwidth, newheight), new Rectangle(0, 0, CharBitmap.Width, CharBitmap.Height), GraphicsUnit.Pixel);
        }

    The problem I am having is that when resizing (I am enlarging the image) some of the pixel values become 254, 253, 1, 2, etc. I need this not to occur. Is this possible, maybe by changing one of the Graphics properties?
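    The intermediate values come from the bilinear interpolation blending neighbouring pixels; nearest-neighbour resampling copies existing values instead (in GDI+ that is InterpolationMode.NearestNeighbor on the Graphics object). The same idea, illustrated with Python/Pillow rather than GDI+:

        from PIL import Image

        img = Image.open("binarized.png").convert("L")   # strictly 0/255 pixels
        # NEAREST copies source pixel values instead of interpolating between them,
        # so the enlarged image stays strictly 0/255.
        enlarged = img.resize((img.width * 2, img.height * 2), Image.NEAREST)
        enlarged.save("binarized_big.png")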

    Read the article

  • Find the closest vector

    - by Alexey Lebedev
    Hello! Recently I wrote an algorithm to quantize an RGB image. Every pixel is represented by an (R,G,B) vector, and the quantization codebook is a set of 3-dimensional vectors. Every pixel of the image needs to be mapped to (say, "replaced by") the codebook pixel closest in terms of Euclidean distance (more exactly, squared Euclidean). I did it as follows:

        class EuclideanMetric(DistanceMetric):
            def __call__(self, x, y):
                d = x - y
                return sqrt(sum(d * d, -1))

        class Quantizer(object):
            def __init__(self, codebook, distanceMetric = EuclideanMetric()):
                self._codebook = codebook
                self._distMetric = distanceMetric

            def quantize(self, imageArray):
                quantizedRaster = zeros(imageArray.shape)
                X = quantizedRaster.shape[0]
                Y = quantizedRaster.shape[1]
                for i in xrange(0, X):
                    print i
                    for j in xrange(0, Y):
                        dist = self._distMetric(imageArray[i,j], self._codebook)
                        code = argmin(dist)
                        quantizedRaster[i,j] = self._codebook[code]
                return quantizedRaster

    ...and it works awfully: almost 800 seconds on my Pentium Core Duo 2.2 GHz with 4 GB of memory, for an image of 2600*2700 pixels :( Is there a way to somewhat optimize this? Maybe a different algorithm or some Python-specific optimizations.
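    The per-pixel Python loop is what dominates those 800 seconds; the same nearest-codebook search can be pushed down into NumPy so the whole image is handled by array operations. A rough sketch of the vectorized version (the intermediate distance array has shape H*W x K, so a very large image may need to be processed in row chunks):

        import numpy as np

        def quantize(image_array, codebook):
            """Map every pixel of the (H, W, 3) image_array to its nearest row of the (K, 3) codebook."""
            codebook = np.asarray(codebook, dtype=np.float64)
            pixels = image_array.reshape(-1, 1, 3).astype(np.float64)   # (H*W, 1, 3)
            diff = pixels - codebook[np.newaxis, :, :]                  # (H*W, K, 3)
            dist = np.einsum('ijk,ijk->ij', diff, diff)                 # squared Euclidean distances
            codes = dist.argmin(axis=1)                                 # index of nearest codebook entry
            return codebook[codes].reshape(image_array.shape)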

    Read the article

  • How to compare memory bits in C++?

    - by Trunet
    Hi, I need help with a memory bit comparison function. I bought a LED Matrix here with 4 x HT1632C chips and I'm using it on my arduino mega2560. There're no code available for this chipset(it's not the same as HT1632) and I'm writing on my own. I have a plot function that get x,y coordinates and a color and that pixel turn on. Only this is working perfectly. But I need more performance on my display so I tried to make a shadowRam variable that is a "copy" of my device memory. Before I plot anything on display it checks on shadowRam to see if it's really necessary to change that pixel. When I enabled this(getShadowRam) on plot function my display has some, just SOME(like 3 or 4 on entire display) ghost pixels(pixels that is not supposed to be turned on). If I just comment the prev_color if's on my plot function it works perfectly. Also, I'm cleaning my shadowRam array setting all matrix to zero. variables: #define BLACK 0 #define GREEN 1 #define RED 2 #define ORANGE 3 #define CHIP_MAX 8 byte shadowRam[63][CHIP_MAX-1] = {0}; getShadowRam function: byte HT1632C::getShadowRam(byte x, byte y) { byte addr, bitval, nChip; if (x>=32) { nChip = 3 + x/16 + (y>7?2:0); } else { nChip = 1 + x/16 + (y>7?2:0); } bitval = 8>>(y&3); x = x % 16; y = y % 8; addr = (x<<1) + (y>>2); if ((shadowRam[addr][nChip-1] & bitval) && (shadowRam[addr+32][nChip-1] & bitval)) { return ORANGE; } else if (shadowRam[addr][nChip-1] & bitval) { return GREEN; } else if (shadowRam[addr+32][nChip-1] & bitval) { return RED; } else { return BLACK; } } plot function: void HT1632C::plot (int x, int y, int color) { if (x<0 || x>X_MAX || y<0 || y>Y_MAX) return; if (color != BLACK && color != GREEN && color != RED && color != ORANGE) return; char addr, bitval; byte nChip; byte prev_color = HT1632C::getShadowRam(x,y); bitval = 8>>(y&3); if (x>=32) { nChip = 3 + x/16 + (y>7?2:0); } else { nChip = 1 + x/16 + (y>7?2:0); } x = x % 16; y = y % 8; addr = (x<<1) + (y>>2); switch(color) { case BLACK: if (prev_color != BLACK) { // compare with memory to only set if pixel is other color // clear the bit in both planes; shadowRam[addr][nChip-1] &= ~bitval; HT1632C::sendData(nChip, addr, shadowRam[addr][nChip-1]); shadowRam[addr+32][nChip-1] &= ~bitval; HT1632C::sendData(nChip, addr+32, shadowRam[addr+32][nChip-1]); } break; case GREEN: if (prev_color != GREEN) { // compare with memory to only set if pixel is other color // set the bit in the green plane and clear the bit in the red plane; shadowRam[addr][nChip-1] |= bitval; HT1632C::sendData(nChip, addr, shadowRam[addr][nChip-1]); shadowRam[addr+32][nChip-1] &= ~bitval; HT1632C::sendData(nChip, addr+32, shadowRam[addr+32][nChip-1]); } break; case RED: if (prev_color != RED) { // compare with memory to only set if pixel is other color // clear the bit in green plane and set the bit in the red plane; shadowRam[addr][nChip-1] &= ~bitval; HT1632C::sendData(nChip, addr, shadowRam[addr][nChip-1]); shadowRam[addr+32][nChip-1] |= bitval; HT1632C::sendData(nChip, addr+32, shadowRam[addr+32][nChip-1]); } break; case ORANGE: if (prev_color != ORANGE) { // compare with memory to only set if pixel is other color // set the bit in both the green and red planes; shadowRam[addr][nChip-1] |= bitval; HT1632C::sendData(nChip, addr, shadowRam[addr][nChip-1]); shadowRam[addr+32][nChip-1] |= bitval; HT1632C::sendData(nChip, addr+32, shadowRam[addr+32][nChip-1]); } break; } } If helps: The datasheet of board I'm using. On page 7 has the memory mapping I'm using. Also, I have a video of display working.

    Read the article

  • UITableViewCell and strange behaviour in grouped UITableView

    - by evangelion2100
    I'm working on a grouped UITableView, with 4 sections of one row per section, and I have a strange behaviour with the cells. The cells are plain UITableViewCells, but the height of the cells is around 60 - 80 pixels. The tableview renders the cells correctly with round corners, but when I select the cells, they appear blue and rectangular. I don't know why the cells behave like this, because I have another grouped UITableView with custom cells of 88 pixel height and those cells work like they should. If I change the height to the default height of 44 pixels, the cells behave like they should. Does anyone know about this behaviour and what the cause is? Like I mentioned, I don't do any fancy stuff: I use default UITableViewCells in a static, grouped UITableView with 4 sections with 1 row in each section. evangelion2100

    Read the article

  • The easiest way to draw an image?

    - by Benno
    Assume you want to read an image file in a common file format from the hard drive, change the color of one pixel, and display the resulting image on the screen, in C++. Which (open-source) libraries would you recommend to accomplish the above with the least amount of code? Alternatively, which libraries would do the above in the most elegant way possible? A bit of background: I have been reading a lot of computer graphics literature recently, and there are lots of relatively easy, pixel-based algorithms which I'd like to implement. However, while the algorithms themselves would usually be straightforward to implement, the amount of framework needed to manipulate an image on a per-pixel basis and display the result has stopped me from doing it.
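    Just to put a number on "least amount of code": with a high-level imaging library the whole read/modify/display cycle is three or four calls. Shown here in Python/Pillow purely for scale; in C++, libraries such as OpenCV or CImg offer a comparably short path (this is not a claim about any specific C++ API):

        from PIL import Image

        img = Image.open("input.png").convert("RGB")   # read a common file format
        img.putpixel((10, 10), (255, 0, 0))            # change the color of one pixel
        img.show()                                     # display the resulting image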

    Read the article

  • How can I modify an Android bitmap in C++ (JNI/NDK) so that I can use on the Java side?

    - by HardCoder
    I call a C++ function over JNI and pass an RGBA_8888 bitmap, lock it, change the values, unlock it, return, and then display it in Java, with this C++ code:

        AndroidBitmap_getInfo(env, map, &info);
        AndroidBitmap_lockPixels(env, map, (void**)&pixel);
        for (i = info.width * info.height - 1; i >= 0; i--) {
            pixel[i] = 0xf1f1f1f1;
        }
        AndroidBitmap_unlockPixels(env, map);

    The problem I have is that the bitmap does not look as I expect, and the pixel values (verified with getPixel) are not the same in Java as what I set them to in C++. When I set the bitmap values to 0xffffffff I get the correct value in Java, but for many others I don't; 0xf1f1f1f1, for example, turns into 0xF1FFFFFF. What do I have to do to make it work? PS: I am using Android 2.3.4

    Read the article

  • Image processing custom filter 7 by 7

    - by ladiesMan217
    Let's say I have a 7 by 7 neighborhood around a pixel that looks like this:

         1  2  3  4  5  6  7
         8  9 10 11 12 13 14
        15 16 17 18 19 20 21
        22 23 24 25 26 27 28
        29 30 31 32 33 34 35
        36 37 38 39 40 41 42
        43 44 45 46 47 48 49

    and I want to filter it by replacing the center pixel p with the average of those neighborhood pixels whose values lie within -10 <= (value - p) <= 10. I am new to image processing; I think in this case p is 25, and around 25 there are many pixel values in that range, but I don't know exactly how to construct a convolution filter out of this.
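    Because the set of pixels being averaged depends on the center pixel's own value, this is a non-linear, data-dependent filter and cannot be expressed as a single fixed convolution kernel; the straightforward route is to slide the 7x7 window and average only the neighbours that pass the threshold. A small (unoptimized) NumPy sketch of that idea, with radius 3 giving the 7x7 window and a 2D grayscale input:

        import numpy as np

        def selective_mean_filter(img, radius=3, tol=10):
            """Replace each pixel by the mean of the window pixels within +/- tol of it."""
            img = img.astype(np.float64)
            padded = np.pad(img, radius, mode='edge')
            out = np.empty_like(img)
            h, w = img.shape
            for y in range(h):
                for x in range(w):
                    window = padded[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
                    mask = np.abs(window - img[y, x]) <= tol
                    out[y, x] = window[mask].mean()   # the center always passes, so mask is never empty
            return out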

    Read the article

  • How would I get an img element to render under a background-image in CSS.

    - by Nat Ryall
    Basically, I am trying to put a semi-transparent div over an image to serve as a background for text for a slideshow. Unfortunately, the browser seems intent on always rendering the img over the background-image. Is there any way to fix this? Here is my current CSS for the semi-transparent div:

        #slideshow .info {
            height: 80px;
            margin-top: -80px;
            background: url(../../images/slideshow-info-pixel.png) repeat;
        }

    ... with slideshow-info-pixel.png being a single-pixel, 50% opacity PNG-24. I have so far tried z-index, and the CSS must be compatible with IE6.

    Read the article

  • How do you create a cbuffer or global variable that is gpu modifiable?

    - by bobobobo
    I'm implementing tonemapping in a pixel shader, for HDR lighting. The vertex shader outputs vertices with colors. I need to find the max color and save it in a global. However, when I try to write the global in my HLSL code,

        // clamp the max color below by this color
        clamp(maxColor, output.color, float4(1e6, 1e6, 1e6, 1e6));

    I see:

        error X3025: global variables are implicitly constant, enable compatibility mode to allow modification

    What is the correct way to declare a shader global in D3D11 that the vertex shader can write to and the pixel shader can read? I realize this is a bit tough, since the vertex shaders are supposed to run in parallel, and introducing a shader global that they all write to means a lock...

    Read the article

  • shader coding: calculate screen coordinates of fragment

    - by Jay
    Good morning, I'm new to shader coding and trying to implement some visual effects code in shaders using billboards. (Yes, I couldn't have picked anything harder to start with, but I'm lucky that way.)

    Setup: I have rendered the full-screen z depth to an array of floats in a previous pass. In the fragment shader I need the scene depth where the rendered fragment is displayed (to see if it's occluded). I can use tex2D() to get the depth value if I have the screen coordinates of the point being rendered in the fragment shader.

    Question: In the fragment shader, how do you calculate the screen coordinates of the pixel (in the range 0-1.0)? Is the position passed to the fragment shader a pixel offset? If so, I guess it would be: float2(position.x / screen-width, position.y / screen-height). Thanks for any help.

    Read the article

  • Using SurfaceFormat.Single and HLSL for GPGPU with XNA

    - by giancarlo todone
    I'm trying to implement a so-called ping-pong technique in XNA: you basically have two RenderTarget2Ds A and B, and at each iteration you use one as texture and the other as target - and vice versa - for a quad rendered through an HLSL pixel shader.

        step1: A--PS--B
        step2: B--PS--A
        step3: A--PS--B
        ...

    In my setup, both RenderTargets are SurfaceFormat.Single. In my .fx file, I have a technique to do the update, and another to render the "current buffer" to the screen. Before starting the "ping-pong", buffer A is filled with test data with the SetData<float>(float[]) function: this seems to work properly, because if I render a quad on the screen through the "Draw" pixel shader, I do see the test data being correctly rendered. However, if I do update buffer B, something does not function properly and the next rendering to screen will be all black. For debug purposes, I replaced the "Update" HLSL pixel shader with one that should simply copy buffer A into B (or B into A, depending on which among the "ping" and "pong" phases we are in...). From some examples I found on the net, I see that in order to correctly fetch a float value from a texture sampler in HLSL code, I should only need to care about the red channel. So, basically, the debug "Update" HLSL function is:

        float4 ComputePS(float2 inPos : TEXCOORD0) : COLOR0
        {
            float v1 = tex2D(bufSampler, inPos.xy).r;
            return float4(v1, 0, 0, 1);
        }

    which still doesn't work and results in an all-zeroes output. Here's the "Draw" function that seems to properly display the initial data:

        float4 DrawPS(float2 inPos : TEXCOORD0) : COLOR0
        {
            float v1 = tex2D(bufSampler, inPos.xy).r;
            return float4(v1, v1, v1, 1);
        }

    Now: playing around with HLSL doesn't change anything, so maybe I'm missing something on the C# side of this, so here's the infamous Update() function:

        _effect.Parameters["bufTexture"].SetValue(buf[_currentBuf]);
        _graphicsDevice.SetRenderTarget(buf[1 - _currentBuf]);
        _graphicsDevice.Clear(Color.Black); // probably not needed since RenderTargetUsage is DiscardContents
        _effect.CurrentTechnique = _computeTechnique;
        _computeTechnique.Passes[0].Apply();
        _quadRender.Render();
        _graphicsDevice.SetRenderTarget(null);
        _currentBuf = 1 - _currentBuf;

    Any clue?

    Read the article

  • Driver error when using multiple shaders

    - by Jinxi
    I'm using 3 different shaders: a tessellation shader to use the tessellation feature of DirectX11 :) a regular shader to show how it would look without tessellation and a text shader to display debug-info such as FPS, model count etc. All of these shaders are initialized at the beginning. Using the keyboard, I can switch between the tessellation shader and regular shader to render the scene. Additionally, I also want to be able toggle the display of debug-info using the text shader. Since implementing the tessellation shader the text shader doesn't work anymore. When I activate the DebugText (rendered using the text-shader) my screens go black for a while, and Windows displays the following message: Display Driver stopped responding and has recovered This happens with either of the two shaders used to render the scene. Additionally: I can start the application using the regular shader to render the scene and then switch to the tessellation shader. If I try to switch back to the regular shader I get the same error as with the text shader. What am I doing wrong when switching between shaders? What am I doing wrong when displaying text at the same time? What file can I post to help you help me? :) thx P.S. I already checked if my keyinputs interrupt at the wrong time (during render or so..), but that seems to be ok Testing Procedure Regular Shader without text shader Add text shader to Regular Shader by keyinput (works now, I built the text shader back to only vertex and pixel shader) (somthing with the z buffer is stil wrong...) Remove text shader, then change shader to Tessellation Shader by key input Then if I add the Text Shader or switch back to the Regular Shader Switching/Render Shader Here the code snipet from the Renderer.cpp where I choose the Shader according to the boolean "m_useTessellationShader": if(m_useTessellationShader) { // Render the model using the tesselation shader ecResult = m_ShaderManager->renderTessellationShader(m_D3D->getDeviceContext(), meshes[lod_level]->getIndexCount(), worldMatrix, viewMatrix, projectionMatrix, textures, texturecount, m_Light->getDirection(), m_Light->getAmbientColor(), m_Light->getDiffuseColor(), (D3DXVECTOR3)m_Camera->getPosition(), TESSELLATION_AMOUNT); } else { // todo: loaded model depends on distance to camera // Render the model using the light shader. ecResult = m_ShaderManager->renderShader(m_D3D->getDeviceContext(), meshes[lod_level]->getIndexCount(), lod_level, textures, texturecount, m_Light->getDirection(), m_Light->getAmbientColor(), m_Light->getDiffuseColor(), worldMatrix, viewMatrix, projectionMatrix); } And here the code snipet from the Mesh.cpp where I choose the Typology according to the boolean "useTessellationShader": // RenderBuffers is called from the Render function. The purpose of this function is to set the vertex buffer and index buffer as active on the input assembler in the GPU. Once the GPU has an active vertex buffer it can then use the shader to render that buffer. void Mesh::renderBuffers(ID3D11DeviceContext* deviceContext, bool useTessellationShader) { unsigned int stride; unsigned int offset; // Set vertex buffer stride and offset. stride = sizeof(VertexType); offset = 0; // Set the vertex buffer to active in the input assembler so it can be rendered. deviceContext->IASetVertexBuffers(0, 1, &m_vertexBuffer, &stride, &offset); // Set the index buffer to active in the input assembler so it can be rendered. 
deviceContext->IASetIndexBuffer(m_indexBuffer, DXGI_FORMAT_R32_UINT, 0); // Check which Shader is used to set the appropriate Topology // Set the type of primitive that should be rendered from this vertex buffer, in this case triangles. if(useTessellationShader) { deviceContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_3_CONTROL_POINT_PATCHLIST); }else{ deviceContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST); } return; } RenderShader Could there be a problem using sometimes only vertex and pixel shader and after switching using vertex, hull, domain and pixel shader? Here a little overview of my architecture: TextClass: uses font.vs and font.ps deviceContext-VSSetShader(m_vertexShader, NULL, 0); deviceContext-PSSetShader(m_pixelShader, NULL, 0); deviceContext-PSSetSamplers(0, 1, &m_sampleState); RegularShader: uses vertex.vs and pixel.ps deviceContext-VSSetShader(m_vertexShader, NULL, 0); deviceContext-PSSetShader(m_pixelShader, NULL, 0); deviceContext-PSSetSamplers(0, 1, &m_sampleState); TessellationShader: uses tessellation.vs, tessellation.hs, tessellation.ds, tessellation.ps deviceContext-VSSetShader(m_vertexShader, NULL, 0); deviceContext-HSSetShader(m_hullShader, NULL, 0); deviceContext-DSSetShader(m_domainShader, NULL, 0); deviceContext-PSSetShader(m_pixelShader, NULL, 0); deviceContext-PSSetSamplers(0, 1, &m_sampleState); ClearState I'd like to switch between 2 shaders and it seems they have different context parameters, right? In clearstate methode it says it resets following params to NULL: I found following in my Direct3D Class: depth-stencil state - m_deviceContext-OMSetDepthStencilState rasterizer state - m_deviceContext-RSSetState(m_rasterState); blend state - m_device-CreateBlendState viewports - m_deviceContext-RSSetViewports(1, &viewport); I found following in every Shader Class: input/output resource slots - deviceContext-PSSetShaderResources shaders - deviceContext-VSSetShader to - deviceContext-PSSetShader input layouts - device-CreateInputLayout sampler state - device-CreateSamplerState These two I didn't understand, where can I find them? predications - ? scissor rectangles - ? Do I need to store them all localy so I can switch between them, because it doesn't feel right to reinitialize the Direct3d and the Shaders by every switch (key input)?!

    Read the article

  • How to code a 4x shader/filter which emulates arcade crt display behavior?

    - by Arthur Wulf White
    I want to write a shader/filter, probably in Adobe Pixel Bender, that will do the best job possible of emulating the feel of an oldskool monochromatic arcade CRT screen. Much like this here: http://filthypants.blogspot.com/2012/07/customizing-cgwgs-crt-pixel-shader.html Here are some attributes I know will exist in this filter:

        - It will take in a low-res image (160 x 120) and return a medium-res image (640 x 480).
        - It will add scanlines.
        - It will blur the color channels to create that color-bleeding effect.
        - It will distort the shape of the image from a perfect rectangle into a rounder shape.

    The question is: could you please provide any other attributes that are beneficial to emulating an arcade CRT feel, and links and resources on coding these effects. Thanks
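    Before porting anything to Pixel Bender, the first three attributes in that list are easy to prototype on the CPU to tune the look. A rough NumPy sketch (the barrel distortion from the last attribute is left out, and the weights are arbitrary starting points):

        import numpy as np

        def crt_upscale(img, scale=4, scanline_dark=0.35, bleed=1.0):
            """img: (H, W, 3) float array in [0, 1]; returns a (H*scale, W*scale, 3) array."""
            # 1. nearest-neighbour 4x upscale keeps the chunky low-res pixels
            big = img.repeat(scale, axis=0).repeat(scale, axis=1)
            # 2. crude horizontal colour bleed: blend each pixel with its left/right neighbours
            out = big.copy()
            out[:, 1:-1] = (big[:, :-2] * bleed + big[:, 1:-1] * 2 + big[:, 2:] * bleed) / (2 + 2 * bleed)
            # 3. darken every other output row to fake scanlines
            out[::2] *= (1.0 - scanline_dark)
            return np.clip(out, 0.0, 1.0)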

    Read the article
