Search Results

Search found 1598 results on 64 pages for 'pixel bender'.

Page 20/64 | < Previous Page | 16 17 18 19 20 21 22 23 24 25 26 27  | Next Page >

  • Use native HBitmap in C# while preserving alpha channel/transparency. Please check this code, it works on my computer...

    - by David
    Let's say I get an HBITMAP object/handle from a native Windows function. I can convert it to a managed bitmap using Bitmap.FromHbitmap(nativeHBitmap), but if the native image has transparency information (alpha channel), it is lost by this conversion. There are a few questions on Stack Overflow regarding this issue. Using information from the first answer of this question (How to draw ARGB bitmap using GDI+?), I wrote a piece of code that I've tried and it works. It basically gets the native HBitmap width, height and the pointer to the location of the pixel data using GetObject and the BITMAP structure, and then calls the managed Bitmap constructor: Bitmap managedBitmap = new Bitmap(bitmapStruct.bmWidth, bitmapStruct.bmHeight, bitmapStruct.bmWidth * 4, PixelFormat.Format32bppArgb, bitmapStruct.bmBits); As I understand it (please correct me if I'm wrong), this does not copy the actual pixel data from the native HBitmap to the managed bitmap; it simply points the managed bitmap to the pixel data from the native HBitmap. And I don't draw the bitmap here on another Graphics (DC) or on another bitmap, to avoid unnecessary memory copying, especially for large bitmaps. I can simply assign this bitmap to a PictureBox control or the Form BackgroundImage property. And it works, the bitmap is displayed correctly, using transparency. When I no longer use the bitmap, I make sure the BackgroundImage property is no longer pointing to the bitmap, and I dispose both the managed bitmap and the native HBitmap. The Question: Can you tell me if this reasoning and code seem correct? I hope I will not get any unexpected behaviors or errors. And I hope I'm freeing all the memory and objects correctly. private void Example() { IntPtr nativeHBitmap = IntPtr.Zero; /* Get the native HBitmap object from a Windows function here */ // Create the BITMAP structure and get info from our nativeHBitmap NativeMethods.BITMAP bitmapStruct = new NativeMethods.BITMAP(); NativeMethods.GetObjectBitmap(nativeHBitmap, Marshal.SizeOf(bitmapStruct), ref bitmapStruct); // Create the managed bitmap using the pointer to the pixel data of the native HBitmap Bitmap managedBitmap = new Bitmap( bitmapStruct.bmWidth, bitmapStruct.bmHeight, bitmapStruct.bmWidth * 4, PixelFormat.Format32bppArgb, bitmapStruct.bmBits); // Show the bitmap this.BackgroundImage = managedBitmap; /* Run the program, use the image */ MessageBox.Show("running..."); // When the image is no longer needed, dispose both the managed Bitmap object and the native HBitmap this.BackgroundImage = null; managedBitmap.Dispose(); NativeMethods.DeleteObject(nativeHBitmap); } internal static class NativeMethods { [StructLayout(LayoutKind.Sequential)] public struct BITMAP { public int bmType; public int bmWidth; public int bmHeight; public int bmWidthBytes; public ushort bmPlanes; public ushort bmBitsPixel; public IntPtr bmBits; } [DllImport("gdi32", CharSet = CharSet.Auto, EntryPoint = "GetObject")] public static extern int GetObjectBitmap(IntPtr hObject, int nCount, ref BITMAP lpObject); [DllImport("gdi32.dll")] internal static extern bool DeleteObject(IntPtr hObject); }

    Read the article

  • How can I get image data from QTKit without color or gamma correction in Snow Leopard?

    - by Nick Haddad
    Since Snow Leopard, QTKit is now returning color corrected image data from functions like QTMovie's frameImageAtTime:withAttributes:error:. Given an uncompressed AVI file, the same image data is displayed with larger pixel values in Snow Leopard vs. Leopard. Currently I'm using frameImageAtTime to get an NSImage, then asking for the tiffRepresentation of that image. After doing this, pixel values are slightly higher in Snow Leopard. For example, a file with the following pixel value in Leopard: [0 180 0] now has a pixel value like: [0 192 0] Is there any way to ask a QTMovie for video frames that are not color corrected? Should I be asking for a CGImageRef, CIImage, or CVPixelBufferRef instead? Is there a way to disable color correction altogether prior to reading in the video files? I've attempted to work around this issue by drawing into an NSBitmapImageRep with NSCalibratedRGBColorSpace, but that only gets me part of the way there: // Create a movie NSDictionary *dict = [NSDictionary dictionaryWithObjectsAndKeys : nsFileName, QTMovieFileNameAttribute, [NSNumber numberWithBool:NO], QTMovieOpenAsyncOKAttribute, [NSNumber numberWithBool:NO], QTMovieLoopsAttribute, [NSNumber numberWithBool:NO], QTMovieLoopsBackAndForthAttribute, (id)nil]; _theMovie = [[QTMovie alloc] initWithAttributes:dict error:&error]; // .... NSMutableDictionary *imageAttributes = [NSMutableDictionary dictionary]; [imageAttributes setObject:QTMovieFrameImageTypeNSImage forKey:QTMovieFrameImageType]; [imageAttributes setObject:[NSArray arrayWithObject:@"NSBitmapImageRep"] forKey: QTMovieFrameImageRepresentationsType]; [imageAttributes setObject:[NSNumber numberWithBool:YES] forKey:QTMovieFrameImageHighQuality]; NSError* err = nil; NSImage* image = (NSImage*)[_theMovie frameImageAtTime:frameTime withAttributes:imageAttributes error:&err]; // copy NSImage into an NSBitmapImageRep (Objective-C) NSBitmapImageRep* bitmap = [[image representations] objectAtIndex:0]; // Draw into a colorspace we know about NSBitmapImageRep *bitmapWhoseFormatIKnow = [[NSBitmapImageRep alloc] initWithBitmapDataPlanes:NULL pixelsWide:getWidth() pixelsHigh:getHeight() bitsPerSample:8 samplesPerPixel:4 hasAlpha:YES isPlanar:NO colorSpaceName:NSCalibratedRGBColorSpace bitmapFormat:0 bytesPerRow:(getWidth() * 4) bitsPerPixel:32]; [NSGraphicsContext saveGraphicsState]; [NSGraphicsContext setCurrentContext:[NSGraphicsContext graphicsContextWithBitmapImageRep:bitmapWhoseFormatIKnow]]; [bitmap draw]; [NSGraphicsContext restoreGraphicsState]; This does convert back to a 'non color corrected' colorspace, but the color values are NOT exactly the same as what is stored in the uncompressed AVI files we are testing with. Also, this is much less efficient because it is converting from RGB -> "Device RGB" -> RGB. Also, I am working in a 64-bit application, so dropping down to the QuickTime C API is not an option. Thanks for your help.

    Read the article

  • AS3 Transition Manager problem

    - by Mirko
    I am using the TransitionManager class to pixel-dissolve an image in an XML-driven image gallery. It always stops halfway through the animation... I hate Adobe's tween engines; I have always used TweenMax without (almost) any problems, but I would like to have the pixel dissolve effect. var myTM:TransitionManager = new TransitionManager(container_mc); myTM.addEventListener("allTransitionsOutDone",swapContent); myTM.startTransition({ type:PixelDissolve, direction:Transition.OUT, duration:1, easing:None.easeOut, xSections:200, ySections:200 }); Any suggestions?

    Read the article

  • Performance Optimization for Matrix Rotation

    - by Summer_More_More_Tea
    Hello everyone: I'm currently stuck on a performance optimization lab from the book "Computer Systems: A Programmer's Perspective", described as follows: In an N*N matrix M, where N is a multiple of 32, the rotate operation can be represented as: Transpose: interchange elements M(i,j) and M(j,i) Exchange rows: Row i is exchanged with row N-1-i An example of matrix rotation (N is 3 instead of 32 for simplicity): ------- ------- |1|2|3| |3|6|9| ------- ------- |4|5|6| after rotate is |2|5|8| ------- ------- |7|8|9| |1|4|7| ------- ------- A naive implementation is: #define RIDX(i,j,n) ((i)*(n)+(j)) void naive_rotate(int dim, pixel *src, pixel *dst) { int i, j; for (i = 0; i < dim; i++) for (j = 0; j < dim; j++) dst[RIDX(dim-1-j, i, dim)] = src[RIDX(i, j, dim)]; } I came up with an idea based on inner-loop unrolling. The results are: Code Version Speed Up original 1x unrolled by 2 1.33x unrolled by 4 1.33x unrolled by 8 1.55x unrolled by 16 1.67x unrolled by 32 1.61x I also got a code snippet from pastebin.com that seems to solve this problem: void rotate(int dim, pixel *src, pixel *dst) { int stride = 32; int count = dim >> 5; src += dim - 1; int a1 = count; do { int a2 = dim; do { int a3 = stride; do { *dst++ = *src; src += dim; } while(--a3); src -= dim * stride + 1; dst += dim - stride; } while(--a2); src += dim * (stride + 1); dst -= dim * dim - stride; } while(--a1); } After carefully reading the code, I think the main idea of this solution is to treat each block of 32 rows as a data zone and perform the rotation on each zone in turn. The speed-up of this version is 1.85x, beating all of the loop-unrolled versions. Here are the questions: In the inner-loop-unroll version, why does the improvement level off as the unrolling factor increases? In particular, changing the unrolling factor from 8 to 16 does not have the same effect as switching from 4 to 8. Does the result have some relationship with the depth of the CPU pipeline? If the answer is yes, could the diminishing improvement reflect the pipeline length? What is the probable reason for the speed-up of the data-zone version? It seems that there is not much essential difference from the original naive version. EDIT: My test environment is an Intel Centrino Duo processor and the version of gcc is 4.4. Any advice will be highly appreciated! Kind regards!

    Read the article

  • How to tile a 30000 x 6000 image for a 480 x 320 screen?

    - by Horace Ho
    (This is related to another question about implementation on the iPhone.) I have a large image, around 30000 (w) x 6000 (h) pixels. You can think of it as a big map. I assume I need to crop it up into smaller tiles. Question: what is the tiling strategy? Requirements: the whole image (though cropped) can be scrolled up/down/left/right by swipes; zoom in (up to pixel-to-pixel) and out (down to fit-screen-by-height) with the two-finger pinch gesture; memory efficiency by lazily loading tiles. Thanks!
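
    A common strategy, sketched below under stated assumptions: pre-cut the source into fixed-size tiles (256x256 is typical, one set per zoom level) and load only the tiles that intersect the visible rect. This minimal Python/Pillow sketch covers only the offline cutting step; the file names are hypothetical, and the on-device lazy loading and zoom pyramid are not shown.

        from PIL import Image

        TILE = 256                               # tile edge in pixels
        Image.MAX_IMAGE_PIXELS = None            # the 30000 x 6000 source exceeds Pillow's default limit

        src = Image.open("big_map.png")          # hypothetical source file
        cols = (src.width + TILE - 1) // TILE    # tiles per row
        rows = (src.height + TILE - 1) // TILE   # tiles per column

        for row in range(rows):
            for col in range(cols):
                box = (col * TILE, row * TILE,
                       min((col + 1) * TILE, src.width),
                       min((row + 1) * TILE, src.height))
                src.crop(box).save("tile_%d_%d.png" % (col, row))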

    Read the article

  • Why do I get rows of zeros in my 2D fft?

    - by Nicholas Pringle
    I am trying to replicate the results from a paper. "Two-dimensional Fourier Transform (2D-FT) in space and time along sections of constant latitude (east-west) and longitude (north-south) were used to characterize the spectrum of the simulated flux variability south of 40degS." - Lenton et al. (2006) The figures published show "the log of the variance of the 2D-FT". I have tried to create an array consisting of the seasonal cycle of similar data as well as the noise. I have defined the noise as the original array minus the signal array. Here is the code that I used to plot the 2D-FT of the signal array averaged in latitude: import numpy as np from numpy import ma from matplotlib import pyplot as plt from Scientific.IO.NetCDF import NetCDFFile ### input directory indir = '/home/nicholas/data/' ### get the flux data which is in ### [time(5day ave for 10 years),latitude,longitude] nc = NetCDFFile(indir + 'CFLX_2000_2009.nc','r') cflux_southern_ocean = nc.variables['Cflx'][:,10:50,:] cflux_southern_ocean = ma.masked_values(cflux_southern_ocean,1e+20) # mask land nc.close() cflux = cflux_southern_ocean*1e08 # change units of data from mmol/m^2/s ### create an array that consists of the seasonal signal for each pixel year_stack = np.split(cflux, 10, axis=0) year_stack = np.array(year_stack) signal_array = np.tile(np.mean(year_stack, axis=0), (10, 1, 1)) signal_array = ma.masked_where(signal_array > 1e20, signal_array) # need to mask ### average the array over latitude(or longitude) signal_time_lon = ma.mean(signal_array, axis=1) ### do a 2D Fourier Transform of the time/space image ft = np.fft.fft2(signal_time_lon) mgft = np.abs(ft) ps = mgft**2 log_ps = np.log(mgft) log_mgft= np.log(mgft) Every second row of the ft consists completely of zeros. Why is this? Would it be acceptable to add a small random number to the signal to avoid this? signal_time_lon = signal_time_lon + np.random.randint(0,9,size=(730, 182))*1e-05 EDIT: Adding images and clarifying the meaning. The output of rfft2 still appears to be a complex array. Using fftshift shifts the edges of the image to the centre; I still have a power spectrum regardless. I expect that the reason that I get rows of zeros is that I have re-created the timeseries for each pixel. The ft[0, 0] pixel contains the mean of the signal. So ft[1, 0] corresponds to a sinusoid with one cycle over the entire signal in the rows of the starting image. Here is the starting image, produced with the following code: plt.pcolormesh(signal_time_lon); plt.colorbar(); plt.axis('tight') Here is the result, produced with the following code: ft = np.fft.rfft2(signal_time_lon) mgft = np.abs(ft) ps = mgft**2 log_ps = np.log1p(mgft) plt.pcolormesh(log_ps); plt.colorbar(); plt.axis('tight') It may not be clear in the image, but it is only every second row that consists completely of zeros. Every tenth pixel (log_ps[10, 0]) is a high value. The other pixels (log_ps[2, 0], log_ps[4, 0] etc) have very low values.
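
    A short aside on why exact zeros are expected here (a sketch with synthetic data, not the NetCDF input above): because signal_array is the same mean seasonal cycle tiled ten times, the DFT of the repeated series can only be non-zero at frequency indices that are multiples of the repeat count; every other coefficient cancels exactly, up to floating-point round-off.

        import numpy as np

        year = np.random.randn(73)         # hypothetical seasonal tile: 73 five-day means per year
        signal = np.tile(year, 10)         # 730 samples = 10 identical years

        spectrum = np.abs(np.fft.fft(signal))
        print(spectrum[:12].round(6))      # only indices 0 and 10 are non-zero in this range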

    Read the article

  • Scraping data from Flash (Games)

    - by awegawef
    I saw this video, and I am really curious how it was performed. Does anyone have any ideas? My intuition is that he scraped pixels from the screen (one per 'box'), and then fed that into some program to determine the next move. Is scraping pixel-by-pixel the way to do this, or is there a better way? I am looking to do something similar with either Java or Python. Thanks
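
    If Python is an option, one way to sample one pixel per 'box' is a plain screen grab; this is only a sketch, and the board origin and cell size are hypothetical values that would have to be measured for the actual game window.

        from PIL import ImageGrab

        ORIGIN_X, ORIGIN_Y = 100, 200        # hypothetical top-left corner of the board on screen
        CELL = 24                            # hypothetical cell size in pixels

        screen = ImageGrab.grab()            # full-screen capture (Windows/macOS)
        board = [[screen.getpixel((ORIGIN_X + col * CELL, ORIGIN_Y + row * CELL))
                  for col in range(10)]
                 for row in range(10)]       # one sampled colour per cell, ready for the game logic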

    Read the article

  • Using fft2 with reshaping for an RGB filter

    - by Mahmoud Aladdin
    I want to apply a filter to an image, for example the blurring filter [[1/9.0, 1/9.0, 1/9.0], [1/9.0, 1/9.0, 1/9.0], [1/9.0, 1/9.0, 1/9.0]]. Also, I'd like to use the approach that convolution in the spatial domain is equivalent to multiplication in the frequency domain. So, my algorithm will be: load the image; create the filter; convert both the filter and the image to the frequency domain; multiply both; reconvert the output to the spatial domain, and that should be the required output. The following is the basic code I use; the image is loaded and displayed as a cv.cvmat object. Image is a class of my creation: it has a member image which is an object of scipy.matrix, toFrequencyDomain(size = None) uses spf.fftshift(spf.fft2(self.image, size)) where spf is scipy.fftpack, and dotMultiply(img) uses scipy.multiply(self.image, image). f = Image.fromMatrix([[1/9.0, 1/9.0, 1/9.0], [1/9.0, 1/9.0, 1/9.0], [1/9.0, 1/9.0, 1/9.0]]) lena = Image.fromFile("Test/images/lena.jpg") print lena.image.shape lenaf = lena.toFrequencyDomain(lena.image.shape) ff = f.toFrequencyDomain(lena.image.shape) lenafm = lenaf.dotMultiplyImage(ff) lenaff = lenafm.toTimeDomain() lena.display() lenaff.display() So, the previous code works pretty well if I tell OpenCV to load the image as grayscale. However, if I let the image be loaded in color, lena.image.shape will be (512, 512, 3), so it gives me an error when using scipy.fftpack.fft2 saying "When given, Shape and Axes should be of same length". What I tried next was converting my filter to 3-D, as [[[1/9.0, 1/9.0, 1/9.0], [1/9.0, 1/9.0, 1/9.0], [1/9.0, 1/9.0, 1/9.0]], [[1/9.0, 1/9.0, 1/9.0], [1/9.0, 1/9.0, 1/9.0], [1/9.0, 1/9.0, 1/9.0]], [[1/9.0, 1/9.0, 1/9.0], [1/9.0, 1/9.0, 1/9.0], [1/9.0, 1/9.0, 1/9.0]]] And, not knowing what the axes argument does, I tried it with various values such as (-2, -1, -1), (-1, -1, -2), etc., until it gave me the correct filter output shape for the dotMultiply to work. But, of course, the values weren't correct; things were totally worse. My final attempt was using the fft2 function on each of the component 2-D matrices, and then re-assembling the 3-D one, using the following code. # Splitting the 3-D matrix into three 2-D matrices. for i, row in enumerate(self.image): r.append(list()) g.append(list()) b.append(list()) for pixel in row: r[i].append(pixel[0]) g[i].append(pixel[1]) b[i].append(pixel[2]) rfft = spf.fftshift(spf.fft2(r, size)) gfft = spf.fftshift(spf.fft2(g, size)) bfft = spf.fftshift(spf.fft2(b, size)) newImage.image = sp.asarray([[[rfft[i][j], gfft[i][j], bfft[i][j]] for j in xrange(len(rfft[i]))] for i in xrange(len(rfft))] ) return newImage Any help on what I did wrong, or how I can achieve this for both grayscale and coloured pictures, would be appreciated.
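
    For what it's worth, numpy's fft2 (and scipy.fftpack's) accepts an axes argument, so a colour image can be transformed channel-by-channel in one call by restricting the transform to the first two axes; the filter only needs to be zero-padded to the image size. A minimal sketch with plain numpy arrays (not the Image class above), assuming a float (H, W, 3) image and circular boundary handling:

        import numpy as np

        def blur_rgb(img):
            """Frequency-domain 3x3 box blur of a float (H, W, 3) image."""
            h, w = img.shape[:2]
            kernel = np.full((3, 3), 1 / 9.0)
            padded = np.zeros((h, w))
            padded[:3, :3] = kernel
            padded = np.roll(np.roll(padded, -1, axis=0), -1, axis=1)   # centre the kernel on (0, 0)

            kf = np.fft.fft2(padded)                                    # (H, W) kernel spectrum
            imf = np.fft.fft2(img, axes=(0, 1))                         # 2-D FFT of every channel at once
            return np.fft.ifft2(imf * kf[:, :, None], axes=(0, 1)).real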

    Read the article

  • Is it necessary for a website to look the same in all browsers?

    - by metal-gear-solid
    Is it necessary for a website to look the same in all browsers? Are some clients mad? Isn't the demand for pixel perfection (an inch-to-inch match with the design) madness? Isn't asking for IE6 pixel perfection, as of now, madness? Is it a sin to use JavaScript for visual enhancement, for example to enable border radius in IE using JavaScript and via CSS in other browsers?

    Read the article

  • How do I create a simple video editing filter for DirectShow or DMO?

    - by Ole Jak
    How do I create a simple video editing filter for DirectShow or DMO? What I need is simple - a tutorial or tutorials on how to create a simple filter (like a brightness/contrast adjustment filter, or any other per-pixel kind of filter) for filtering a DirectShow video stream (so I want to have a graph like "my web camera" -> "my Photoshop-like filter" -> "rendering (or saving to file)").

    Read the article

  • ImageMagick, text arc

    - by manu
    system(' convert -size 320x100 xc:lightblue -font Courier -pointsize 72 \ -fill navy -annotate +25+65 \'Ernakulam1\' \ -virtual-pixel transparent -distort arc 120 \ -bordercolor lightblue font_arcnew.jpg'); This code is not working - for example, arcing the text. Mainly this part is not working: -virtual-pixel transparent -distort arc 120

    Read the article

  • GDI+: Set all pixels to given color

    - by Charles
    What is the best way to set the RGB components of every pixel in a System.Drawing.Bitmap to a single, solid color? If possible, I'd like to avoid manually looping through each pixel to do this. Note: I want to keep the same alpha component from the original bitmap. I only want to change the RGB values. I looked into using a ColorMatrix or ColorMap, but I couldn't find any way to set all pixels to a specific given color with either approach.

    Read the article

  • How do I create a simple video effect filter for DirectShow or DMO?

    - by Ole Jak
    How do I create a simple video effect filter for DirectShow or DMO? What I need is simple - a tutorial or tutorials on how to create a simple filter (like a brightness/contrast adjustment filter, or any other per-pixel kind of filter) for filtering a DirectShow video stream (so I want to have a graph like "my web camera" -> "my Photoshop-like filter" -> "rendering (or saving to file)").

    Read the article

  • Cookie not getting tracked - why?

    - by user158721
    Hi, on one domain I use a command like: setcookie( "cookiename", "cookievalue", expires after two days, "/", "domain1.com" ); On another domain I used that URL as a pixel code; the URL is not able to read the cookie there, but the same URL is able to read the cookie when it is called directly. When I put that URL as a pixel code on the other domain, it is not able to read the value. What might be the problem? Best Regards, Satish Kalepu

    Read the article

  • jQuery: snapped scrolling - possible?

    - by Fuxi
    Hi all, I have a scrollable table with a fixed header. Would it be possible to have "snapped scrolling" on the scrollbar - meaning the table rows won't scroll pixel by pixel but snap according to the row height, for better viewing? Thanks.

    Read the article

  • Center image over background color taken from corner

    - by joebert
    I'm looking for a way to take, say, a 200x200 pixel image and center it over a background that's 500x500 pixels. The background should be the color of the top-left corner of the 200x200 pixel image. I get the -gravity and -fill flags, but I'm having trouble finding a way to grab that top-left corner's color to pass to the -fill flag.
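
    As a language-neutral illustration of the two steps (sample the corner, then fill and centre), here is a small Pillow sketch with hypothetical file names; with ImageMagick itself the corner colour can usually be read with the %[pixel:p{0,0}] format escape and fed to -fill.

        from PIL import Image

        img = Image.open("small.png")                       # hypothetical 200x200 input
        corner = img.getpixel((0, 0))                       # colour of the top-left pixel
        canvas = Image.new(img.mode, (500, 500), corner)    # background filled with that colour
        canvas.paste(img, ((500 - img.width) // 2,
                           (500 - img.height) // 2))        # centre the small image
        canvas.save("centered.png")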

    Read the article

  • matplotlib equivalent for MATLAB's truesize()

    - by Lucas
    I am new to matplotlib and python and would like to display an image so that 1 pixel of the image is actually represented by 1 pixel in the figure. In MATLAB, this is achieved with the command truesize(). How can I do this in Python? I tried playing around with the imshow() arguments as well as set_dpi() and set_figwidth()/set_figheight(), but with no luck. Thanks.
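
    A workaround that is often suggested, sketched here with a synthetic image: size the figure from the pixel dimensions and a chosen dpi, then add an axes spanning the whole figure so no margins or borders consume pixels, and draw with interpolation='nearest'. (If saving to file, pass the same dpi to savefig.)

        import numpy as np
        import matplotlib.pyplot as plt

        img = np.random.rand(300, 400)              # hypothetical image, height x width
        dpi = 80.0                                  # any value works if used consistently
        h, w = img.shape[:2]

        fig = plt.figure(figsize=(w / dpi, h / dpi), dpi=dpi)
        ax = fig.add_axes([0, 0, 1, 1])             # axes filling the whole figure, no margins
        ax.imshow(img, interpolation='nearest')     # one image pixel per figure pixel
        ax.set_axis_off()
        plt.show()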

    Read the article

  • Get all Vector2 Points Within Radius

    - by CheapDevotion
    I am looking for a formula that will give me all of the Vector2 Points within a certain radius given the center. Essentially what I am trying to do is change the color of each pixel in a 256 x 256 texture that is within a certain radius from a specific pixel (Using the Unity3d Game Engine). Programming Language doesn't really matter, as I can probably convert it to something I can use.
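
    Rather than a closed-form list, the usual approach is to scan only the circle's bounding square and keep the points whose squared distance from the centre is within radius squared. A small Python sketch of the idea (straightforward to port to C#, e.g. looping over Texture2D.GetPixel/SetPixel in Unity):

        def points_in_radius(cx, cy, radius, width=256, height=256):
            """All integer (x, y) positions within `radius` of (cx, cy), clipped to the texture."""
            r2 = radius * radius
            points = []
            for y in range(max(0, int(cy - radius)), min(height, int(cy + radius) + 1)):
                for x in range(max(0, int(cx - radius)), min(width, int(cx + radius) + 1)):
                    if (x - cx) ** 2 + (y - cy) ** 2 <= r2:   # squared distance avoids a sqrt
                        points.append((x, y))
            return points

        # Example: every pixel within 10 px of (128, 128)
        # pts = points_in_radius(128, 128, 10)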

    Read the article

  • Deferred rendering with VSM - Scaling light depth loses moments

    - by user1423893
    I'm calculating my shadow term using a VSM method. This works correctly when using forward rendered lights but fails with deferred lights. // Shadow term (1 = no shadow) float shadow = 1; // [Light Space -> Shadow Map Space] // Transform the surface into light space and project // NB: Could be done in the vertex shader, but doing it here keeps the // "light shader" abstraction and doesn't limit the number of shadowed lights float4x4 LightViewProjection = mul(LightView, LightProjection); float4 surf_tex = mul(position, LightViewProjection); // Re-homogenize // 'w' component is not used in later calculations so no need to homogenize (it will equal '1' if homogenized) surf_tex.xyz /= surf_tex.w; // Rescale viewport to be [0,1] (texture coordinate system) float2 shadow_tex; shadow_tex.x = surf_tex.x * 0.5f + 0.5f; shadow_tex.y = -surf_tex.y * 0.5f + 0.5f; // Half texel offset //shadow_tex += (0.5 / 512); // Scaled distance to light (instead of 'surf_tex.z') float rescaled_dist_to_light = dist_to_light / LightAttenuation.y; //float rescaled_dist_to_light = surf_tex.z; // [Variance Shadow Map Depth Calculation] // No filtering float2 moments = tex2D(ShadowSampler, shadow_tex).xy; // Flip the moments values to bring them back to their original values moments.x = 1.0 - moments.x; moments.y = 1.0 - moments.y; // Compute variance float E_x2 = moments.y; float Ex_2 = moments.x * moments.x; float variance = E_x2 - Ex_2; variance = max(variance, Bias.y); // Surface is fully lit if the current pixel is before the light occluder (lit_factor == 1) // One-tailed inequality valid if float lit_factor = (rescaled_dist_to_light <= moments.x - Bias.x); // Compute probabilistic upper bound (mean distance) float m_d = moments.x - rescaled_dist_to_light; // Chebychev's inequality float p = variance / (variance + m_d * m_d); p = ReduceLightBleeding(p, Bias.z); // Adjust the light color based on the shadow attenuation shadow *= max(lit_factor, p); This is what I know for certain so far: The lighting is correct if I do not try and calculate the shadow term. (No shadows) The shadow term is correct when calculated using forward rendered lighting. (VSM works with forward rendered lights) With the current rescaled light distance (lightAttenuation.y is the far plane value): float rescaled_dist_to_light = dist_to_light / LightAttenuation.y; The light is correct and the shadow appears to be zoomed in and misses the blurring: When I do not rescale the light and use the homogenized 'surf_tex': float rescaled_dist_to_light = surf_tex.z; the shadows are blurred correctly but the lighting is incorrect and the cube model is no longer lit Why is scaling by the far plane value (LightAttenuation.y) zooming in too far? The only other factor involved is my world pixel position, which is calculated as follows: // [Position] float4 position; // [Screen Position] position.xy = input.PositionClone.xy; // Use 'x' and 'y' components already homogenized for uv coordinates above position.z = tex2D(DepthSampler, texCoord).r; // No need to homogenize 'z' component position.z = 1.0 - position.z; position.w = 1.0; // 1.0 = position.w / position.w // [World Position] position = mul(position, CameraViewProjectionInverse); // Re-homogenize position (xyz AND w, otherwise shadows will bend when camera is close) position.xyz /= position.w; position.w = 1.0; Using the inverse matrix of the camera's view x projection matrix does work for lighting but maybe it is incorrect for shadow calculation? 
EDIT: Light calculations for shadow including 'dist_to_light' // Work out the light position and direction in world space float3 light_position = float3(LightViewInverse._41, LightViewInverse._42, LightViewInverse._43); // Direction might need to be negated float3 light_direction = float3(-LightViewInverse._31, -LightViewInverse._32, -LightViewInverse._33); // Unnormalized light vector float3 dir_to_light = light_position - position; // Direction from vertex float dist_to_light = length(dir_to_light); // Normalise 'toLight' vector for lighting calculations dir_to_light = normalize(dir_to_light); EDIT2: These are the calculations for the moments (depth) //============================================= //---[Vertex Shaders]-------------------------- //============================================= DepthVSOutput depth_VS( float4 Position : POSITION, uniform float4x4 shadow_view, uniform float4x4 shadow_view_projection) { DepthVSOutput output = (DepthVSOutput)0; // First transform position into world space float4 position_world = mul(Position, World); output.position_screen = mul(position_world, shadow_view_projection); output.light_vec = mul(position_world, shadow_view).xyz; return output; } //============================================= //---[Pixel Shaders]--------------------------- //============================================= DepthPSOutput depth_PS(DepthVSOutput input) { DepthPSOutput output = (DepthPSOutput)0; // Work out the depth of this fragment from the light, normalized to [0, 1] float2 depth; depth.x = length(input.light_vec) / FarPlane; depth.y = depth.x * depth.x; // Flip depth values to avoid floating point inaccuracies depth.x = 1.0f - depth.x; depth.y = 1.0f - depth.y; output.depth = depth.xyxy; return output; } EDIT 3: I have tried the following: float4 pp; pp.xy = input.PositionClone.xy; // Use 'x' and 'y' components already homogenized for uv coordinates above pp.z = tex2D(DepthSampler, texCoord).r; // No need to homogenize 'z' component pp.z = 1.0 - pp.z; pp.w = 1.0; // 1.0 = position.w / position.w // Determine the depth of the pixel with respect to the light float4x4 LightViewProjection = mul(LightView, LightProjection); float4x4 matViewToLightViewProj = mul(CameraViewProjectionInverse, LightViewProjection); float4 vPositionLightCS = mul(pp, matViewToLightViewProj); float fLightDepth = vPositionLightCS.z / vPositionLightCS.w; // Transform from light space to shadow map texture space. float2 vShadowTexCoord = 0.5 * vPositionLightCS.xy / vPositionLightCS.w + float2(0.5f, 0.5f); vShadowTexCoord.y = 1.0f - vShadowTexCoord.y; // Offset the coordinate by half a texel so we sample it correctly vShadowTexCoord += (0.5f / 512); //g_vShadowMapSize This suffers the same problem as the second picture. I have tried storing the depth based on the view x projection matrix: output.position_screen = mul(position_world, shadow_view_projection); //output.light_vec = mul(position_world, shadow_view); output.light_vec = output.position_screen; depth.x = input.light_vec.z / input.light_vec.w; This gives a shadow that has lots of surface acne due to horrible floating point precision errors. Everything is lit correctly, though. EDIT 4: Found an OpenGL-based tutorial here. I have followed it to the letter, and it would seem that the uv coordinates for looking up the shadow map are incorrect. The source uses a scaled matrix to get the uv coordinates for the shadow map sampler: /// <summary> /// The scale matrix is used to push the projected vertex into the 0.0 - 1.0 region.
/// Similar in role to a * 0.5 + 0.5, where -1.0 < a < 1.0. /// <summary> const float4x4 ScaleMatrix = float4x4 ( 0.5, 0.0, 0.0, 0.0, 0.0, -0.5, 0.0, 0.0, 0.0, 0.0, 0.5, 0.0, 0.5, 0.5, 0.5, 1.0 ); I had to negate the 0.5 for the y scaling (M22) in order for it to work but the shadowing is still not correct. Is this really the correct way to scale? float2 shadow_tex; shadow_tex.x = surf_tex.x * 0.5f + 0.5f; shadow_tex.y = surf_tex.y * -0.5f + 0.5f; The depth calculations are exactly the same as the source code yet they still do not work, which makes me believe something about the uv calculation above is incorrect.

    Read the article

  • Fragment shaders on a texture

    - by Snowangelic
    Hello stack overflow. I am trying to add some post-processing capabilities to a program. The rendering is done using openGL. I just want to allow the program to load some home made fragment shader and use them on the video stream. I wrote a little piece of shader using "OpenGL Shader Builder" that just turns a texture in grayscale. The shaders works well in the shader builder but I can't make it work in the main program. The screens stays all black. Here is the setup : @implementation PluginGLView - (id) initWithCoder: (NSCoder *) coder { const GLubyte * strExt; if ((self = [super initWithCoder:coder]) == nil) return nil; glLock = [[NSLock alloc] init]; if (nil == glLock) { [self release]; return nil; } // Init pixel format attribs NSOpenGLPixelFormatAttribute attrs[] = { NSOpenGLPFAAccelerated, NSOpenGLPFANoRecovery, NSOpenGLPFADoubleBuffer, 0 }; // Get pixel format from OpenGL NSOpenGLPixelFormat* pixFmt = [[NSOpenGLPixelFormat alloc] initWithAttributes:attrs]; if (!pixFmt) { NSLog(@"No Accelerated OpenGL pixel format found\n"); NSOpenGLPixelFormatAttribute attrs2[] = { NSOpenGLPFANoRecovery, 0 }; // Get pixel format from OpenGL pixFmt = [[NSOpenGLPixelFormat alloc] initWithAttributes:attrs2]; if (!pixFmt) { NSLog(@"No OpenGL pixel format found!\n"); [self release]; return nil; } } [self setPixelFormat:[pixFmt autorelease]]; /* long swapInterval = 1 ; [[self openGLContext] setValues:&swapInterval forParameter:NSOpenGLCPSwapInterval]; */ [glLock lock]; [[self openGLContext] makeCurrentContext]; // Init object members strExt = glGetString (GL_EXTENSIONS); texture_range = gluCheckExtension ((const unsigned char *)"GL_APPLE_texture_range", strExt) ? GL_TRUE : GL_FALSE; texture_hint = GL_STORAGE_SHARED_APPLE ; client_storage = gluCheckExtension ((const unsigned char *)"GL_APPLE_client_storage", strExt) ? GL_TRUE : GL_FALSE; rect_texture = gluCheckExtension((const unsigned char *)"GL_EXT_texture_rectangle", strExt) ? GL_TRUE : GL_FALSE; // Setup some basic OpenGL stuff glPixelStorei(GL_UNPACK_ALIGNMENT, 1); glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE); glColor4f(1.0f, 1.0f, 1.0f, 1.0f); glClearColor(0.0f, 0.0f, 0.0f, 1.0f); glClear(GL_COLOR_BUFFER_BIT); // Loads the shaders shader=LoadShader(GL_FRAGMENT_SHADER,"/Users/alexandremathieu/fragment.fs"); program=glCreateProgram(); glAttachShader(program, shader); glLinkProgram(program); glUseProgram(program); [NSOpenGLContext clearCurrentContext]; [glLock unlock]; image_width = 1024; image_height = 512; image_depth = 16; image_type = GL_UNSIGNED_SHORT_1_5_5_5_REV; image_base = (GLubyte *) calloc(((IMAGE_COUNT * image_width * image_height) / 3) * 4, image_depth >> 3); if (image_base == nil) { [self release]; return nil; } // Create and load textures for the first time [self loadTextures:GL_TRUE]; // Init fps timer //gettimeofday(&cycle_time, NULL); drawBG = YES; // Call for a redisplay noDisplay = YES; PSXDisplay.Disabled = 1; [self setNeedsDisplay:true]; return self; } And here is the "render screen" function wich basically...renders the screen. 
- (void)renderScreen { int bufferIndex = whichImage; glBindTexture(GL_TEXTURE_RECTANGLE_EXT, bufferIndex+1); glUseProgram(program); int loc=glGetUniformLocation(program, "texture"); glUniform1i(loc,bufferIndex+1); glTexSubImage2D(GL_TEXTURE_RECTANGLE_EXT, 0, 0, 0, image_width, image_height, GL_BGRA, image_type, image[bufferIndex]); glBegin(GL_QUADS); glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f, 1.0f); glTexCoord2f(0.0f, image_height); glVertex2f(-1.0f, -1.0f); glTexCoord2f(image_width, image_height); glVertex2f(1.0f, -1.0f); glTexCoord2f(image_width, 0.0f); glVertex2f(1.0f, 1.0f); glEnd(); [[self openGLContext] flushBuffer]; [NSOpenGLContext clearCurrentContext]; //[glLock unlock]; } and finally here's the shader. uniform sampler2DRect texture; void main() { vec4 color, texel; color = gl_Color; texel = texture2DRect(texture, gl_TexCoord[0].xy); color *= texel; // Begin Shader float gray=0.0; gray+=(color.r + color.g + color.b)/3.0; color=vec4(gray,gray,gray,color.a); // End Shader gl_FragColor = color; } The loading and using of shaders works since I am able to turn the screen all red with this shader void main(){ gl_FragColor=vec4(1.0,0.0,0.0,1.0); } If the shader contains a syntax error I get an error message from the LoadShader function etc. If I remove the use of the shader, everything works normally. I think the problem comes from the "passing the texture as a uniform parameter" thing. But these are my very firsts step with openGL and I cant be sure of anything. Don't hesitate to ask for more info. Thank you Stack O.

    Read the article
