Search Results

Search found 1700 results on 68 pages for 'pixel shading'.

Page 23/68 | < Previous Page | 19 20 21 22 23 24 25 26 27 28 29 30  | Next Page >

  • Projective texture and deferred lighting

    - by Vodácek
    In my previous question, I asked whether it is possible to do projective texturing with deferred lighting. Now (more than half a year later) I have a problem with my implementation of the same thing. I am trying to apply this technique in the light pass (my projector doesn't affect albedo). I have this projector view and projection matrix:

      Matrix projection = Matrix.CreateOrthographicOffCenter(
          -halfWidth * Scale, halfWidth * Scale,
          -halfHeight * Scale, halfHeight * Scale,
          1, 100000);
      Matrix view = Matrix.CreateLookAt(Position, Target, Vector3.Up);

    where halfWidth and halfHeight are half of the texture's width and height, Position is the projector's position and Target is the projector's target. This seems to be OK. I am drawing a full-screen quad with this shader:

      float4x4 InvViewProjection;

      texture2D DepthTexture;
      texture2D NormalTexture;
      texture2D ProjectorTexture;

      float4x4 ProjectorViewProjection;

      sampler2D depthSampler = sampler_state {
          texture = <DepthTexture>;
          minfilter = point; magfilter = point; mipfilter = point;
      };
      sampler2D normalSampler = sampler_state {
          texture = <NormalTexture>;
          minfilter = point; magfilter = point; mipfilter = point;
      };
      sampler2D projectorSampler = sampler_state {
          texture = <ProjectorTexture>;
          AddressU = Clamp; AddressV = Clamp;
      };

      float viewportWidth;
      float viewportHeight;

      // Calculate the 2D screen position of a 3D position
      float2 postProjToScreen(float4 position)
      {
          float2 screenPos = position.xy / position.w;
          return 0.5f * (float2(screenPos.x, -screenPos.y) + 1);
      }

      // Calculate the size of one half of a pixel, to convert
      // between texels and pixels
      float2 halfPixel()
      {
          return 0.5f / float2(viewportWidth, viewportHeight);
      }

      struct VertexShaderInput
      {
          float4 Position : POSITION0;
      };

      struct VertexShaderOutput
      {
          float4 Position : POSITION0;
          float4 PositionCopy : TEXCOORD1;
      };

      VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
      {
          VertexShaderOutput output;
          output.Position = input.Position;
          output.PositionCopy = output.Position;
          return output;
      }

      float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
      {
          float2 texCoord = postProjToScreen(input.PositionCopy) + halfPixel();

          // Extract the depth for this pixel from the depth map
          float4 depth = tex2D(depthSampler, texCoord);
          //return float4(depth.r, 0, 0, 1);

          // Recreate the position with the UV coordinates and depth value
          float4 position;
          position.x = texCoord.x * 2 - 1;
          position.y = (1 - texCoord.y) * 2 - 1;
          position.z = depth.r;
          position.w = 1.0f;

          // Transform position from screen space to world space
          position = mul(position, InvViewProjection);
          position.xyz /= position.w;

          // Compute projection
          float3 projection = tex2D(projectorSampler,
              postProjToScreen(mul(position, ProjectorViewProjection)) + halfPixel());
          return float4(projection, 1);
      }

    In the first part of the pixel shader the position is recovered from the G-buffer (I use this code in other shaders without any problem), and then it is transformed into the projector's view-projection space. The problem is that the projection doesn't appear. Here is an image of my situation: the green lines are the rendered projector frustum. Where is my mistake hidden? I am using XNA 4. Thanks for any advice, and sorry for my English. EDIT: The shader above is working, but the projection was too small. When I changed the Scale property to a large value (e.g. 100), the projection appears, but when the camera moves toward the projection, the projection expands, as can be seen in this YouTube video.

    Read the article

  • Android Canvas Coordinate System

    - by Mitch
    I'm trying to find information on how to change the coordinate system for the canvas. I have some vector data I'd like to draw to a canvas using things like circles and lines, but the data's coordinate system doesn't match the canvas coordinate system. Is there a way to map the units I'm using to the screen's units? I'm drawing to an ImageView which isn't taking up the entire display. If I have to do my own calculations prior to each drawing call, how do I find the width and height of my ImageView? The getWidth() and getHeight() calls I tried seem to be returning the entire canvas size and not the size of the ImageView, which isn't helpful. I see some matrix stuff; is that something that will work for me? I tried to use "public void scale(float sx, float sy)", but that works more like a pixel-level zoom than a vector scale function, expanding each pixel. This means that if the dimensions are increased to fit the screen, the line thickness is also increased. Update: After some research I'm starting to think there's no way to change the coordinate system to something else. I'll need to map all my coordinates to the screen's pixel coordinates, and do so by modifying each vector. The getWidth() and getHeight() calls seem to be working better for me now. I can't say what was wrong, but I suspect I can't use these methods inside the constructor.
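
    One way to avoid converting every point by hand is to let the canvas do the mapping. The sketch below (a custom View's onDraw; the class, field names and data bounds are illustrative, not from the question) builds an android.graphics.Matrix that maps the data's bounding rectangle onto the view's rectangle and concatenates it onto the canvas, so lines and circles can be drawn in data units. Note that with this approach stroke widths are scaled too, which may or may not be what you want:

      import android.content.Context;
      import android.graphics.Canvas;
      import android.graphics.Matrix;
      import android.graphics.Paint;
      import android.graphics.RectF;
      import android.view.View;

      public class VectorView extends View {
          // Bounds of the vector data, in data units (illustrative values).
          private final RectF dataBounds = new RectF(0f, 0f, 1000f, 1000f);
          private final Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
          private final Matrix dataToView = new Matrix();

          public VectorView(Context context) { super(context); }

          @Override
          protected void onDraw(Canvas canvas) {
              // Map data coordinates onto this view's pixel rectangle.
              RectF viewBounds = new RectF(0f, 0f, getWidth(), getHeight());
              dataToView.setRectToRect(dataBounds, viewBounds, Matrix.ScaleToFit.CENTER);

              canvas.save();
              canvas.concat(dataToView);
              // Draw in data units; the matrix converts them to screen pixels.
              canvas.drawLine(100f, 100f, 900f, 900f, paint);
              canvas.drawCircle(500f, 500f, 50f, paint);
              canvas.restore();
          }
      }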

    Read the article

  • File Format Conversion to TIFF. Some issues???

    - by Nains
    I have a proprietary image format, SNG, which stores a continuous array of image data along with the image meta information in a separate HDR file. Now I need to convert this SNG format to the standard TIFF 6.0 format, so I studied the TIFF format, i.e. its header, image file directories (IFDs) and stripped image data. I have a few concerns about this conversion; please assist me.

    1. SNG continuous data vs. TIFF stripped data: should I convert the SNG data to TIFF as continuous data in one strip (a data load/edit time problem?), or make logical StripOffsets of the SNG image data?

    2. The SNG data header keeps only the necessary meta information, so while converting SNG to TIFF some information cannot be retrieved, such as NewSubFileType, the Software tag, etc. This raises the concern of whether any of that missing directory information (NewSubFileType, the Software tag, etc.) is a necessary and sufficient condition for a valid TIFF file.

    3. Encoding of each pixel component of the RGB sample in SNG data: each pixel component of an SNG image data strip is encoded as

      Out^[i] := round(LineBuffer^[i * 3]     * 0.072169 +
                       LineBuffer^[i * 3 + 1] * 0.715160 +
                       LineBuffer^[i * 3 + 2] * 0.212671);

    The only thing I can deduce from this is that each pixel is represented by 3 RGB components, and a coefficient is multiplied with each component so the SNG viewer can display the RGB color information of the SNG image data. (The developer who worked on this earlier has left; now I am following the trail.) While converting this to TIFF, the same decoding needs to be done. This raises the question of how the RGB information in TIFF is produced, or better, whether we need this information at all. Please assist...

    Read the article

  • Editing 8bpp indexed Bitmaps

    - by Pedro Sá
    Hi, I'm trying to edit the pixels of an 8bpp bitmap. Since this PixelFormat is indexed, I'm aware that it uses a color table to map the pixel values. Even though I can edit the bitmap by converting it to 24bpp, 8bpp editing is much faster (13 ms vs. 3 ms). But changing each value when accessing the 8bpp bitmap results in some random RGB colors, even though the PixelFormat remains 8bpp. I'm currently developing in C# and the algorithm is as follows:

    1. (C#) Load the original bitmap at 8bpp.
    2. (C#) Create an empty temp bitmap at 8bpp with the same size as the original.
    3. (C#) LockBits on both bitmaps and, using P/Invoke, call a C++ method, passing it the Scan0 of each BitmapData object. (I used a C++ method as it offers better performance when iterating through the bitmap's pixels.)
    4. (C++) Create an int[256] palette according to some parameters and edit the temp bitmap bytes by passing the original's pixel values through the palette.
    5. (C#) UnlockBits.

    My question is how I can edit the pixel values without getting the strange RGB colors, or even better, how I can edit the 8bpp bitmap's color table. Regards, Pedro

    Read the article

  • Using Sendkeys in python to press {F12} results in other keys pressed?

    - by ThantiK
    import time
    from ctypes import *
    import win32gui
    import win32com.client as comclt

    X = 119
    Y = 53

    def PILColorToRGB(pil_color):
        """ convert a PIL-compatible integer into an (r, g, b) tuple """
        hexstr = '%06x' % pil_color
        # reverse byte order
        r, g, b = hexstr[4:], hexstr[2:4], hexstr[:2]
        r, g, b = [int(n, 16) for n in (r, g, b)]
        return (r, g, b)

    wsh = comclt.Dispatch("WScript.Shell")
    w = win32gui
    user = windll.LoadLibrary("c:\\windows\\system32\\user32.dll")
    h = user.GetDC(0)
    gdi = windll.LoadLibrary("c:\\windows\\system32\\gdi32.dll")

    while True:
        FG = w.GetWindowText(w.GetForegroundWindow())  # FG = foreground window title.
        if FG == "World of Warcraft":
            rgb = PILColorToRGB(gdi.GetPixel(h, X, Y))  # X, Y
            time.sleep(0.333)                 # don't check too often.
            if rgb[0] >= 130:                 # while pixel (X, Y) is red...
                # print "%d %d %d" % (rgb[0], rgb[1], rgb[2])  # debug
                wsh.SendKeys("{F12}")         # send a key.
                time.sleep(0.7)               # add some extra down-time if we send the key.
            else:
                time.sleep(5)

    Basically, all this code does is read a pixel on the screen and send a key (F12) if the pixel is red. But when using this code I regularly get some phantom key-code being pressed. The application I'm using this on is obviously World of Warcraft, and I have checked that all keybinds are standard keybinds. However, randomly it seems I get either an up arrow or a "w" pressed, which moves my character forward whenever this code executes (F12 is bound to a macro, unbound from any movement). If I press F12 with a hardware event, it does not exhibit this behavior. What in the world could be going on here?

    Read the article

  • How to scale a sprite image without losing color key information?

    - by Michael P
    Hello everyone, I'm currently developing a simple application that displays a map and draws some markers on it. I'm developing for Windows Mobile, so I decided to use the DirectDraw and Imaging interfaces to make the application fast and pretty. The map moves when the user moves a finger on the touchscreen, so the whole map moving/scrolling animation has to be fast, but it is not. On every map update I have to draw a portion of the map, the control buttons, and the markers; the buttons and markers are preloaded on a DirectDraw surface as a mipmap. So the only thing I do is BitBlt from the mipmap to a back buffer, and from the back buffer to the primary surface (I can't use page flipping due to the windowed mode of my application). Previously I used a premultiplied-alpha surface with a 32-bit ARGB pixel format for the images mipmap; everything looked good, but drawing the entire "scene" was horribly slow, so I could forget about smooth map scrolling. Now I'm using a mipmap with the native (RGB565) pixel format and a fuchsia (0xFF00FF) color key, and drawing is much faster. My mipmap surface is generated at program load: images are loaded from files, scaled (with filtering) and drawn onto the mipmap. The problem is that the image scaling process blends pixel colors, and the pixels on the border of a sprite region are blended with the surrounding fuchsia pixels, resulting in semi-fuchsia colors that are not treated as the color key. When I blit with the color key option, sprites have small fuchsia-like borders, and it looks really bad. How can I solve this problem? I can use alpha blitting, but it is too slow, even in ARGB 1555 format.
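
    The question is about DirectDraw on Windows Mobile, but the underlying issue is platform-agnostic: filtered scaling mixes the color key into edge pixels. One common workaround is to scale the sprites without interpolation so the key color is never blended; a Java sketch of that idea, just to illustrate the technique (class and method names are illustrative):

      import java.awt.Graphics2D;
      import java.awt.RenderingHints;
      import java.awt.image.BufferedImage;

      public final class SpriteScaler {
          // Scale a color-keyed sprite without blending edge pixels into the key color.
          public static BufferedImage scaleNearestNeighbor(BufferedImage src, int newW, int newH) {
              BufferedImage dst = new BufferedImage(newW, newH, BufferedImage.TYPE_INT_RGB);
              Graphics2D g = dst.createGraphics();
              // Nearest-neighbor picks a single source pixel per destination pixel,
              // so pure fuchsia stays pure fuchsia and the color key keeps working.
              g.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                                 RenderingHints.VALUE_INTERPOLATION_NEAREST_NEIGHBOR);
              g.drawImage(src, 0, 0, newW, newH, null);
              g.dispose();
              return dst;
          }
      }

    The trade-off is blockier sprite edges. The usual alternative, if filtered scaling must be kept, is to pre-fill the area around each sprite with a nearby opaque sprite color rather than fuchsia before scaling, so the filter bleeds sprite colors instead of the key color, and only then key out the remaining pure-fuchsia pixels.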

    Read the article

  • Updating UIView subclass contents

    - by Brett
    Hi, I am trying to update a UIView subclass to draw individual pixels (rectangles) after calculating whether they are in the Mandelbrot set. I am using the following code but am getting a blank screen:

      //drawingView.h
      ...
      -(void)drawRect:(CGRect)rect {
          CGContextRef context = UIGraphicsGetCurrentContext();
          CGContextSetRGBFillColor(context, r, g, b, 1);
          CGContextSetRGBStrokeColor(context, r, g, b, 1);
          CGRect arect = CGRectMake(x, y, 1, 1);
          CGContextFillRect(context, arect);
      }
      ...

      //viewController.m
      ...
      - (void)viewDidLoad {
          [drawingView setX:10];
          [drawingView setY:10];
          [drawingView setR:0.0];
          [drawingView setG:0.0];
          [drawingView setB:0.0];
          [super viewDidLoad];
          [drawingView setNeedsDisplay];
      }

    Once I figure out how to display one pixel, I'd like to know how to go about updating the view by adding another pixel (once calculated). Once all the variables change, how do I keep the current view as is but add another pixel? Thanks, Brett

    Read the article

  • Qt/C++, Problems with large QImage

    - by David Günzel
    I'm pretty new to C++/Qt and I'm trying to create an application with Visual Studio C++ and Qt (4.8.3). The application displays images using a QGraphicsView, and I need to change the images at pixel level. The basic code is (simplified):

      QImage* img = new QImage(img_width, img_height, QImage::Format_RGB32);
      while (do_some_stuff) {
          img->setPixel(x, y, color);
      }
      QGraphicsPixmapItem* pm = new QGraphicsPixmapItem(QPixmap::fromImage(*img));
      QGraphicsScene* sc = new QGraphicsScene;
      sc->setSceneRect(0, 0, img->width(), img->height());
      sc->addItem(pm);
      ui.graphicsView->setScene(sc);

    This works well for images up to around 12000x6000 pixels. The weird thing happens beyond this size. When I set img_width = 16000 and img_height = 8000, for example, the line img = new QImage(...) returns a null image. The image data should be around 512,000,000 bytes, so it shouldn't be too large, even on a 32-bit system. Also, my machine (Win 7 64-bit, 8 GB RAM) should be capable of holding the data. I've also tried this version:

      uchar* imgbuf = (uchar*) malloc(img_width * img_height * 4);
      QImage* img = new QImage(imgbuf, img_width, img_height, QImage::Format_RGB32);

    At first, this works. The img pointer is valid, and calling img->width(), for example, returns the correct image width (instead of 0, as it would if the image pointer were null). But as soon as I call img->setPixel(), the pointer becomes null and img->width() returns 0. So what am I doing wrong? Or is there a better way of modifying large images at pixel level? Regards, David

    Read the article

  • Help in J2ME for creating an image and splitting it

    - by HAMED
    Can anyone tell me how I can split a PNG image into many PNG images in J2ME? For example: I want to take a source image of 150x150 pixels and split it into 15x15 pixel images. I wrote some elementary code that throws an exception. This is my code:

      public class HelloMIDlet extends MIDlet implements CommandListener {
          private boolean midletPaused = false;
          private Command exitCommand;
          private Form form;
          private StringItem stringItem;
          Image im, im2;
          Form form1 = null;

          public HelloMIDlet() {
              try {
                  // create source image
                  im = Image.createImage("/image1.JPG");
                  int height = im.getHeight();
                  int width = im.getWidth();
                  int x = 0;
                  int y = 0;
                  while (y < height) {
                      while (x < width) {
                          // create 15*15 pixel of source image
                          im2 = im.createImage(im, x, y, 15, 15, Sprite.TRANS_NONE);
                          x += 15;
                      }
                      y += 15;
                      x = 0;
                  }
              } catch (IOException ex) {
                  ex.printStackTrace();
              }
          }

    Please help me to make it right... it's urgent! Thanks a lot, guys...
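
    A likely cause of the exception is that the region passed to Image.createImage(Image, x, y, w, h, transform) must lie entirely inside the source image, so a source whose width or height is not an exact multiple of 15 will fail on the last column or row; the code above also overwrites im2 on every pass instead of keeping the tiles. A small sketch of one way to collect the tiles into an array, clamping the region size at the edges (class, method and variable names are illustrative):

      import javax.microedition.lcdui.Image;
      import javax.microedition.lcdui.game.Sprite;

      public final class ImageTiler {
          // Splits src into tiles of at most tileW x tileH pixels, row by row.
          public static Image[] split(Image src, int tileW, int tileH) {
              int cols = (src.getWidth() + tileW - 1) / tileW;
              int rows = (src.getHeight() + tileH - 1) / tileH;
              Image[] tiles = new Image[cols * rows];
              int i = 0;
              for (int y = 0; y < src.getHeight(); y += tileH) {
                  for (int x = 0; x < src.getWidth(); x += tileW) {
                      int w = Math.min(tileW, src.getWidth() - x);
                      int h = Math.min(tileH, src.getHeight() - y);
                      tiles[i++] = Image.createImage(src, x, y, w, h, Sprite.TRANS_NONE);
                  }
              }
              return tiles;
          }
      }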

    Read the article

  • Screen information while Windows system is locked (.NET)

    - by Matt
    We have a nightly process that updates applications on a user's PC, and that requires bringing the application down and back up again (not looking to get into changing that process). The problem is that we are building a Windows AppBar on launch, which requires a valid screen, and when the system is locked there isn't one in the Screen class. So none of the visual effects are enabled and it shows up real ugly. The only way we currently have around this is to detect a locked screen and just spin and wait until the user unlocks the desktop, then continue launching. Leaving it down isn't an option, as this is a key part of our users' workflow, and they expect it to be up and running if they left it that way the night before. Any ideas?? I can't seem to find the display information anywhere, but it has to be stored off someplace, since the user is still logged in. The contents of the Screen.AllScreens array:

      ** When locked:
        Device Name    : DISPLAY
        Primary        : True
        Bits Per Pixel : 0
        Bounds         : {X=-1280,Y=0,Width=2560,Height=1024}
        Working Area   : {X=0,Y=0,Width=1280,Height=1024}

      ** When unlocked:
        Device Name    : \\.\DISPLAY1
        Primary        : True
        Bits Per Pixel : 32
        Bounds         : {X=0,Y=0,Width=1280,Height=1024}
        Working Area   : {X=0,Y=0,Width=1280,Height=994}

        Device Name    : \\.\DISPLAY2
        Primary        : False
        Bits Per Pixel : 32
        Bounds         : {X=-1280,Y=0,Width=1280,Height=1024}
        Working Area   : {X=-1280,Y=0,Width=1280,Height=964}

    Read the article

  • Python: Serial Transmission

    - by Silent Elektron
    I have an image stack of 500 images (JPEG) of 640x480. I intend to collect 500 pixels (the first pixel of each image) into a list and then send that via COM1 to an FPGA, where I do my further processing. I have a couple of questions here:

    1. How do I import all 500 images into Python at once, and how do I store them?
    2. How do I send the 500-pixel list via COM1 to the FPGA?

    I tried the following: I converted the JPEG images to intensity values (each pixel is denoted by a number between 0 and 255) in MATLAB, saved the intensity values in a text file, and read that file using readlines(). But it became too cumbersome to make the intensity value files for all 500 images! I then used NumPy to put the read files in a matrix and pick the first pixel of each image. But when I send it, it comes out like: [56, 61, 78, ... , 71, 91]. Is there a way to eliminate the [ ] and , while sending the data serially? Thanks in advance! :)

    Read the article

  • Selenium Webdriver Java - looking for alternatives for Actions and Robot when performing drag-and-drop

    - by Ja-ke Alconcel
    I first tried the Actions class, and its drag-and-drop does work on different elements; however, it was unable to locate a specific draggable element at its exact screen/webpage position. Here's the code I've used:

      Point loc = driver.findElement(By.id("thiselement")).getLocation();
      System.out.println(loc);
      WebElement drag = driver.findElement(By.id("thiselement"));
      Actions test = new Actions(driver);
      test.dragAndDropBy(drag, 0, 60).build().perform();

    I checked the element's pixel location and it prints (837, -52), which is somewhere above the top of the webpage and pixels away from the actual element. Then I tried using the Robot class, which works perfectly fine in my script, but it only runs reliably on a single test machine; running it on a different machine with a different screen resolution and screen size makes the script fail, because Robot depends on the pixel location of the element. The sample code of the Robot script I'm using:

      Robot dragAndDrop = new Robot();
      dragAndDrop.mouseMove(945, 166); // actual pixel location of the draggable element
      dragAndDrop.mousePress(InputEvent.BUTTON1_MASK);
      sleep(3000);
      dragAndDrop.mouseMove(945, 226);
      dragAndDrop.mouseRelease(InputEvent.BUTTON1_MASK);
      sleep(3000);

    Is there any alternative to Actions and Robot for automating drag-and-drop? Or maybe some help getting the script to work with Actions, as I really can't use Robot? Thanks in advance.
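
    One thing worth trying before giving up on Actions is to scroll the element into view first (a reported location of (837, -52) suggests it sits outside the current viewport) and then build the drag as separate press/move/release steps, which some drag-and-drop implementations handle better than a single dragAndDropBy call. A sketch under those assumptions (the element id is taken from the question; the rest is illustrative, and the offsets would need tuning):

      import org.openqa.selenium.By;
      import org.openqa.selenium.JavascriptExecutor;
      import org.openqa.selenium.WebElement;
      import org.openqa.selenium.interactions.Actions;

      // 'driver' is assumed to be an initialized WebDriver instance, as in the question.
      WebElement drag = driver.findElement(By.id("thiselement"));

      // Bring the element into the viewport so its on-screen coordinates are meaningful.
      ((JavascriptExecutor) driver).executeScript("arguments[0].scrollIntoView(true);", drag);

      // Press, move in small increments, then release.
      Actions actions = new Actions(driver);
      actions.clickAndHold(drag)
             .moveByOffset(0, 20)
             .moveByOffset(0, 20)
             .moveByOffset(0, 20)
             .release()
             .build()
             .perform();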

    Read the article

  • "java.lang.OutOfMemoryError: Java heap space" in image and array storage

    - by totalconscience
    I am currently working on an image processing demonstration in Java (an applet). I am running into the problem where my arrays are too large and I am getting the "java.lang.OutOfMemoryError: Java heap space" error. The algorithm I run creates an NxD float array, where N is the number of pixels in the image and D is the number of coordinates of each pixel plus the number of colorspace components of each pixel (usually 1 for grayscale or 3 for RGB). For each iteration, the algorithm creates one of these NxD float arrays and stores it in a vector for later use, so that the user of the applet may look at the individual steps. My client wants the program to be able to load a 500x500 RGB image as the upper bound and still run. There are about 12 to 20 iterations per run, so that means I need to be able to store a 12x500x500x5 float array in some fashion. Is there a way to process all of this data and, if possible, how? Example of the issue: I am loading a 512 by 512 grayscale image, and even before the first iteration completes I run out of heap space. The line it points me to is Y.add(new float[N][D]), where Y is a Vector and N and D are described as above. This is the second instance of the code using that line.
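
    One way to cut the footprint is to store each step as a single flat float[N * D] instead of a float[N][D]: a two-dimensional Java array is really N separate row objects, and with D as small as 5 the per-row object header and reference roughly double the memory used, besides fragmenting the heap. A sketch of that idea (class and method names are illustrative):

      import java.util.ArrayList;
      import java.util.List;

      public final class StepStore {
          private final int n;   // number of pixels
          private final int d;   // values per pixel (coordinates + color components)
          private final List<float[]> steps = new ArrayList<float[]>();

          public StepStore(int n, int d) { this.n = n; this.d = d; }

          // Allocate one contiguous block per iteration instead of N small row arrays.
          public float[] addStep() {
              float[] step = new float[n * d];
              steps.add(step);
              return step;
          }

          public float get(int stepIndex, int pixel, int component) {
              return steps.get(stepIndex)[pixel * d + component];
          }

          public void set(int stepIndex, int pixel, int component, float value) {
              steps.get(stepIndex)[pixel * d + component] = value;
          }
      }

    If even the flat arrays do not fit, the remaining options are raising the applet's heap limit (e.g. -Xmx via the applet or JNLP parameters) or writing completed steps to a temporary file and reloading them on demand when the user browses the intermediate results.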

    Read the article

  • how to make a CUDA Histogram kernel?

    - by kitw
    Hi all, I am writing a CUDA kernel for a histogram of a picture, but I have no idea how to return an array from the kernel, and the array will change while other threads read it. Any possible solution for it?

      __global__ void Hist(TColor *dst, // input image
                           int imageW, int imageH, int *data)
      {
          const int ix = blockDim.x * blockIdx.x + threadIdx.x;
          const int iy = blockDim.y * blockIdx.y + threadIdx.y;

          if (ix < imageW && iy < imageH)
          {
              int pixel = get_red(dst[imageW * (iy) + (ix)]); // this assigns the specific RED value of the image to pixel
              data[pixel]++; // ?? problem statement ...
          }
      }

    @param d_dst: input image; TColor is equal to float4.
    @param data: the array for the histogram, size [255].

      extern "C" void cuda_Hist(TColor *d_dst, int imageW, int imageH, int *data)
      {
          dim3 threads(BLOCKDIM_X, BLOCKDIM_Y);
          dim3 grid(iDivUp(imageW, BLOCKDIM_X), iDivUp(imageH, BLOCKDIM_Y));
          Hist<<<grid, threads>>>(d_dst, imageW, imageH, data);
      }

    Read the article

  • Creating transparent PNG with exact RGBA values

    - by rrowland
    I'm color-coding a transparent image to be read programmatically. However, the image seems to be getting compressed, and my code is reading color values other than the ones I mean to pass it.

    Concept: This is the output I get, exporting as PNG-24. I programmatically check each pixel for one of the six colors I use in creating the image: 0x00000F, 0x0000F0, 0x000F00, 0x00F000, 0x0F0000, 0xF00000. Each color represents a different texture to apply. Top right (0x00000F) will pull texture from the tile to its top right and blend it at a ratio equal to the opacity of the pixel. The end goal is to create a hex-tiled grid with differing textures that blend smoothly.

    What's happening: It seems that when converting to PNG, Photoshop changes the RGBA values to make the image smoother, or just to help compression. Parts that should be 250 red range anywhere from 150 to 255.

    Question: Whether using PNG or another web-compatible format, I need to be able to save these pixel values (essentially instructions) losslessly and still maintain transparency. Is this possible in any format, or will I need to rethink my approach?
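
    PNG itself stores RGBA losslessly, so changed values usually point at the export step (anti-aliased edges, resampling, or color management) rather than at the format. One way to rule the format out, and to produce a control image whose values are guaranteed, is to write the map programmatically; a Java sketch of that check (the file name, image size and pixel layout are illustrative):

      import java.awt.image.BufferedImage;
      import java.io.File;
      import java.io.IOException;
      import javax.imageio.ImageIO;

      public final class CodeMapWriter {
          public static void main(String[] args) throws IOException {
              int w = 64, h = 64;
              BufferedImage img = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
              for (int y = 0; y < h; y++) {
                  for (int x = 0; x < w; x++) {
                      // Alpha encodes the blend ratio, RGB encodes which texture to pull.
                      int alpha = 128;                      // 50% blend, for example
                      int argb = (alpha << 24) | 0x00000F;  // "top right" code from the question
                      img.setRGB(x, y, argb);
                  }
              }
              // PNG is lossless: reading this file back returns the exact ARGB values written.
              ImageIO.write(img, "png", new File("codemap.png"));
          }
      }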

    Read the article

  • Voice Recognition Google API

    - by user2966744
    Thanks for reading. I'm creating a simple web-based drawing app that uses speech recognition. I have created a simple page; the project is on GitHub here: https://github.com/a5hton/speechdraw It has a 16x16 pixel grid. I would like to be able to draw on this grid by using simple words. For example, if you say "right", the pixel to the right will be colored black. If you say "down", the pixel below the last one will be colored black. You can say up, down, left or right, and the corresponding pixels will be colored. Saying "erase" will switch to erase mode, coloring the pixels back to their original color. Saying "lift" will lift the pen off the page. Saying "draw" will enable draw mode. Could you please help me work out how to make this happen? Please see the simple page to get an understanding. Thank you! Cheers, Michael

    Read the article

  • Convert png sequence to x264 with ffmpeg

    - by Thucydides411
    I am trying to convert a series of PNGs into an MP4 video. I am using ffmpeg, and want to encode the video with the x264 codec. Using the command

      ffmpeg -y -r 30 -b 1800k -i _tmp%04d.png -vcodec libx264 out.mp4

    I get the following warning message:

      Incompatible pixel format 'bgra' for codec 'libx264', auto-selecting format 'yuv420p'

    My understanding is that there is an alpha channel in the PNGs, which the x264 encoder cannot handle. Is there a way to get around this problem? Is there, for example, a way to get the encoder to ignore the alpha channel (my PNGs don't actually have any transparent elements)? I'm aware that I could batch convert the PNGs beforehand to strip the alpha channel, but the sequence of images is produced by another program, and having to preprocess the images each time I make a video would be less than optimal. Edit: After stripping the alpha channel from each frame using the command

      convert in.png -background white -flatten +matte out.png

    ffmpeg gives the warning message

      Incompatible pixel format 'pal8' for codec 'libx264', auto-selecting format 'yuv420p'

    so still no dice.

    Read the article

  • ffmpeg open webcam using YUYV but i want MJPEG

    - by Pavel
    I need ffmpeg to open the webcam (a Logitech C910) in MJPEG mode, because the webcam can give ~24 fps using the MJPEG "protocol" and only ~10 fps using YUYV. Can I choose between them on the ffmpeg command line?

      xx@(none) ~ $ v4l2-ctl --list-formats
      ioctl: VIDIOC_ENUM_FMT
          Index       : 0
          Type        : Video Capture
          Pixel Format: 'YUYV'
          Name        : YUV 4:2:2 (YUYV)

          Index       : 1
          Type        : Video Capture
          Pixel Format: 'MJPG' (compressed)
          Name        : MJPEG

    My current command line:

      ffmpeg -y -f alsa -i hw:3,0 -f video4linux2 -r 20 -s 1280x720 -i /dev/video0 -acodec libfaac -ab 128k -vcodec libx264 /tmp/web.avi

    ffmpeg produces a corrupted h264 stream when I record from the webcam, but a normal h264 stream when I record from x11grab. Other codecs (mjpeg, mpeg4) work well with the webcam... but that is another story.

    Read the article

  • Media Information for Constant and Variable bit rate of Video files

    - by cpx
    What is this "Maximum bit rate" for a .mp4 file whose bit rate mode is Constant? Media information displayed for the MP4 (using the MediaInfo tool):

      ID                        : 1
      Format                    : AVC
      Format/Info               : Advanced Video Codec
      Format profile            : [email protected]
      Format settings, CABAC    : No
      Format settings, ReFrames : 1 frame
      Codec ID                  : avc1
      Codec ID/Info             : Advanced Video Coding
      Bit rate mode             : Constant
      Bit rate                  : 1 500 Kbps
      Maximum bit rate          : 3 961 Kbps
      Display aspect ratio      : 4:3
      Frame rate mode           : Constant
      Frame rate                : 29.970 fps
      Color space               : YUV
      Chroma subsampling        : 4:2:0
      Bit depth                 : 8 bits
      Scan type                 : Progressive
      Bits/(Pixel*Frame)        : 0.163

    In the following case, where the bit rate mode is set to Variable, is the value displayed in the Bit rate field (309 Kbps) the average bit rate? Media information displayed for the M4V (using the MediaInfo tool):

      ID                        : 1
      Format                    : AVC
      Format/Info               : Advanced Video Codec
      Format profile            : [email protected]
      Format settings, CABAC    : No
      Format settings, ReFrames : 1 frame
      Codec ID                  : avc1
      Codec ID/Info             : Advanced Video Coding
      Bit rate mode             : Variable
      Bit rate                  : 309 Kbps
      Display aspect ratio      : 16:9
      Frame rate mode           : Variable
      Frame rate                : 23.976 fps
      Minimum frame rate        : 23.810 fps
      Maximum frame rate        : 24.390 fps
      Color space               : YUV
      Chroma subsampling        : 4:2:0
      Bit depth                 : 8 bits
      Scan type                 : Progressive
      Bits/(Pixel*Frame)        : 0.229
      Writing library           : x264 core 120

    Read the article

  • Extracting the layer transparency into an editable layer mask in Photoshop

    - by last-child
    Is there any simple way to extract the "baked in" transparency in a layer and turn it into a layer mask in Photoshop? To take a simple example: Let's say that I paint a few strokes with a semi-transparent brush, or paste in a .png-file with an alpha channel. The rgb color values and the alpha value for each pixel are now all contained in the layer-image itself. I would like to be able to edit the alpha values as a layer mask, so that the layer image is solid and contains only the RGB values for each pixel. Is this possible, and in that case how? Thanks. EDIT: To clarify - I'm not really after the transparency values in themselves, but in the separation of rgb values and alpha values. That means that the layer must become a solid, opaque image with a mask.

    Read the article

  • High-res icon in Windows Vista alt-tab thumbnail preview?

    - by netvope
    I have customized my alt-tab screen with the following:

      [HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\AltTab]
      "OverlayIconPx"=dword:00000040
      "MaxThumbSizePx"=dword:00000100
      "MinThumbSizePcent"=dword:00000064

    It works great: the thumbnail becomes 256 pixels wide and the icon at the corner of the thumbnail becomes 64x64 pixels. However, Windows doesn't load the high-res icons from the programs; instead, it uses the 16x16 pixel icon and scales it up with nearest-neighbor. I'm sure the programs have high-res icons because I saw them in the "Extra Large Icons" view in Explorer. So the question is: how can I force Windows to load the high-res icons for the alt-tab thumbnail preview? (Perhaps a registry key, or a .dll hack/injection?)

    Read the article

  • Can I make Firefox ignore/interpret font sizes specified in pixels?

    - by Andy
    Hi all, I have an 11.1" notebook display with 1366x768 resolution, which gives it a DPI of 141. I'm running GNOME and have configured the DPI. Everything works OK except web browsing - far too many websites specify their font sizes in pixels, which ends up with very small text on a high DPI display. My ideal solution would be for Firefox to interpret an absolute pixel size in terms of normal DPI and display it appropriately for my DPI (eg scale it by 141/96). Obviously this would cause problems on the occasion where graphics had been pixel-aligned with fonts in some way, but I imagine that would cause me far less of a headache than either reading minute text, or scaling the text manually each time. Any suggestions? TIA, Andy

    Read the article

  • Java: how to do fast copy of a BufferedImage's pixels? (include unit test)

    - by WizardOfOdds
    I want to do a copy (of a rectangular area) of the ARGB values from a source BufferedImage into a destination BufferedImage. No compositing should be done: if I copy a pixel with an ARGB value of 0x8000BE50 (alpha value at 128), then the destination pixel must be exactly 0x8000BE50, totally overriding the destination pixel. I've got a very precise question and I made a unit test to show what I need. The unit test is fully functional and self-contained, is passing fine, and is doing precisely what I want. However, I want a faster and more memory efficient method to replace copySrcIntoDstAt(...). That's the whole point of my question: I'm not after how to "fill" the image in a faster way (what I did is just an example to have a unit test). All I want is to know what would be a fast and memory efficient way to do it (i.e. fast and not creating needless objects). The proof-of-concept implementation I've made is obviously very memory efficient, but it is slow (doing one getRGB and one setRGB for every pixel). Schematically, I've got this (where A indicates corresponding pixels from the destination image before the copy):

      AAAAAAAAAAAAAAAAAAAA
      AAAAAAAAAAAAAAAAAAAA
      AAAAAAAAAAAAAAAAAAAA
      AAAAAAAAAAAAAAAAAAAA
      AAAAAAAAAAAAAAAAAAAA
      AAAAAAAAAAAAAAAAAAAA

    And I want to have this:

      AAAAAAAAAAAAAAAAAAAA
      AAAAAAAAAAAAAAAAAAAA
      AAAAAAAAAAAAAAAAAAAA
      AAAAAAAAAAAAABBBBAAA
      AAAAAAAAAAAAABBBBAAA
      AAAAAAAAAAAAAAAAAAAA

    where 'B' represents the pixels from the src image. I'm looking for an exact replacement of the method, not for an API link/quote.

      import org.junit.Test;
      import java.awt.image.BufferedImage;
      import static org.junit.Assert.*;

      public class TestCopy {

          private static final int COL1 = 0x8000BE50; // alpha at 128
          private static final int COL2 = 0x1732FE87; // alpha at 23

          @Test
          public void testPixelsCopy() {
              final BufferedImage src = new BufferedImage(5, 5, BufferedImage.TYPE_INT_ARGB);
              final BufferedImage dst = new BufferedImage(20, 20, BufferedImage.TYPE_INT_ARGB);
              convenienceFill(src, COL1);
              convenienceFill(dst, COL2);
              copySrcIntoDstAt(src, dst, 3, 4);
              for (int x = 0; x < dst.getWidth(); x++) {
                  for (int y = 0; y < dst.getHeight(); y++) {
                      if (x >= 3 && x <= 7 && y >= 4 && y <= 8) {
                          assertEquals(COL1, dst.getRGB(x, y));
                      } else {
                          assertEquals(COL2, dst.getRGB(x, y));
                      }
                  }
              }
          }

          // clipping is unnecessary
          private static void copySrcIntoDstAt(final BufferedImage src,
                                               final BufferedImage dst,
                                               final int dx, final int dy) {
              // TODO: replace this by a much more efficient method
              for (int x = 0; x < src.getWidth(); x++) {
                  for (int y = 0; y < src.getHeight(); y++) {
                      dst.setRGB(dx + x, dy + y, src.getRGB(x, y));
                  }
              }
          }

          // This method is just a convenience method, there's
          // no point in optimizing this method, this is not what
          // this question is about
          private static void convenienceFill(final BufferedImage bi, final int color) {
              for (int x = 0; x < bi.getWidth(); x++) {
                  for (int y = 0; y < bi.getHeight(); y++) {
                      bi.setRGB(x, y, color);
                  }
              }
          }
      }
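
    For two images that share the same type (both TYPE_INT_ARGB here), one candidate replacement is to copy at the Raster level, which transfers the sample values directly, with no compositing and no per-pixel method calls from user code. A sketch of a possible drop-in body for copySrcIntoDstAt (same signature as in the test; worth benchmarking against the original before adopting it):

      private static void copySrcIntoDstAt(final BufferedImage src,
                                           final BufferedImage dst,
                                           final int dx, final int dy) {
          // Copies raw ARGB samples; no alpha compositing is performed.
          dst.getRaster().setRect(dx, dy, src.getRaster());
      }

    An alternative that stays at the BufferedImage level is to pull the whole source into one int[] with the bulk getRGB(0, 0, w, h, null, 0, w) overload and push it back with the matching setRGB(dx, dy, w, h, pixels, 0, w), which removes the per-pixel call overhead of the original loop at the cost of one temporary array.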

    Read the article

  • Different cursor formats in IOFrameBufferShared

    - by Thomi
    Hi, I'm reading the mouse cursor pixmap data from the StdFBShmem_t structure, as defined in the IOFrameBufferShared API. Everything works fine 90% of the time. However, I have noticed that some applications on the Mac set a cursor in a different format. According to the documentation for the data structures, the cursor pixmap format should always be in the same format as the frame buffer. My frame buffer is 32BPP. I expect the pixmap data to be in the format 0xAARRGGBB, which it is. However, in some cases I'm reading data that looks like a mask. Specifically, the pixel will be either 0x00FFFFFF or 0x00000000. This looks to me like a mask for separate pixel data stored somewhere else. As far as I can tell, the only application that uses this cursor pixel format is Qt Creator, but I need to work with all applications, so I'd like to sort this out. The code I'm using to read the cursor pixmap data is:

      NSAutoreleasePool *autoReleasePool = [[NSAutoreleasePool alloc] init];
      NSPoint mouseLocation = [NSEvent mouseLocation];
      NSArray *allScreens = [NSScreen screens];
      NSEnumerator *screensEnum = [allScreens objectEnumerator];
      NSScreen *screen;
      NSDictionary *screenDesc = nil;

      while ((screen = [screensEnum nextObject]))
      {
          NSRect screenFrame = [screen frame];
          screenDesc = [screen deviceDescription];
          if (NSMouseInRect(mouseLocation, screenFrame, NO))
              break;
      }

      if (screen)
      {
          kern_return_t err;
          CGDirectDisplayID displayID = (CGDirectDisplayID) [[screenDesc objectForKey:@"NSScreenNumber"] pointerValue];
          task_port_t taskPort = mach_task_self();
          io_service_t displayServicePort = CGDisplayIOServicePort(displayID);

          io_connect_t displayConnection = 0;
          err = IOFramebufferOpen(displayServicePort, taskPort, kIOFBSharedConnectType, &displayConnection);
          if (KERN_SUCCESS == err)
          {
              union {
                  vm_address_t vm_ptr;
                  StdFBShmem_t *fbshmem;
              } cursorInfo;
              vm_size_t size;

              err = IOConnectMapMemory(displayConnection, kIOFBCursorMemory, taskPort,
                                       &cursorInfo.vm_ptr, &size,
                                       kIOMapAnywhere | kIOMapDefaultCache | kIOMapReadOnly);
              if (KERN_SUCCESS == err)
              {
                  // for some reason, cursor data is not always in the same format as the frame buffer.
                  // For this reason, we need some way to detect which structure we should be reading.
                  QByteArray pixData((const char*)cursorInfo.fbshmem->cursor.rgb24.image[currentFrame],
                                     m_mouseInfo.currentSize.width() * m_mouseInfo.currentSize.height() * 4);

                  IOConnectUnmapMemory(displayConnection, kIOFBCursorMemory, taskPort, cursorInfo.vm_ptr);
              } // IOConnectMapMemory
              else
                  qDebug() << "IOConnectMapMemory Failed:" << err;

              IOServiceClose(displayConnection);
          } // IOServiceOpen
          else
              qDebug() << "IOFramebufferOpen Failed:" << err;
      } // if screen

      [autoReleasePool release];

    My question is: how can I detect whether the cursor is in a different format from the frame buffer, and where can I read the actual pixel data? The bm18Cursor structure contains a mask section, but it's not in the right place for me to be reading it using the code above. Cheers,

    Read the article

  • Parallelism in .NET – Part 2, Simple Imperative Data Parallelism

    - by Reed
    In my discussion of Decomposition of the problem space, I mentioned that Data Decomposition is often the simplest abstraction to use when trying to parallelize a routine. If a problem can be decomposed based off the data, we will often want to use what MSDN refers to as Data Parallelism as our strategy for implementing our routine. The Task Parallel Library in .NET 4 makes implementing Data Parallelism, for most cases, very simple. Data Parallelism is the main technique we use to parallelize a routine which can be decomposed based off data. Data Parallelism refers to taking a single collection of data, and having a single operation be performed concurrently on elements in the collection. One side note here: Data Parallelism is also sometimes referred to as the Loop Parallelism Pattern or Loop-level Parallelism. In general, for this series, I will try to use the terminology used in the MSDN Documentation for the Task Parallel Library. This should make it easier to investigate these topics in more detail.

    Once we've determined we have a problem that, potentially, can be decomposed based on data, implementation using Data Parallelism in the TPL is quite simple. Let's take our example from the Data Decomposition discussion – a simple contrast stretching filter. Here, we have a collection of data (pixels), and we need to run a simple operation on each element of the pixel data. Once we know the minimum and maximum values, we most likely would have some simple code like the following:

      for (int row=0; row < pixelData.GetUpperBound(0); ++row)
      {
          for (int col=0; col < pixelData.GetUpperBound(1); ++col)
          {
              pixelData[row, col] = AdjustContrast(pixelData[row, col], minPixel, maxPixel);
          }
      }

    This simple routine loops through a two dimensional array of pixelData, and calls the AdjustContrast routine on each pixel. As I mentioned, when you're decomposing a problem space, most iteration statements are potentially candidates for data decomposition. Here, we're using two for loops – one looping through rows in the image, and a second nested loop iterating through the columns. We then perform one, independent operation on each element based on those loop positions. This is a prime candidate – we have no shared data, no dependencies on anything but the pixel which we want to change. Since we're using a for loop, we can easily parallelize this using the Parallel.For method in the TPL:

      Parallel.For(0, pixelData.GetUpperBound(0), row =>
      {
          for (int col=0; col < pixelData.GetUpperBound(1); ++col)
          {
              pixelData[row, col] = AdjustContrast(pixelData[row, col], minPixel, maxPixel);
          }
      });

    Here, by simply changing our first for loop to a call to Parallel.For, we can parallelize this portion of our routine. Parallel.For works, as do many methods in the TPL, by creating a delegate and using it as an argument to a method. In this case, our for loop iteration block becomes a delegate created via a lambda expression. This lets you write code that, superficially, looks similar to the familiar for loop, but functions quite differently at runtime.

    We could easily do this to our second for loop as well, but that may not be a good idea. There is a balance to be struck when writing parallel code. We want to have enough work items to keep all of our processors busy, but the more we partition our data, the more overhead we introduce. In this case, we have an image of data – most likely hundreds of pixels in both dimensions. By just parallelizing our first loop, each row of pixels can be run as a single task. With hundreds of rows of data, we are providing fine enough granularity to keep all of our processors busy. If we parallelize both loops, we're potentially creating millions of independent tasks. This introduces extra overhead with no extra gain, and will actually reduce our overall performance. This leads to my first guideline when writing parallel code: Partition your problem into enough tasks to keep each processor busy throughout the operation, but not more than necessary to keep each processor busy.

    Also note that I parallelized the outer loop. I could have just as easily partitioned the inner loop. However, partitioning the inner loop would have led to many more discrete work items, each with a smaller amount of work (operate on one pixel instead of one row of pixels). My second guideline when writing parallel code reflects this: Partition your problem in a way to place the most work possible into each task. This typically means, in practice, that you will want to parallelize the routine at the "highest" point possible in the routine, typically the outermost loop. If you're looking at parallelizing methods which call other methods, you'll want to try to partition your work high up in the stack – as you get into lower level methods, the performance impact of parallelizing your routines may not overcome the overhead introduced.

    Parallel.For works great for situations where we know the number of elements we're going to process in advance. If we're iterating through an IList<T> or an array, this is a typical approach. However, there are other iteration statements common in C#. In many situations, we'll use foreach instead of a for loop. This can be more understandable and easier to read, but also has the advantage of working with collections which only implement IEnumerable<T>, where we do not know the number of elements involved in advance. As an example, let's take the following situation. Say we have a collection of Customers, and we want to iterate through each customer, check some information about the customer, and if a certain case is met, send an email to the customer and update our instance to reflect this change. Normally, this might look something like:

      foreach(var customer in customers)
      {
          // Run some process that takes some time...
          DateTime lastContact = theStore.GetLastContact(customer);
          TimeSpan timeSinceContact = DateTime.Now - lastContact;

          // If it's been more than two weeks, send an email, and update...
          if (timeSinceContact.Days > 14)
          {
              theStore.EmailCustomer(customer);
              customer.LastEmailContact = DateTime.Now;
          }
      }

    Here, we're doing a fair amount of work for each customer in our collection, but we don't know how many customers exist. If we assume that theStore.GetLastContact(customer) and theStore.EmailCustomer(customer) are both side-effect free, thread safe operations, we could parallelize this using Parallel.ForEach:

      Parallel.ForEach(customers, customer =>
      {
          // Run some process that takes some time...
          DateTime lastContact = theStore.GetLastContact(customer);
          TimeSpan timeSinceContact = DateTime.Now - lastContact;

          // If it's been more than two weeks, send an email, and update...
          if (timeSinceContact.Days > 14)
          {
              theStore.EmailCustomer(customer);
              customer.LastEmailContact = DateTime.Now;
          }
      });

    Just like Parallel.For, we rework our loop into a method call accepting a delegate created via a lambda expression. This keeps our new code very similar to our original iteration statement; however, this will now execute in parallel. The same guidelines apply with Parallel.ForEach as with Parallel.For. The other iteration statements, do and while, do not have direct equivalents in the Task Parallel Library. These, however, are very easy to implement using Parallel.ForEach and the yield keyword. Most applications can benefit from implementing some form of Data Parallelism. Iterating through collections and performing "work" is a very common pattern in nearly every application. When the problem can be decomposed by data, we often can parallelize the workload by merely changing foreach statements to Parallel.ForEach method calls, and for loops to Parallel.For method calls. Any time your program operates on a collection, and does a set of work on each item in the collection where that work is not dependent on other information, you very likely have an opportunity to parallelize your routine.

    Read the article

< Previous Page | 19 20 21 22 23 24 25 26 27 28 29 30  | Next Page >