Search Results

Search found 2086 results on 84 pages for 'pixel shader'.

Page 45/84 | < Previous Page | 41 42 43 44 45 46 47 48 49 50 51 52  | Next Page >

  • PNG composition using GD and PHP

    - by Dominic
    I am trying to take a rectangular PNG and add depth using GD by duplicating the background and moving it down 1 pixel and right 1 pixel. I am trying to preserve a transparent background as well. I am having a bunch of trouble with preserving the transparency. Any help would be greatly appreciated. Thanks!

        $obj = imagecreatefrompng('rectangle.png');
        $depth = 5;
        $obj_width = imagesx($obj);
        $obj_height = imagesy($obj);
        imagesavealpha($obj, true);

        for ($i = 1; $i <= $depth; $i++) {
            $layer = imagecreatefrompng('rectangle.png');
            imagealphablending($layer, false);
            imagesavealpha($layer, true);

            $new_obj = imagecreatetruecolor($obj_width + $i, $obj_height + $i);
            $new_obj_width = imagesx($new_obj);
            $new_obj_height = imagesy($new_obj);
            imagealphablending($new_obj, false);
            imagesavealpha($new_obj, true);

            $trans_color = imagecolorallocatealpha($new_obj, 0, 0, 0, 127);
            imagefill($new_obj, 0, 0, $trans_color);

            imagecopyresampled($new_obj, $layer, $i, $i, 0, 0, $obj_width, $obj_height, $obj_width, $obj_height);
            //imagesavealpha($new_obj, true);
            //imagesavealpha($obj, true);
        }

        header("Content-type: image/png");
        imagepng($new_obj);
        imagedestroy($new_obj);

    Read the article

  • How to convert a byte array of 19200 bytes in size where each byte represents 4 pixels (2 bits per pixel) to a bitmap

    - by Klinger
    I am communicating with an instrument (remote controlling it) and one of the things I need to do is draw the instrument's screen. In order to get the screen I issue a command and the instrument replies with an array of bytes that represents the screen. Below is what the instrument manual has to say about converting the response to the actual screen: "The command retrieves the framebuffer data used for the display. It is 19200 bytes in size, 2 bits per pixel, 4 pixels per byte, arranged as 320x240 characters. The data is sent in RLE encoded form. To convert this data into a BMP for use in Windows, it needs to be turned into 4BPP. Also note that BMP files are upside down relative to this data, i.e. the top display line is the last line in the BMP." I managed to unpack the data, but now I am stuck on how to actually go from the unpacked byte array to a bitmap. My background on this is pretty close to zero and my searches have not revealed much either. I am looking for directions and/or articles I could use to help me understand how to get this done. Any code or even pseudo code would also help. :-) So, just to summarize it all: how do I convert a byte array of 19200 bytes in size, where each byte represents 4 pixels (2 bits per pixel), to a bitmap arranged as 320x240? Thanks in advance.
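    A minimal sketch of the expansion step, written here in C++ (the question does not name a language). It assumes the RLE decoding has already been done, that the 4 pixels in each byte are packed most-significant bits first, and that the four 2-bit levels map to an evenly spaced grayscale palette; the row flip follows the manual's note that BMPs are stored bottom-up. The resulting 32-bit BGRA buffer can then be handed to whatever bitmap class is available.

        #include <cstdint>
        #include <vector>

        // Expand a 19200-byte, 2-bits-per-pixel framebuffer (320x240, 4 pixels per
        // byte) into a 32-bit BGRA buffer, flipping rows to BMP's bottom-up order.
        std::vector<uint8_t> ExpandFramebuffer(const std::vector<uint8_t>& packed,
                                               int width = 320, int height = 240)
        {
            static const uint8_t levels[4] = { 0, 85, 170, 255 };  // assumed grayscale palette
            std::vector<uint8_t> bgra(width * height * 4);
            const int bytesPerRow = width / 4;                     // 80 bytes per row

            for (int y = 0; y < height; ++y) {
                int dstY = height - 1 - y;                         // flip vertically for BMP
                for (int x = 0; x < width; ++x) {
                    uint8_t packedByte = packed[y * bytesPerRow + x / 4];
                    int shift = 6 - 2 * (x % 4);                   // MSB-first packing assumed
                    uint8_t value = levels[(packedByte >> shift) & 0x03];
                    uint8_t* dst = &bgra[(dstY * width + x) * 4];
                    dst[0] = dst[1] = dst[2] = value;              // B, G, R
                    dst[3] = 255;                                  // A
                }
            }
            return bgra;
        }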

    Read the article

  • XAML PixelGrid to Prevent Blurry Text

    - by Bodekaer
    Hi, just wanted to share a small Grid I created, which can help prevent blurry text etc., as it adjusts the margin of the Grid to ensure a pixel-perfect position and size of the grid. Works great e.g. inside StackPanels with auto-height Labels/TextBlocks. Here is the code:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.Windows;
        using System.Windows.Controls;
        using System.Windows.Media;

        namespace Controls
        {
            class PixelGrid : Grid
            {
                protected override void OnRenderSizeChanged(SizeChangedInfo sizeInfo)
                {
                    // POSITION
                    Vector position = VisualTreeHelper.GetOffset(this);
                    double targetX = Math.Round(position.X, MidpointRounding.ToEven);
                    double targetY = Math.Round(position.Y, MidpointRounding.ToEven);
                    double marginLeft = targetX - position.X;
                    double marginTop = targetY - position.Y;

                    // SIZE
                    double targetHeight = Math.Round(sizeInfo.NewSize.Height, MidpointRounding.ToEven);
                    double targetWidth = Math.Round(sizeInfo.NewSize.Width, MidpointRounding.ToEven);
                    double marginBottom = targetHeight - sizeInfo.NewSize.Height;
                    double marginRight = targetWidth - sizeInfo.NewSize.Width;

                    // Adjust margin to ensure pixel width
                    this.Margin = new Thickness(marginLeft, marginTop, marginRight, marginBottom);

                    base.OnRenderSizeChanged(sizeInfo);
                }
            }
        }

    Read the article

  • Locking a GDI+ Bitmap in Native C++?

    - by user146780
    I can find many examples of how to do this in managed C++ but none for unmanaged. I want to get all the pixel data as efficiently as possible, but I need more information about the Scan0 side of things so I can properly iterate through the pixel data and get each RGBA value from it. Right now I have this:

        Bitmap *b = new Bitmap(filename);
        if (b == NULL) {
            return 0;
        }
        UINT w, h;
        w = b->GetWidth();
        h = b->GetHeight();
        Rect *r = new Rect(0, 0, w, h);
        BitmapData *lockdat;
        b->LockBits(r, ImageLockModeRead, PixelFormatDontCare, lockdat);
        delete(r);
        if (w == 0 && h == 0) {
            return 0;
        }
        Color c;
        std::vector<GLubyte> pdata(w * h * 4, 0.0);
        for (unsigned int i = 0; i < h; i++) {
            for (unsigned int j = 0; j < w; j++) {
                b->GetPixel(j, i, &c);
                pdata[i * 4 * w + j * 4 + 0] = (GLubyte) c.GetR();
                pdata[i * 4 * w + j * 4 + 1] = (GLubyte) c.GetG();
                pdata[i * 4 * w + j * 4 + 2] = (GLubyte) c.GetB();
                pdata[i * 4 * w + j * 4 + 3] = (GLubyte) c.GetA();
            }
        }
        delete(b);
        return CreateTexture(pdata, w, h);

    How do I use lockdat to do the equivalent of GetPixel? Thanks
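    A hedged sketch of one way the LockBits route might look in native C++. It assumes GDI+ has already been started with GdiplusStartup, requests a concrete 32-bpp format instead of PixelFormatDontCare, and passes a BitmapData value by address rather than an uninitialized pointer; the RGBA output layout matches what the GetPixel loop above produces.

        #include <windows.h>
        #include <gdiplus.h>
        #include <vector>
        using namespace Gdiplus;

        std::vector<BYTE> ReadPixelsRGBA(Bitmap& bmp)
        {
            const UINT w = bmp.GetWidth();
            const UINT h = bmp.GetHeight();
            Rect rect(0, 0, w, h);

            BitmapData data;  // a value on the stack, not an uninitialized pointer
            bmp.LockBits(&rect, ImageLockModeRead, PixelFormat32bppARGB, &data);

            std::vector<BYTE> rgba(w * h * 4);
            for (UINT y = 0; y < h; ++y) {
                // Scan0 points at the first row; Stride is the byte width of one row
                // (it may be wider than w * 4, and can be negative for bottom-up data).
                const BYTE* row = static_cast<const BYTE*>(data.Scan0) + (INT_PTR)y * data.Stride;
                for (UINT x = 0; x < w; ++x) {
                    const BYTE* px = row + x * 4;   // memory order is B, G, R, A
                    BYTE* out = &rgba[(y * w + x) * 4];
                    out[0] = px[2];                 // R
                    out[1] = px[1];                 // G
                    out[2] = px[0];                 // B
                    out[3] = px[3];                 // A
                }
            }
            bmp.UnlockBits(&data);
            return rgba;
        }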

    Read the article

  • How to calculate the y-pixels of someone's weight on a graph? (math + programming question)

    - by RexOnRoids
    I'm not as smart as some of you geniuses, so I need some help from a math whiz. My app draws a graph of the user's weight over time. I need a surefire way to always get the right pixel position to draw the weight point at for a given weight. For example, say I want to plot the weight 80.0 (kg) on the graph when the range of weights is 80.0 to 40.0 kg. I want to be able to plug in the weight (given I also know the highest and lowest weights in the range) and get the pixel result 400 (y) (for the top of the graph). The graph is 300 pixels high (starts at 100 and ends at 400). The highest weight, 80 kg, would be plotted at 400 while the lowest weight, 40 kg, would be plotted at 100, and the intermediate weights should be plotted appropriately. I tried this but it does not work:

        -(float)weightToPixel:(float)theWeight
        {
            float graphMaxY = 400;  // The TOP of the graph
            float graphMinY = 100;  // The BOTTOM of the graph
            float yOffset = 100;    // Graph itself is offset 100 pixels in the Y direction
            float coordDiff = graphMaxY - graphMinY;                    // The size in pixels of the graph
            float weightDiff = self.highestWeight - self.lowestWeight;  // The weight gap
            float pixelIncrement = coordDiff / weightDiff;
            float weightY = (theWeight * pixelIncrement) - (coordDiff - yOffset);  // The return value
            return weightYpixel;
        }
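    For reference, a sketch of the linear interpolation this question is after, written here as a small C++ function (the same arithmetic works in any language). The idea is to express the weight as a fraction of the way between the lowest and highest weights, then scale that fraction over the pixel range; with lowest = 40, highest = 80, graphMinY = 100 and graphMaxY = 400, a weight of 80 maps to 400 and a weight of 40 maps to 100. If the drawing coordinate system has y increasing downward and heavier weights should appear higher on screen, invert the fraction.

        // Map a weight in [lowestWeight, highestWeight] to a y value in [graphMinY, graphMaxY].
        float WeightToPixel(float weight, float lowestWeight, float highestWeight,
                            float graphMinY = 100.0f, float graphMaxY = 400.0f)
        {
            float fraction = (weight - lowestWeight) / (highestWeight - lowestWeight);
            return graphMinY + fraction * (graphMaxY - graphMinY);
        }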

    Read the article

  • OpenGL GL_LINES endpoints not joining

    - by old-school rules
    I'm having problems with the GL_LINES block... the lines in the sample below do not connect at the ends (although sometimes it randomly decides to connect a corner or two). Instead, the endpoints come within 1 pixel of one another (leaving a corner that is not fully squared, if that makes sense). It is a simple block to draw a solid 1-pixel rectangle.

        glBegin(GL_LINES);
        glColor3b(cr, cg, cb);
        glVertex3i(pRect->left,  pRect->top,    0);
        glVertex3i(pRect->right, pRect->top,    0);
        glVertex3i(pRect->right, pRect->top,    0);
        glVertex3i(pRect->right, pRect->bottom, 0);
        glVertex3i(pRect->right, pRect->bottom, 0);
        glVertex3i(pRect->left,  pRect->bottom, 0);
        glVertex3i(pRect->left,  pRect->bottom, 0);
        glVertex3i(pRect->left,  pRect->top,    0);
        glEnd();

    The sample below seems to correct the problem, giving me sharp, square corners; but I can't accept it because I don't know why it's acting this way...

        glBegin(GL_LINES);
        glColor3b(cr, cg, cb);
        glVertex3i(pRect->left,      pRect->top,        0);
        glVertex3i(pRect->right + 1, pRect->top,        0);
        glVertex3i(pRect->right,     pRect->top,        0);
        glVertex3i(pRect->right,     pRect->bottom + 1, 0);
        glVertex3i(pRect->right,     pRect->bottom,     0);
        glVertex3i(pRect->left - 1,  pRect->bottom,     0);
        glVertex3i(pRect->left,      pRect->bottom,     0);
        glVertex3i(pRect->left,      pRect->top - 1,    0);
        glEnd();

    Any OpenGL programmers out there that can help, I would appreciate it :)
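    A possible explanation and a hedged sketch of a common workaround. With a pixel-aligned orthographic projection, integer vertex coordinates fall exactly on pixel boundaries, so OpenGL's diamond-exit rule makes the endpoint pixel of each segment effectively implementation-dependent, which is why corners appear or disappear at random. The classic advice for pixel-exact 2D drawing is to nudge the modelview so integer coordinates land on pixel centers, and to close the outline with GL_LINE_LOOP instead of four separate GL_LINES segments; the 0.375 offset below is the traditional value, not a requirement.

        glMatrixMode(GL_MODELVIEW);
        glPushMatrix();
        glTranslatef(0.375f, 0.375f, 0.0f);   // land integer coordinates on pixel centers

        glBegin(GL_LINE_LOOP);                // closes the rectangle outline for you
        glColor3b(cr, cg, cb);
        glVertex3i(pRect->left,  pRect->top,    0);
        glVertex3i(pRect->right, pRect->top,    0);
        glVertex3i(pRect->right, pRect->bottom, 0);
        glVertex3i(pRect->left,  pRect->bottom, 0);
        glEnd();

        glPopMatrix();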

    Read the article

  • Kill overscan for ATI drivers?

    - by joeforker
    I have a dual-boot Windows 7 64-bit / Linux 64-bit machine that uses ATI's Catalyst drivers. Sometimes I attach it to a 1080p LCD TV over HDMI. ATI is daft enough to add a border to account for overscan, but I'm using an LCD TV: there is no overscan, and with the border the picture looks bad because the pixel mapping is no longer 1:1. How do I disable this driver "feature" in Windows? In Linux?

    Read the article

  • Insert PDF image in MS Word

    - by serhio
    Hello. I have a .doc which I will convert to PDF, and in this .doc I have an image. When I convert the doc to PDF and then zoom in, the image becomes ugly and pixelated. I found a tool that converted my bitmap .png image to a vector .pdf image. Now how can I import that PDF image into MS Word (which I will finally convert to PDF once again)?

    Read the article

  • Tiff not displaying correctly on Mac

    - by user348935
    I have a collection of .tif files, but when I open them on my Mac (OS X 10.5) they show up as solid black and I don't know why. Upon further inspection at really high brightness, some out-of-focus objects are visible. It looks as if I am getting the first couple of bits of each pixel but not the entire range of values. Thanks.

    Read the article

  • Automator for Vista

    - by allindal
    Is there a program for Vista similar to the Mac application Automator? Specifically, I'm looking for a Vista app that can perform timed clicks. For example, in Automator I can specify which pixel to click and how often, or a series of clicks in different places. I'm not looking for an "intelligent" clicker, just a purely GUI-programmed clicker. I also need it to work with and record the keyboard. From reading other Super User posts I can see that the command prompt doesn't have an easy way to do this.

    Read the article

  • Gimp: Color to alpha

    - by MTilsted
    I have an image where I want all the pixels of a specific color converted to transparent pixels. The operation should not change the color/alpha value of any pixel that doesn't match the color exactly. How do I do that? At first I thought I could use Colors > "Color to Alpha", but that doesn't work because it changes the color of all pixels (it adds an alpha value to every pixel). Using GIMP 2.6.11 on Linux.

    Read the article

  • Why is changing displays slow?

    - by Josh Bronson
    I've had many laptops over the course of many years, and while many things have sped up, one thing remains as slow today as it was years ago: (dis)connecting an external display. What's taking it so long to detect the new display and update the pixel buffers? I use Macs primarily, but I think this is equally slow on other platforms.

    Read the article

  • Nvidia Drivers on Debian / Lenny (Stable) -> Installation successful -> Monitor goes black

    - by David
    I have successfully installed the proprietary drivers for my Nvidia (GeForce 7300 GT) graphics card on Debian/Lenny. I know it's not the best way to choose for driver installation (see this link: http://wiki.debian.org/NvidiaGraphicsDrivers#non-freedrivers ), but the two ways seem to be possible for me (nvidia-kernel module compilation). Now the problem is that the monitor goes black and the power light starts blinking after I launch the X server. Here is a short look at the logs (output truncated from /var/log/Xorg.0.log):

        (II) Setting vga for screen 0.
        (**) NVIDIA(0): Depth 24, (--) framebuffer bpp 32
        (==) NVIDIA(0): RGB weight 888
        (==) NVIDIA(0): Default visual is TrueColor
        (==) NVIDIA(0): Using gamma correction (1.0, 1.0, 1.0)
        (**) Jul 28 17:10:11 NVIDIA(0): Enabling RENDER acceleration
        (II) Jul 28 17:10:11 NVIDIA(0): Support for GLX with the Damage and Composite X extensions is
        (II) Jul 28 17:10:11 NVIDIA(0):     enabled.
        (II) Jul 28 17:10:11 NVIDIA(0): NVIDIA GPU GeForce 7300 GT (G73) at PCI:1:0:0 (GPU-0)
        (--) Jul 28 17:10:11 NVIDIA(0): Memory: 262144 kBytes
        (--) Jul 28 17:10:11 NVIDIA(0): VideoBIOS: 05.73.22.25.00
        (II) Jul 28 17:10:11 NVIDIA(0): Detected PCI Express Link width: 16X
        (--) Jul 28 17:10:11 NVIDIA(0): Interlaced video modes are supported on this GPU
        (--) Jul 28 17:10:11 NVIDIA(0): Connected display device(s) on GeForce 7300 GT at PCI:1:0:0:
        (--) Jul 28 17:10:11 NVIDIA(0):     Samsung SyncMaster (CRT-0)
        (--) Jul 28 17:10:11 NVIDIA(0):     Samsung SyncMaster (DFP-0)
        (--) Jul 28 17:10:11 NVIDIA(0): Samsung SyncMaster (CRT-0): 400.0 MHz maximum pixel clock
        (--) Jul 28 17:10:11 NVIDIA(0): Samsung SyncMaster (DFP-0): 165.0 MHz maximum pixel clock
        (--) Jul 28 17:10:11 NVIDIA(0): Samsung SyncMaster (DFP-0): Internal Single Link TMDS
        (II) Jul 28 17:10:11 NVIDIA(0): Assigned Display Device: CRT-0
        (==) Jul 28 17:10:11 NVIDIA(0):
        (==) Jul 28 17:10:11 NVIDIA(0): No modes were requested; the default mode "nvidia-auto-select"
        (==) Jul 28 17:10:11 NVIDIA(0):     will be used as the requested mode.
        (==) Jul 28 17:10:11 NVIDIA(0):
        (II) Jul 28 17:10:11 NVIDIA(0): Validated modes:
        (II) Jul 28 17:10:11 NVIDIA(0):     "nvidia-auto-select"
        (II) Jul 28 17:10:11 NVIDIA(0): Virtual screen size determined to be 1280 x 1024
        (--) Jul 28 17:10:11 NVIDIA(0): DPI set to (85, 86); computed from "UseEdidDpi" X config
        (--) Jul 28 17:10:11 NVIDIA(0):     option
        (==) Jul 28 17:10:11 NVIDIA(0): Enabling 32-bit ARGB GLX visuals.
        (--) Depth 24 pixmap format is 32 bpp

    Here is the complete /etc/X11/xorg.conf file as generated by nvidia-xconfig:

        # nvidia-xconfig: X configuration file generated by nvidia-xconfig
        # nvidia-xconfig: version 256.35 (buildmeister@builder101) Wed Jun 16 19:25:59 PDT 2010

        Section "ServerLayout"
            Identifier     "Layout0"
            Screen      0  "Screen0"
            InputDevice    "Keyboard0" "CoreKeyboard"
            InputDevice    "Mouse0" "CorePointer"
        EndSection

        Section "Files"
        EndSection

        Section "Module"
            Load "dbe"
            Load "extmod"
            Load "type1"
            Load "freetype"
            Load "glx"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier     "Mouse0"
            Driver         "mouse"
            Option         "Protocol" "auto"
            Option         "Device" "/dev/psaux"
            Option         "Emulate3Buttons" "no"
            Option         "ZAxisMapping" "4 5"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier     "Keyboard0"
            Driver         "kbd"
        EndSection

        Section "Monitor"
            Identifier     "Monitor0"
            VendorName     "Unknown"
            ModelName      "Unknown"
            Hor

    Read the article

  • Upgrading PS1 Light Gun [on hold]

    - by Nathan Taylor
    Is there any way to upgrade the retro G-Con light gun for the PS1 to allow it to interact with HD TVs? I am aware that they were designed purely for tube (CRT) TVs, but I would be happy to know of any hardware that could convert the light so the gun registers against the pixels of an LCD TV. If not, is there any other light gun that would work with PS1 games but has newer hardware that can interact with a higher-resolution LCD TV?

    Read the article

  • Changing the size of the Windows 7 taskbar

    - by dertoni
    Is there a way to change the size of the Windows 7 taskbar? Internal or with the help of outside programs, both welcome. Something like the Mac OS X Dock zooming effect would be OK/nice too. Edit: I'm essentially looking for a way to shrink it, because my laptop does not have a big screen, so every pixel is valuable.

    Read the article

  • Virtual PC on Windows 7 doesn't have an adjustment for video memory size?

    - by Jian Lin
    The current VirtualBox has a place where the video memory size can be set by the user. It seems that Windows 7's Virtual PC doesn't have one? Will it auto-adjust? But what if the screen size is 800x600 and the user resizes it to 1600x1200 -- then the original video memory may not be enough; will that cause any issues? I do sometimes see blinking regions of random pixels on the VPC's screen... maybe that is caused by not having enough video RAM?

    Read the article

  • Why is my unsafe code block slower than my safe code?

    - by jomtois
    I am attempting to write some code that will expediently process video frames. I am receiving the frames as a System.Windows.Media.Imaging.WriteableBitmap. For testing purposes, I am just applying a simple threshold filter that will process a BGRA format image and assign each pixel to either be black or white based on the average of the BGR pixels. Here is my "safe" version:

        public static void ApplyFilter(WriteableBitmap Bitmap, byte Threshold)
        {
            // Let's just make this work for this format
            if (Bitmap.Format != PixelFormats.Bgr24 && Bitmap.Format != PixelFormats.Bgr32)
            {
                return;
            }

            // Calculate the number of bytes per pixel (should be 4 for this format).
            var bytesPerPixel = (Bitmap.Format.BitsPerPixel + 7) / 8;

            // Stride is bytes per pixel times the number of pixels.
            // Stride is the byte width of a single rectangle row.
            var stride = Bitmap.PixelWidth * bytesPerPixel;

            // Create a byte array for the entire size of bitmap.
            var arraySize = stride * Bitmap.PixelHeight;
            var pixelArray = new byte[arraySize];

            // Copy all pixels into the array
            Bitmap.CopyPixels(pixelArray, stride, 0);

            // Loop through array and change pixels to black or white based on threshold
            for (int i = 0; i < pixelArray.Length; i += bytesPerPixel)
            {
                // i=B, i+1=G, i+2=R, i+3=A
                var brightness = (byte)((pixelArray[i] + pixelArray[i + 1] + pixelArray[i + 2]) / 3);
                var toColor = byte.MinValue; // Black
                if (brightness >= Threshold)
                {
                    toColor = byte.MaxValue; // White
                }
                pixelArray[i] = toColor;
                pixelArray[i + 1] = toColor;
                pixelArray[i + 2] = toColor;
            }

            Bitmap.WritePixels(new Int32Rect(0, 0, Bitmap.PixelWidth, Bitmap.PixelHeight), pixelArray, stride, 0);
        }

    Here is what I think is a direct translation using an unsafe code block and the WriteableBitmap back buffer instead of the fore buffer:

        public static void ApplyFilterUnsafe(WriteableBitmap Bitmap, byte Threshold)
        {
            // Let's just make this work for this format
            if (Bitmap.Format != PixelFormats.Bgr24 && Bitmap.Format != PixelFormats.Bgr32)
            {
                return;
            }

            var bytesPerPixel = (Bitmap.Format.BitsPerPixel + 7) / 8;

            Bitmap.Lock();

            unsafe
            {
                // Get a pointer to the back buffer.
                byte* pBackBuffer = (byte*)Bitmap.BackBuffer;

                for (int i = 0; i < Bitmap.BackBufferStride * Bitmap.PixelHeight; i += bytesPerPixel)
                {
                    var pCopy = pBackBuffer;
                    var brightness = (byte)((*pBackBuffer + *pBackBuffer++ + *pBackBuffer++) / 3);
                    pBackBuffer++;
                    var toColor = brightness >= Threshold ? byte.MaxValue : byte.MinValue;
                    *pCopy = toColor;
                    *++pCopy = toColor;
                    *++pCopy = toColor;
                }
            }

            // Bitmap.AddDirtyRect(new Int32Rect(0,0, Bitmap.PixelWidth, Bitmap.PixelHeight));
            Bitmap.Unlock();
        }

    This is my first foray into unsafe code blocks and pointers, so maybe the logic is not optimal. I have tested both blocks of code on the same WriteableBitmaps using:

        var threshold = Convert.ToByte(op.Result);
        var copy2 = copyFrame.Clone();

        Stopwatch stopWatch = new Stopwatch();
        stopWatch.Start();
        BinaryFilter.ApplyFilterUnsafe(copyFrame, threshold);
        stopWatch.Stop();
        var unsafesecs = stopWatch.ElapsedMilliseconds;

        stopWatch.Reset();
        stopWatch.Start();
        BinaryFilter.ApplyFilter(copy2, threshold);
        stopWatch.Stop();

        Debug.WriteLine(string.Format("Unsafe: {1}, Safe: {0}", stopWatch.ElapsedMilliseconds, unsafesecs));

    So I am analyzing the same image. A test run of an incoming stream of video frames: Unsafe: 110, Safe: 53 | Unsafe: 136, Safe: 42 | Unsafe: 106, Safe: 36 | Unsafe: 95, Safe: 43 | Unsafe: 98, Safe: 41 | Unsafe: 88, Safe: 36 | Unsafe: 129, Safe: 65 | Unsafe: 100, Safe: 47 | Unsafe: 112, Safe: 50 | Unsafe: 91, Safe: 33 | Unsafe: 118, Safe: 42 | Unsafe: 103, Safe: 80 | Unsafe: 104, Safe: 34 | Unsafe: 101, Safe: 36 | Unsafe: 154, Safe: 83 | Unsafe: 134, Safe: 46 | Unsafe: 113, Safe: 76 | Unsafe: 117, Safe: 57 | Unsafe: 90, Safe: 41 | Unsafe: 156, Safe: 35.

    Why is my unsafe version always slower? Is it due to using the back buffer? Or am I doing something wrong? Thanks
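    Two details in the unsafe loop are worth flagging, with a hedged sketch of an explicitly indexed alternative below (written in C++ here; the arithmetic is identical in C#). First, the loop bound is BackBufferStride * PixelHeight, so any padding bytes at the end of each row become extra iterations that the safe version never performs. Second, the pointer is advanced only three bytes per iteration (two post-increments inside the brightness expression plus one explicit increment) instead of four, so the reads and writes drift off the BGRA pixel boundaries after the first pixel, and the first byte is added twice while the third channel is dropped. Whether these explain the whole timing gap is uncertain, but an indexed loop that walks rows by stride avoids both problems:

        #include <cstdint>

        // Threshold a BGRA/BGRX buffer in place, walking rows by stride so row
        // padding is skipped, and reading each of B, G, R exactly once.
        void Threshold(uint8_t* base, int width, int height, int stride,
                       int bytesPerPixel, uint8_t threshold)
        {
            for (int y = 0; y < height; ++y) {
                uint8_t* row = base + y * stride;
                for (int x = 0; x < width; ++x) {
                    uint8_t* px = row + x * bytesPerPixel;   // B, G, R, (A/padding)
                    uint8_t brightness = (uint8_t)((px[0] + px[1] + px[2]) / 3);
                    uint8_t value = brightness >= threshold ? 255 : 0;
                    px[0] = px[1] = px[2] = value;
                }
            }
        }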

    Read the article

  • Qt 4.6 OpenGL GLSL

    - by Zeke
    I'm trying to find something like NeHe's tutorials for Qt that are all in GLSL, because, let's face it, old fixed-function OpenGL is dead and shaders are the only way now. And with Qt 4.6 they introduced the QMatrix4x4, QVector3D, and shader classes. But I cannot find any tutorials for this. All the ones I do find use SDL and/or GLUT (which are just plain useless to me).

    Read the article

  • Offloading to HLSL/GPU without displaying?

    - by George R
    As far as I know, certain mathematical functions like FFTs and Perlin noise can be much faster when done on the GPU as a pixel shader. My question is: if I wanted to exploit this to calculate results and stream them to bitmaps, could I do it without needing to actually display anything in Silverlight or something? More specifically, I was thinking of using this for large terrain generation involving lots of Perlin and other noise, and post-processing like high passes and deriving normals from heightmaps, etc.
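    The question is framed around WPF/Silverlight pixel shaders, but the underlying "run a shader with nothing on screen" idea is render-to-texture. A hedged sketch of the equivalent on the OpenGL side (offered as an analogy, not the WPF API): attach a texture to an offscreen framebuffer object, draw a full-screen quad with the noise/filter shader, and read the result back into a CPU-side buffer. A working OpenGL context and an extension loader such as GLEW are assumed; the shader and quad drawing are left as a comment.

        #include <GL/glew.h>
        #include <vector>

        std::vector<unsigned char> RenderNoiseOffscreen(int width, int height)
        {
            GLuint tex = 0, fbo = 0;

            // Target texture that will receive the shader output.
            glGenTextures(1, &tex);
            glBindTexture(GL_TEXTURE_2D, tex);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                         GL_RGBA, GL_UNSIGNED_BYTE, nullptr);

            // Offscreen framebuffer with the texture as its color attachment.
            glGenFramebuffers(1, &fbo);
            glBindFramebuffer(GL_FRAMEBUFFER, fbo);
            glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                   GL_TEXTURE_2D, tex, 0);

            glViewport(0, 0, width, height);
            // ... bind the noise/heightmap shader and draw a full-screen quad here ...

            // Pull the rendered pixels back for use as a bitmap or heightmap.
            std::vector<unsigned char> pixels(width * height * 4);
            glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());

            glBindFramebuffer(GL_FRAMEBUFFER, 0);
            glDeleteFramebuffers(1, &fbo);
            glDeleteTextures(1, &tex);
            return pixels;
        }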

    Read the article

  • C# or C++ game: many 16-color images loaded into RAM. Efficient solution?

    - by user560639
    I am in the planning stages of creating a fighting game and am unsure how to handle one issue relating to memory. Background info:

    - Still debating whether to use C# (XNA) or C++. We do not want to commit to either until we have explored how to solve this problem in both languages.
    - Using a max of 256MB RAM would be great if possible.
    - Two characters will be present at a time, and these characters can only change between battles. There is time to load/free memory between battles, but the game needs to run at a constant 60 drawn frames per second during combat. Each frame is 16.67ms.
    - The total number of images per character is in the low hundreds. Each image is roughly 200x400 pixels. Only one image from each character will be displayed at any given moment.

    Uncompressed, each image takes roughly 300kb by my calculations; upwards of 100MB for a whole character. This is pushing too close to the 256MB limit, given that memory will be needed for some other resources as well. Since each image can be made with a total of 16 colors, I should theoretically be able to use 1/8th the space if I can take advantage of this. I've looked around but haven't found any word of native support for paletted images (storing each pixel using fewer bits that each map to a 32-bit RGBA color). I was considering making my own file format with 4 bits per pixel (and some extra palette info), loading all the images of this new format into RAM before battle, and then, when drawing any specific image, decompressing only that image into a raw image so it can be rendered properly. I don't know if it's realistic to perform so many assignment operations (approx. 200x400 for each character = 160k) each frame. It sounds very hacky to me. Does anyone have advice on whether my solution sounds reasonable, and whether there is perhaps a better one available? Thanks so much! (I also attempted to use an image with only 1 channel, then use a shader to perform a series of if statements to translate various values into other colors. Unfortunately, there were too many lines of code for the shader. It is also rather hacky and does not scale well.)
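    A hedged sketch of the packed-palette idea described above, written in C++ (the same layout works from C#/XNA). Two 4-bit palette indices are stored per byte while frames sit in RAM, and a frame is expanded to 32-bit RGBA only when it is about to be drawn or uploaded; the high-nibble-first packing order and the RGBA palette layout are assumptions. At 200x400 pixels this is on the order of 80k table lookups per character per frame, which is usually modest next to the cost of the texture upload itself.

        #include <cstdint>
        #include <vector>

        struct PalettedImage {
            int width = 0, height = 0;
            std::vector<uint8_t>  packed;   // width * height / 2 bytes, 2 pixels per byte
            std::vector<uint32_t> palette;  // 16 RGBA entries
        };

        // Expand one frame to full 32-bit RGBA right before rendering/upload.
        void Expand(const PalettedImage& img, std::vector<uint32_t>& rgbaOut)
        {
            const int pixelCount = img.width * img.height;
            rgbaOut.resize(pixelCount);
            for (int i = 0; i < pixelCount; ++i) {
                uint8_t byte = img.packed[i / 2];
                uint8_t index = (i % 2 == 0) ? (byte >> 4) : (byte & 0x0F);  // high nibble first
                rgbaOut[i] = img.palette[index];
            }
        }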

    Read the article

  • OpenGL Shading Language portability

    - by Luca
    I've noticed that my GLSL shaders do not compile when the GLSL version is lower than 1.30. What are the most critical elements of a backward-compatible shader source? I don't want full backward compatibility, but I'd like to understand the main guidelines for writing simple shaders that run on GPUs with a GLSL version lower than 1.30. Thank you.

    Read the article

< Previous Page | 41 42 43 44 45 46 47 48 49 50 51 52  | Next Page >