Search Results

Search found 1598 results on 64 pages for 'pixel bender'.

Page 35/64

  • smallest filesize for transparent gif

    - by zaf
    I'm looking for the smallest (in terms of file size) transparent 1-pixel image. Currently I have a GIF of 49 bytes, which seems to be the most popular. But I remember having one many years ago that was less than 40 bytes; it could have been 32 bytes. Can anyone do better? The graphics format is no concern as long as modern web browsers can display it and respect the transparency.

    Read the article

  • creating 2d sprites for games?

    - by dfafa
    I am pretty new to developing games... I thought I would begin by making a simple 2D game. I'm curious what tools are commonly used to transform images into pixel sprites, or is this done by hand, and if so, what tools are used? Even better, is there a marketplace where I can purchase game sprites and other game assets?

    Read the article

  • WPF WriteableBitmap

    - by Sam
    I'm using WriteableBitmap on an image of type Bgra32 to change the values of certain pixels. I'm setting the value to 0x77CCCCCC. After calling WritePixels, the pixels I set to 0x77CCCCCC show up with a value of 0x77FFFFFF. Why does this happen, and how do I make the pixels have the correct value?
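
    One possible explanation, offered only as an assumption since Bgra32 itself is not premultiplied: if the buffer ends up being treated as premultiplied alpha (Pbgra32) somewhere in the pipeline, un-premultiplying 0xCC by an alpha of 0x77 overflows past 255 and clamps, which would turn 0xCC into 0xFF. A quick arithmetic check:

        #include <algorithm>
        #include <cstdint>
        #include <cstdio>

        // Assumption for illustration only: 0x77CCCCCC gets interpreted as a
        // premultiplied (Pbgra32) value. Un-premultiplying each color channel
        // then overflows and clamps, which would explain seeing 0x77FFFFFF.
        int main()
        {
            const uint32_t alpha = 0x77, channel = 0xCC;
            uint32_t unpremultiplied = channel * 255 / alpha;             // 204 * 255 / 119 = 437
            uint32_t clamped = std::min<uint32_t>(unpremultiplied, 255);  // clamps to 0xFF
            std::printf("0x%02X -> 0x%02X\n", channel, clamped);
            return 0;
        }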

    Read the article

  • How do I use IImgCtx to load an image with an alpha channel?

    - by fret
    I have some working code that uses IImgCtx to load images, but I can't work out how to get at the alpha channel. For images like GIFs and PNGs there are transparent pixels, but using anything other than a 24-bit bitmap as a drawing surface doesn't work. For reference on the interface: http://www.codeproject.com/KB/graphics/JianImgCtxDecoder.aspx

    My code looks like this:

        IImgCtx *Ctx = 0;
        HRESULT hr = CoCreateInstance(CLSID_IImgCtx, NULL, CLSCTX_INPROC_SERVER, IID_IImgCtx, (LPVOID*)&Ctx);
        if (SUCCEEDED(hr))
        {
            GVariant Fn = Name;
            hr = Ctx->Load(Fn.WStr(), 0);
            if (SUCCEEDED(hr))
            {
                SIZE Size = { -1, -1 };
                ULONG State = 0;
                while (true)
                {
                    // Poll until the image has finished decoding (or failed).
                    hr = Ctx->GetStateInfo(&State, &Size, false);
                    if (SUCCEEDED(hr))
                    {
                        if ((State & IMGLOAD_COMPLETE) || (State & IMGLOAD_STOPPED) || (State & IMGLOAD_ERROR))
                        {
                            break;
                        }
                        else
                        {
                            LgiSleep(1);
                        }
                    }
                    else break;
                }
                if (Size.cx > 0 && Size.cy > 0 && pDC.Reset(new GMemDC))
                {
                    if (pDC->Create(Size.cx, Size.cy, 32))
                    {
                        HDC hDC = pDC->StartDC();
                        if (hDC)
                        {
                            RECT rc = { 0, 0, pDC->X(), pDC->Y() };
                            Ctx->Draw(hDC, &rc);
                            pDC->EndDC();
                        }
                    }
                    else pDC.Reset();
                }
            }
            Ctx->Release();
        }

    Here "StartDC" basically wraps CreateCompatibleDC(NULL) and "EndDC" wraps DeleteDC, with the appropriate SelectObject calls for the HBITMAPs, etc., and pDC->Create(x, y, bit_depth) calls CreateDIBSection(...DIB_RGB_COLORS...). So it works if I create a 24 bits/pixel bitmap, but that has no alpha to speak of, and it leaves a 32 bits/pixel bitmap blank. Now, this interface is apparently used by Internet Explorer to load images, and obviously THAT supports transparency, so I believe it's possible to get some level of alpha out of the interface. The question is how? (I also have fallback code that calls libpng/libjpeg/my .gif loader, etc.)

    Read the article

  • Display arbitrary size 2d image in opengl

    - by Martin Beckett
    I need to display 2D images in OpenGL using textures. The image dimensions are not necessarily powers of 2. I thought of creating a larger texture and restricting the display to the part I was using, but the image data will be shared with OpenCV, so I don't want to copy data a pixel at a time into a larger texture. EDIT - it turns out that even the simplest Intel on-board graphics under Windows supports non-power-of-2 textures.
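
    A minimal sketch of uploading an arbitrary-size image as a texture, assuming a driver and headers that support non-power-of-2 textures (OpenGL 2.0+ or ARB_texture_non_power_of_two); the function name and the assumption of tightly packed RGB data are illustrative, not taken from the question:

        #include <GL/gl.h>

        // Upload width x height tightly packed RGB pixels (e.g. an OpenCV buffer)
        // directly, with no padding up to a power-of-2 size.
        GLuint uploadNpotTexture(int width, int height, const unsigned char* pixels)
        {
            GLuint tex = 0;
            glGenTextures(1, &tex);
            glBindTexture(GL_TEXTURE_2D, tex);

            // OpenCV rows are not necessarily 4-byte aligned, so relax unpacking.
            glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

            // Non-power-of-2 dimensions are accepted here on NPOT-capable drivers.
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
                         GL_RGB, GL_UNSIGNED_BYTE, pixels);
            return tex;
        }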

    Read the article

  • Is it possible to use AnimateWindow with AW_BLEND when using a layered window?

    - by wkf
    I am displaying a window using UpdateLayeredWindow and would like to add transition animations. AnimateWindow works if I use the slide or roll effects (though there is some flickering). However, when I try to use AW_BLEND to produce a fade effect, I not only lose any translucency after the animation (per-pixel and on the entire image), but a default window border also appears. Is there a way to prevent the border from appearing?
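
    A common workaround, sketched below on the assumption that the window content already lives in a 32-bit premultiplied-alpha memory DC, is to skip AnimateWindow entirely and drive the fade yourself by calling UpdateLayeredWindow repeatedly with an increasing SourceConstantAlpha; the function and parameter names here are illustrative:

        #include <windows.h>

        // Fade a layered window in over roughly durationMs milliseconds.
        // hdcSrc holds the window's 32-bit premultiplied-alpha content.
        void FadeInLayeredWindow(HWND hwnd, HDC hdcSrc, SIZE size, DWORD durationMs)
        {
            POINT ptSrc = { 0, 0 };
            for (DWORD elapsed = 0; elapsed <= durationMs; elapsed += 10)
            {
                BLENDFUNCTION bf = { 0 };
                bf.BlendOp             = AC_SRC_OVER;
                bf.SourceConstantAlpha = (BYTE)(255 * elapsed / durationMs);
                bf.AlphaFormat         = AC_SRC_ALPHA;   // keep per-pixel alpha

                // Re-present the same bitmap with a higher constant alpha each step.
                UpdateLayeredWindow(hwnd, NULL, NULL, &size, hdcSrc, &ptSrc,
                                    0, &bf, ULW_ALPHA);
                Sleep(10);
            }
        }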

    Read the article

  • Find Line Above or Below in Javascript

    - by Dark Falcon
    I am working on an in-place HTML editor, concentrating on Firefox only right now. I have an element inserted where the cursor should be, and the left and right arrows already work, but I can't seem to find a way to determine: (1) the start and end of a line, for the Home and End keys, and (2) the next line up or down, for the up/down arrows. I see document.elementFromPoint, but this doesn't get me a Range object, and the Range object itself seems rather useless when it comes to using pixel positions.

    Read the article

  • How to calculate the average rgb color values of a bitmap

    - by Matthias
    In my C# (3.5) application I need to get the average color values for the red, green and blue channels of a bitmap, preferably without using an external library. Can this be done? If so, how? Thanks in advance. To make things a little more precise: each pixel in the bitmap has a certain RGB color value, and I'd like to get the average RGB values over all pixels in the image.
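
    The arithmetic itself is just a per-channel sum divided by the pixel count. The question is about C#, so the following C++ sketch is only meant to illustrate that calculation, assuming a tightly packed BGRA buffer such as the one Bitmap.LockBits exposes:

        #include <cstddef>
        #include <cstdint>

        struct AverageRgb { double r, g, b; };

        // 'pixels' is assumed to point at 'count' tightly packed BGRA pixels.
        AverageRgb averageRgb(const std::uint8_t* pixels, std::size_t count)
        {
            std::uint64_t sumB = 0, sumG = 0, sumR = 0;
            for (std::size_t i = 0; i < count; ++i)
            {
                sumB += pixels[i * 4 + 0];
                sumG += pixels[i * 4 + 1];
                sumR += pixels[i * 4 + 2];
            }
            return { sumR / double(count), sumG / double(count), sumB / double(count) };
        }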

    Read the article

  • Programming graphics and sound on PC - Total newbie questions, and lots of them!

    - by Russel
    Hello. This isn't exactly a programming question (or is it?), but I was wondering: how are graphics and sound processed from code and output by the PC? My guess for graphics:

    1. There is some reserved memory space somewhere that holds exactly enough room for one frame of graphics output for your monitor. E.g. 800 x 600 in 24-bit color mode: 800 x 600 x 3 = ~1.4 MB of memory space.
    2. Between each refresh, the program writes video data to this space; this write is completed before the monitor refresh. Assume a simple 2D game: the graphics data is stored in machine code as many bytes representing color values. Depending on what the running program(s) instruct the PC to do, the processor reads the appropriate data and writes it to this memory space.
    3. When it is time for the monitor to refresh, it reads this memory space byte for byte and activates hardware depending on those values for each color element of each pixel.

    All of this of course happens crazy-fast, and repeats x times a second, x being the monitor's refresh rate. I've simplified my own likely-incorrect explanation by avoiding talk of double buffering, etc. Here are my questions:

    a) How close is the above guess (the three steps)?
    b) How could one incorporate graphics in pure C++ code? I assume the practical thing everyone does is use a graphics library (SDL, OpenGL, etc.), but, for example, how do these libraries accomplish what they do? Would manual inclusion of graphics in pure C++ code (say, a 2D sprite) involve creating a two-dimensional array of bit values (or three-dimensional, to include multiple RGB values per pixel)? Is this how it would have been done way back in the day?
    c) Continuing from above, do libraries such as SDL that use bitmaps actually just build the bitmap files into the machine code of the executable and use them as though they were built in the same manner mentioned in question b above?
    d) In my hypothetical step 3 above, are there any registers involved? For example, could you write some byte value to some register to output a single color of one byte on the screen? Or is it purely dedicated memory space (= RAM) plus hardware interaction?
    e) Finally, how is all of this done for sound? (I have no idea!)
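
    To make steps 1 and 2 of the guess concrete, here is a minimal C++ sketch of such a software framebuffer; it is purely illustrative, and a real program would hand the buffer to the OS, driver, or a library such as SDL for presentation:

        #include <cstddef>
        #include <cstdint>

        // A software framebuffer for an 800 x 600, 24-bit display (~1.4 MB),
        // plus a helper that writes one pixel's color values into it.
        constexpr int WIDTH = 800, HEIGHT = 600, BYTES_PER_PIXEL = 3;

        static std::uint8_t framebuffer[WIDTH * HEIGHT * BYTES_PER_PIXEL];

        void putPixel(int x, int y, std::uint8_t r, std::uint8_t g, std::uint8_t b)
        {
            std::size_t i = (static_cast<std::size_t>(y) * WIDTH + x) * BYTES_PER_PIXEL;
            framebuffer[i + 0] = r;
            framebuffer[i + 1] = g;
            framebuffer[i + 2] = b;
        }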

    Read the article

  • bmp image header doubts

    - by vikramtheone
    Hi guys, I'm doing a project where I have to make use of the pixel information of a BMP image, so I'm gathering the image information by reading the header of the input .bmp file. I'm quite successful with everything, but one thing bothers me; can anyone here clarify it? The header information of my .bmp image is as follows (my test image is very tiny and grayscale):

        BMP file header
            File size             1210
            Offset information    1078
        BMP information header
            Image header size     40
            Image size            132
            Image width           9
            Image height          11
            Image bits per pixel  8

    So, from the .bmp header I see that the image size is 132 (bytes), but when I multiply the width and height it is only 99. How is such a thing possible? I'm confident about the 132 bytes because when I subtract the offset value from the file size I get 132 (1210 - 1078 = 132), and also when I manually count the number of bytes (in a hex editor) from point 1078 (436h, the end of the offset field), there are exactly 132 bytes of pixel information. So why is there a disparity between the size field and (width x height)? My future implementations depend on the image width and height information and not on the image size, so I have to understand thoroughly what's going on here. My understanding of the header must be wrong, I guess! Help!

    Regards,
    Vikram

    My BMP structures are as follows:

        typedef struct bmpfile_magic {
            short magic;
        } BMP_MAGIC_NUMBER;

        typedef struct bmpfile_header {
            uint32_t filesz;
            uint16_t creator1;
            uint16_t creator2;
            uint32_t bmp_offset;
        } BMP_FILE_HEADER;

        typedef struct {
            uint32_t header_sz;
            uint32_t width;
            uint32_t height;
            uint16_t nplanes;
            uint16_t bitspp;
            uint32_t compress_type;
            uint32_t bmp_bytesz;
            uint32_t hres;
            uint32_t vres;
            uint32_t ncolors;
            uint32_t nimpcolors;
        } BMP_INFO_HEADER;
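
    One likely explanation, stated here as an inference rather than something the header itself spells out: BMP pixel rows are padded so that each row starts on a 4-byte boundary. For a 9-pixel-wide, 8-bit image each row therefore occupies 12 bytes, and 12 x 11 rows = 132 bytes, which matches the header's image size. A small sketch of the stride calculation:

        #include <cstdint>
        #include <cstdio>

        // Each BMP pixel row is padded up to a multiple of 4 bytes ("stride").
        std::uint32_t bmpRowStride(std::uint32_t width, std::uint32_t bitsPerPixel)
        {
            return ((width * bitsPerPixel + 31) / 32) * 4;
        }

        int main()
        {
            unsigned stride = bmpRowStride(9, 8);   // 12 bytes per row
            std::printf("stride = %u, image size = %u\n", stride, stride * 11);  // 132
            return 0;
        }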

    Read the article

  • Why is changing displays slow?

    - by Josh Bronson
    I've had many laptops over the course of many years, and while many things have sped up, one thing remains as slow today as it was years ago: (dis)connecting an external display. What's taking it so long to detect the new display and update the pixel buffers? I use Macs primarily, but I think this is equally slow on other platforms.

    Read the article

  • Color separation in OpenCV?

    - by user225626
    Is there a function or a series of function steps that roughly equates to:

        - get every pixel from myInput which doesn't meet an arbitrary green quantity threshold
        - set each of those pixels to black on myOutput

    Thanks for any assistance.
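
    One way to express those two steps with the OpenCV C++ API, sketched with a placeholder threshold value and the myInput/myOutput names from the question:

        #include <opencv2/opencv.hpp>
        #include <vector>

        // Keep only the pixels whose green channel meets the threshold;
        // every other pixel in myOutput stays black.
        cv::Mat keepGreen(const cv::Mat& myInput, int greenThreshold = 100)
        {
            std::vector<cv::Mat> channels;
            cv::split(myInput, channels);            // BGR order: channels[1] is green

            cv::Mat mask;                            // 255 where green >= threshold
            cv::threshold(channels[1], mask, greenThreshold - 1, 255, cv::THRESH_BINARY);

            cv::Mat myOutput = cv::Mat::zeros(myInput.size(), myInput.type());
            myInput.copyTo(myOutput, mask);          // copy only the pixels that pass
            return myOutput;
        }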

    Read the article

  • getting part of an image with javascript

    - by Alper
    Hi all, I want to ask: is it possible to show an arbitrary part of an image in an img tag (by pixel coordinates) via JavaScript? I mean, I'll prepare a big image (e.g. 32x320 pixels) and define a starting position (X,Y, e.g. 0,32) and a width/height (e.g. 32,32), so the script will show the second (32x32 pixel) part of the main image. I hope I've explained it clearly. Thanks.

    Read the article

  • Android button font size

    - by jonhobbs
    Hi, I've been trying to create a custom button in Android using this tutorial - http://www.gersic.com/blog.php?id=56 It works well, but it doesn't say how to change the font size or weighting. Any ideas? There was another question on here and the only answer was to use HTML styling, but you can't change a font size in HTML without using CSS (or the deprecated font tag). There must be a better way of setting the pixel size of the font used on buttons.

    Read the article

  • need to separate words from an image

    - by user298295
    How can I manipulate the pixel values of a loaded image and then save a portion of that image into a new image (one image per word)? I found several examples regarding saving or loading images, but I can't understand how to save a portion of an image. I am trying to do it with Java.

    Read the article

  • OpenGL ES iPhone Textures

    - by techy
    For one of my new games I'm using OpenGL ES, since the game has multiple enemies, bullets, etc. How do you draw images on the screen with OpenGL ES? I have a player.png image that is 48x48 pixels; how would I draw that on the screen?
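
    A sketch of one way to do it with OpenGL ES 1.1, assuming the PNG has already been loaded into a texture ('tex') and that an orthographic projection in screen pixels has been set up; the function name and parameters are illustrative:

        #include <OpenGLES/ES1/gl.h>

        // Draw an already-uploaded texture as a w x h quad at (x, y),
        // e.g. drawSprite(playerTexture, 100.0f, 100.0f, 48.0f, 48.0f).
        void drawSprite(GLuint tex, GLfloat x, GLfloat y, GLfloat w, GLfloat h)
        {
            const GLfloat vertices[] = {
                x,     y,
                x + w, y,
                x,     y + h,
                x + w, y + h,
            };
            const GLfloat texCoords[] = {
                0.0f, 0.0f,
                1.0f, 0.0f,
                0.0f, 1.0f,
                1.0f, 1.0f,
            };

            glEnable(GL_TEXTURE_2D);
            glBindTexture(GL_TEXTURE_2D, tex);

            glEnableClientState(GL_VERTEX_ARRAY);
            glEnableClientState(GL_TEXTURE_COORD_ARRAY);
            glVertexPointer(2, GL_FLOAT, 0, vertices);
            glTexCoordPointer(2, GL_FLOAT, 0, texCoords);

            glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);   // two triangles forming the quad

            glDisableClientState(GL_TEXTURE_COORD_ARRAY);
            glDisableClientState(GL_VERTEX_ARRAY);
        }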

    Read the article

  • measure rendered html in javascript without affecting the measurement

    - by drawnonward
    I am doing pagination in JavaScript. This is typographic pagination, not chopping up database results. For the most part it works, but I have run into a Heisenberg issue where I cannot quite measure text without affecting it. I am not trying to measure text before it is rendered; I want the actual position it shows up at on screen, so I can paginate to where it is naturally wrapped. I am measuring the vertical position of characters, not the horizontal width of strings. The way I do this is similar to this answer in that I am applying a style to a block of text, then measuring the position of the newly created span. If the span does not reach the end of the page, I clear it and make a new span in a linear search.

    The problem is that the anti-aliased sub-pixel text layout is different when the span is applied. In rare cases, this causes the text to wrap differently when I measure it. I have only seen this when wrapping at a hyphen, and I assume it would not happen when wrapping at white space. As a concrete example, "prepared-he" is the string I am having trouble with. When I measure up to "prepare" it appears, as expected, to be within the current page. When I measure "prepared" the whole phrase wraps down to the next line, moving it to the next page, so it looks like the "d" is the character to break at. I break the text between "prepare" and "d-he" and that is wrong. Trying to evaluate individual characters opens a whole can of worms I would rather avoid. The wrapping changes because, with the new span, the line is 1 pixel wider.

    A solution to my problem could either be a better way to measure text using JavaScript, or a way to wrap text in a new element without affecting layout. I have tried setting margin-right:-1px for the class of the span being created to wrap the text; this had no noticeable effect. I am doing this in a UIWebView on the iPhone. There are some measurement-related calls that are available in normal WebKit that are not available here. For example, Range does not have getBoundingClientRect or support setting an offset other than 0 in setStart or setEnd. Thank you.

    Read the article
