Search Results

Search found 1587 results on 64 pages for 'pixel reaper'.

Page 35/64

  • Find Line Above or Below in Javascript

    - by Dark Falcon
    I am working on an in-place HTML editor, concentrating on Firefox only right now. I have an element inserted where the cursor should be, and the left and right arrows already work, but I can't find a way to determine:
    - the start and end of a line, for the Home and End keys
    - the next line up or down, for the up/down arrows
    I see document.elementFromPoint, but it doesn't give me a Range object, and the Range object itself seems rather useless when it comes to working with pixel positions.

    Read the article

  • How to calculate the average rgb color values of a bitmap

    - by Matthias
    In my C# (3.5) application I need to get the average color values for the red, green and blue channels of a bitmap, preferably without using an external library. Can this be done? If so, how? Thanks in advance. To make things a little more precise: each pixel in the bitmap has a certain RGB color value, and I'd like the average of those values over all pixels in the image. (A sketch of one approach follows this entry.)

    Read the article
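
    A minimal sketch of the averaging loop asked about above. The question is about C#'s System.Drawing.Bitmap, which is not shown here; this uses Java's BufferedImage to illustrate the same per-pixel logic, and the C# version would follow the same pattern (GetPixel, or LockBits for speed on large images). The input path is a placeholder.

```java
import java.awt.image.BufferedImage;
import javax.imageio.ImageIO;
import java.io.File;
import java.io.IOException;

public class AverageColor {
    public static void main(String[] args) throws IOException {
        // "input.png" is a placeholder path, not from the original question.
        BufferedImage img = ImageIO.read(new File("input.png"));
        long rSum = 0, gSum = 0, bSum = 0;
        int w = img.getWidth(), h = img.getHeight();
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int rgb = img.getRGB(x, y);          // packed 0xAARRGGBB
                rSum += (rgb >> 16) & 0xFF;          // red channel
                gSum += (rgb >> 8) & 0xFF;           // green channel
                bSum += rgb & 0xFF;                  // blue channel
            }
        }
        long n = (long) w * h;
        System.out.printf("avg R=%d G=%d B=%d%n", rSum / n, gSum / n, bSum / n);
    }
}
```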

  • Programming graphics and sound on PC - Total newbie questions, and lots of them!

    - by Russel
    Hello, this isn't exactly a programming question (or is it?), but I was wondering: how are graphics and sound processed from code and output by the PC? My guess for graphics:
    1. There is some reserved memory space somewhere that holds exactly enough room for one frame of graphics output for your monitor. E.g. 800 x 600 in 24-bit color mode: 800 x 600 x 3 = ~1.4MB of memory.
    2. Between each refresh, the program writes video data to this space, and this completes before the monitor refresh. Assume a simple 2D game: the graphics data is stored as many bytes representing color values, and depending on what the running program(s) instruct the PC to do, the processor reads the appropriate data and writes it to that memory space.
    3. When it is time for the monitor to refresh, it reads the memory space byte for byte and drives the hardware according to those values for each color element of each pixel.
    All of this of course happens crazy-fast and repeats x times a second, x being the monitor's refresh rate. I've simplified my own likely-incorrect explanation by avoiding talk of double buffering, etc. (A rough sketch of step 2, as a software framebuffer, follows this entry.) Here are my questions:
    a) How close is the above guess (the three steps)?
    b) How could one incorporate graphics in pure C++ code? I assume the practical thing everyone does is use a graphics library (SDL, OpenGL, etc.), but how do those libraries accomplish what they do? Would manual inclusion of graphics in pure C++ (say, a 2D sprite) involve creating a two-dimensional array of bit values (or three-dimensional, to include multiple RGB values per pixel)? Is this how it would have been done way back in the day?
    c) Continuing from above, do libraries such as SDL that use bitmaps actually just build the bitmap files into the machine code of the executable and use them as though they were built in the manner described in question b?
    d) In my hypothetical step 3 above, are there any registers involved? Could you write some byte value to some register to output a single color of one byte on the screen? Or is it purely dedicated memory space (RAM) plus hardware interaction?
    e) Finally, how is all of this done for sound? (I have no idea :) )

    Read the article
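
    The 800 x 600 x 3 arithmetic in the question above is roughly how a software framebuffer works. The sketch below imitates step 2 in plain Java: a flat pixel array that a program fills each frame, which is then handed to an image the windowing system can blit to the screen. It is an illustration of the concept only, not how a driver or GPU actually exposes video memory.

```java
import java.awt.image.BufferedImage;

public class Framebuffer {
    public static void main(String[] args) {
        int width = 800, height = 600;
        // One int per pixel (0xRRGGBB): the software equivalent of the
        // "reserved memory space" described in the question.
        int[] pixels = new int[width * height];

        // Fill one "frame": a solid red square in the top-left corner.
        for (int y = 0; y < 100; y++) {
            for (int x = 0; x < 100; x++) {
                pixels[y * width + x] = 0xFF0000;   // red
            }
        }

        // Hand the buffer to an image; a real loop would redraw this
        // (or a window showing it) once per monitor refresh.
        BufferedImage frame = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
        frame.setRGB(0, 0, width, height, pixels, 0, width);
        System.out.println("frame ready: " + frame.getWidth() + "x" + frame.getHeight());
    }
}
```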

  • bmp image header doubts

    - by vikramtheone
    Hi guys, I'm doing a project where I have to make use of the pixel information of a .bmp image, so I'm gathering the image information by reading the header of the input file. I'm quite successful with everything, but one thing bothers me; can anyone here clarify it? The header information of my .bmp image is as follows (my test image is very tiny and grayscale):
    BMP file header: file size 1210, offset 1078.
    BMP information header: header size 40, image size 132, width 9, height 11, bits per pixel 8.
    From the header I see that the image size is 132 bytes, but when I multiply the width and height I only get 99. How is that possible? I'm confident about the 132 bytes because when I subtract the offset value from the file size I get 132 (1210 - 1078 = 132), and when I manually count the bytes in a hex editor from offset 1078 (436h) onwards, there are exactly 132 bytes of pixel information. So why is there a disparity between the size field and (width x height)? My future implementations depend on the width and height information, not on the image size field, so I have to understand thoroughly what's going on here. My understanding of the header must be wrong, I guess. Help! (A worked stride calculation follows this entry.) Regards, Vikram. My bmp structures are as follows:

```c
typedef struct bmpfile_magic {
    short magic;
} BMP_MAGIC_NUMBER;

typedef struct bmpfile_header {
    uint32_t filesz;
    uint16_t creator1;
    uint16_t creator2;
    uint32_t bmp_offset;
} BMP_FILE_HEADER;

typedef struct {
    uint32_t header_sz;
    uint32_t width;
    uint32_t height;
    uint16_t nplanes;
    uint16_t bitspp;
    uint32_t compress_type;
    uint32_t bmp_bytesz;
    uint32_t hres;
    uint32_t vres;
    uint32_t ncolors;
    uint32_t nimpcolors;
} BMP_INFO_HEADER;
```

    Read the article
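
    The 132 vs. 99 discrepancy in the question above is almost certainly row padding: each BMP scanline is padded up to a multiple of 4 bytes, so a 9-pixel-wide, 8-bits-per-pixel row occupies 12 bytes, and 12 bytes x 11 rows = 132. A small sketch of the stride calculation, using the values from the question:

```java
public class BmpStride {
    // Bytes per scanline in a BMP: rows are padded up to a multiple of 4 bytes.
    static int rowSize(int widthPx, int bitsPerPixel) {
        int rawBytes = (widthPx * bitsPerPixel + 7) / 8; // bytes actually used by pixels
        return (rawBytes + 3) / 4 * 4;                   // round up to a 4-byte boundary
    }

    public static void main(String[] args) {
        int width = 9, height = 11, bpp = 8;                    // values from the question
        int stride = rowSize(width, bpp);                       // 9 bytes -> 12 bytes per row
        System.out.println("stride = " + stride);               // 12
        System.out.println("image size = " + stride * height);  // 132, matching the header
    }
}
```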

  • Why is changing displays slow?

    - by Josh Bronson
    I've had many laptops over the course of many years, and while many things have sped up, one thing remains as slow today as it was years ago: (dis)connecting an external display. What's taking it so long to detect the new display and update the pixel buffers? I use Macs primarily, but I think this is equally slow on other platforms.

    Read the article

  • Color separation in OpenCV?

    - by user225626
    Is there a function, or a series of function calls, that roughly equates to:
    - get every pixel from myInput which doesn't meet an arbitrary green quantity threshold
    - set each of those pixels to black on myOutput
    Thanks for any assistance. (A sketch of the per-pixel logic follows this entry.)

    Read the article
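
    A sketch of the per-pixel logic described above, written against plain BufferedImage rather than OpenCV (OpenCV's exact calls are not guessed at here; conceptually this is splitting out the green channel and thresholding it). The threshold value of 100 is an arbitrary placeholder.

```java
import java.awt.image.BufferedImage;

public class GreenThreshold {
    // Copy src to a new image, blacking out pixels whose green channel
    // is below the given threshold.
    static BufferedImage maskByGreen(BufferedImage src, int minGreen) {
        int w = src.getWidth(), h = src.getHeight();
        BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int rgb = src.getRGB(x, y);
                int green = (rgb >> 8) & 0xFF;
                out.setRGB(x, y, green >= minGreen ? rgb : 0x000000);
            }
        }
        return out;
    }
}
```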

  • getting part of an image with javascript

    - by Alper
    Hi all, I want to ask: is it possible to show just part of an image in an img tag (by pixel coordinates) via JavaScript? I mean, I'll prepare a big image (e.g. 32x320 pixels) and define a starting position (X, Y, e.g. 0,32) and a width/height (e.g. 32,32), so the script will show the second 32x32-pixel part of the main image. I hope I explained it clearly. Thanks.

    Read the article

  • Android button font size

    - by jonhobbs
    Hi, I've been trying to create a custom button in Android using this tutorial: http://www.gersic.com/blog.php?id=56 It works well, but it doesn't say how to change the font size or weight. Any ideas? There was another question on here whose only answer was to use HTML styling, but you can't change a font size in HTML without using CSS (or the deprecated font tag). There must be a better way of setting the pixel size of the font used on buttons? (A sketch follows this entry.)

    Read the article
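
    For the question above, Android buttons are TextViews and expose a text-size setter directly, so no HTML styling is needed. A minimal sketch; the layout name and R.id.my_button are placeholder identifiers, and whether 14 raw pixels is a sensible size depends on the screen density (the same value can also be set in layout XML via android:textSize).

```java
import android.app.Activity;
import android.os.Bundle;
import android.util.TypedValue;
import android.widget.Button;

public class MainActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);                   // placeholder layout

        Button button = (Button) findViewById(R.id.my_button);
        // Set the text size in raw pixels; COMPLEX_UNIT_SP is usually preferred
        // so the text scales with the user's font settings.
        button.setTextSize(TypedValue.COMPLEX_UNIT_PX, 14f);
    }
}
```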

  • need to separate words from an image

    - by user298295
    How can I manipulate the pixel values of a loaded image and then save a portion of that image into a new image (one image per word)? I found several examples covering saving or loading an image, but I can't understand how to save just a portion of it. I am trying to do this with Java. (A sketch of the cropping step follows this entry.)

    Read the article
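
    Saving a region of a loaded image in Java is just getSubimage plus ImageIO.write; the actual word segmentation (finding the bounding rectangles) is the hard part and is not shown here. The coordinates and file names below are placeholders.

```java
import java.awt.image.BufferedImage;
import javax.imageio.ImageIO;
import java.io.File;
import java.io.IOException;

public class CropWord {
    public static void main(String[] args) throws IOException {
        BufferedImage page = ImageIO.read(new File("page.png"));   // placeholder input

        // Suppose word detection produced this bounding box (x, y, width, height).
        BufferedImage word = page.getSubimage(40, 10, 120, 30);

        // Write the cropped region out as its own image file.
        ImageIO.write(word, "png", new File("word-1.png"));
    }
}
```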

  • OpenGL ES iPhone Textures

    - by techy
    For one of my new games I'm using OpenGL ES, since the game has multiple enemies, bullets, etc. How do you draw images on the screen with OpenGL ES? I have a player.png image that is 48x48 pixels; how would I draw it on the screen?

    Read the article

  • measure rendered html in javascript without affecting the measurement

    - by drawnonward
    I am doing pagination in JavaScript. This is typographic pagination, not chopping up database results. For the most part it works, but I have run into a Heisenberg issue where I cannot quite measure text without affecting it.

    I am not trying to measure text before it is rendered. I want the actual position it shows up at on screen, so I can paginate to where it is naturally wrapped. I am measuring the vertical position of characters, not the horizontal width of strings. The way I do this is similar to this answer in that I am applying a style to a block of text, then measuring the position of the newly created span. If the span does not reach the end of the page, I clear it and make a new span in a linear search.

    The problem is that the anti-aliased sub-pixel text layout is different when the span is applied. In rare cases, this causes the text to wrap differently when I measure it. I have only seen this when wrapping at a hyphen, and I assume it would not happen when wrapping at white space. As a concrete example, "prepared-he" is the string I am having trouble with. When I measure up to "prepare" it appears, as expected, to be within the current page. When I measure "prepared" the whole phrase wraps down to the next line, moving it to the next page, so it looks like the "d" is the character to break at. I break the text between "prepare" and "d-he", and that is wrong. Trying to evaluate individual characters opens a whole can of worms I would rather avoid. The wrapping changes because, with the new span, the line is 1 pixel wider.

    A solution to my problem could either be a better way to measure text using JavaScript, or a way to wrap text in a new element without affecting layout. I have tried setting margin-right:-1px for the class of the span being created to wrap the text. This had no noticeable effect. I am doing this in a UIWebView on the iPhone. There are some measurement-related calls available in normal WebKit that are not available here. For example, Range does not have getBoundingClientRect, or support for setting an offset other than 0 in setStart or setEnd. Thank you

    Read the article

  • Basic unit of Sound?

    - by anon
    If we consider computer graphics to be the art of image synthesis, where the basic unit is a pixel, what is the basic unit of sound synthesis? [This relates to programming, as I want to generate sound via a computer program.] Thanks! (A sketch follows this entry.)

    Read the article
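
    The usual answer to the question above is the sample: a single amplitude value taken at a fixed rate (e.g. 44100 samples per second), much as a pixel is a single color value on a fixed grid. A small sketch that synthesizes one second of a 440 Hz sine wave as 16-bit samples (raw data only, no playback):

```java
public class SineSamples {
    public static void main(String[] args) {
        int sampleRate = 44100;                    // samples per second
        double frequency = 440.0;                  // A4
        short[] samples = new short[sampleRate];   // one second of mono audio

        for (int i = 0; i < samples.length; i++) {
            double t = (double) i / sampleRate;
            // Each entry is one "unit" of sound: the amplitude at time t.
            samples[i] = (short) (Math.sin(2 * Math.PI * frequency * t) * Short.MAX_VALUE);
        }
        System.out.println("generated " + samples.length + " samples");
    }
}
```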

  • Why is giving a fixed width to a label an accepted behavior?

    - by kemp
    There are a lot of questions about formatting forms so that labels align, and almost all the answers which suggest a pure CSS solution (as opposed to using a table) provide a fixed width to the label element. But isn't this mixing content and presentation? In order to choose the right width you basically have to see how big your longest label is and try a pixel width value until "it fits". This means that if you change your labels you also have to change your CSS.

    Read the article

  • Using an image file vs data URI in the CSS

    - by fudgey
    I'm trying to decide the best way to include an image that is required for a script I've written. I discovered this site and it made me think about trying this method of including the image as a data URI, since it is so small: a 1x1 pixel, 50% opacity PNG file (used for a background). It ends up at 2,792 bytes as an image versus 3,746 bytes as text in the CSS. So would this be considered good practice, or would it just clutter up the CSS unnecessarily? (A sketch of generating the data URI follows this entry.)

    Read the article
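
    For reference, generating the data URI itself is straightforward; a sketch in Java (the file name is a placeholder). The trade-off described above is essentially one fewer HTTP request against a roughly 33% larger payload, embedded in the stylesheet rather than cached as a separate resource.

```java
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Base64;
import java.io.IOException;

public class DataUri {
    public static void main(String[] args) throws IOException {
        byte[] png = Files.readAllBytes(Paths.get("bg-50pct.png"));   // placeholder file
        String uri = "data:image/png;base64," + Base64.getEncoder().encodeToString(png);
        // Paste the result into the stylesheet, e.g. background-image: url("<uri>");
        System.out.println(uri);
    }
}
```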

  • Determining line orientation using vertex shaders

    - by Brett
    Hi, I want to be able to calculate the direction of a line in eye coordinates and store this value for every pixel on the line, using a vertex and fragment shader. My idea was to calculate the direction gradient using atan2(Gy, Gx) after a modelview transformation for each pair of vertices, then quantize this value as a color intensity to pass to a fragment shader. How can I get access to the positions of pairs of vertices to achieve this, or is there another method I should use? Thanks

    Read the article

  • HTML/CSS: What should I use to define image height/width to make it resolution independent?

    - by Tedy
    I've read all over the Internet that I should not define fonts (or anything) with absolute pixel height/width/size and should instead use em units, so that on higher-resolution displays my web site can scale appropriately. However, what do I use to define IMAGE height/width, given that images won't scale well (they look pixelated)? UPDATE: To clarify, I'm not referring to page zoom. I'm referring to how to make my web application resolution independent so that it will look correct on higher-DPI displays.

    Read the article

  • What should I use to define image height/width resolution?

    - by Tedy
    I've read all over the Internet that I should not define fonts (or anything) with absolute pixel height/width/size and should instead use em units, so that on higher-resolution displays my web site can scale appropriately. However, what do I use to define IMAGE height/width, given that images won't scale well (they look pixelated)?

    Read the article

  • Are there algorithms for increasing resolution of an image?

    - by David
    Are there any algorithms or tools that can increase the resolution of an image, besides just a simple zoom that makes each individual pixel in the image a little larger? I realize that such an algorithm would have to invent pixels that don't really exist in the original image, but I figured there might be some algorithm that could intelligently work out what pixels to add to the image to increase its resolution. (A sketch follows this entry.)

    Read the article
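
    Yes: bilinear and bicubic interpolation are the standard answers, with fancier resampling filters (e.g. Lanczos) and super-resolution techniques going further. A minimal sketch of a 2x bicubic upscale using Java2D; the fixed factor and lack of file handling are simplifications.

```java
import java.awt.Graphics2D;
import java.awt.RenderingHints;
import java.awt.image.BufferedImage;

public class Upscale {
    static BufferedImage scale2x(BufferedImage src) {
        int w = src.getWidth() * 2, h = src.getHeight() * 2;
        BufferedImage dst = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = dst.createGraphics();
        // Bicubic interpolation invents the in-between pixels instead of
        // simply repeating each source pixel.
        g.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                           RenderingHints.VALUE_INTERPOLATION_BICUBIC);
        g.drawImage(src, 0, 0, w, h, null);
        g.dispose();
        return dst;
    }
}
```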

  • Any simple way to "resize" an NSBezierPath?

    - by d11wtq
    I have an NSBezierPath that I'm filling and stroking. I'd like to add some inner glow to the path (a light stroke, just inside the outer stroke), and the thing that comes to mind is to use the same path shrunk by 1 pixel (the width of the line that is already stroked). Is there a way to do this? Alternatively, is there some sort of pattern I can use when applying both a border (stroke) and a glow to a bezier path? For example, the (extremely subtle) inner glow on the Google Chrome tabs.

    Read the article

  • Trying to convert a 2D image into 3D objects in Java

    - by Kyle
    Hey, I'm trying to take a simple image, something like a black background with colored blocks representing walls, and turn it into 3D objects, and I'm trying to figure out how to get started. Do I need to parse the image and look at each pixel, or is there an easier way to do it? I'm using Java3D, but it doesn't seem to have any built-in support for that... (A sketch of the pixel-parsing step follows this entry.)

    Read the article
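
    Parsing the pixels is usually the straightforward part of the question above: read the bitmap, walk it cell by cell, and record which cells are walls; building the Java3D geometry for each recorded cell is a separate step not shown here. A sketch, with the color test and file name as placeholders:

```java
import java.awt.Point;
import java.awt.image.BufferedImage;
import javax.imageio.ImageIO;
import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class WallMap {
    public static void main(String[] args) throws IOException {
        BufferedImage map = ImageIO.read(new File("level.png"));   // placeholder map image

        List<Point> walls = new ArrayList<>();
        for (int y = 0; y < map.getHeight(); y++) {
            for (int x = 0; x < map.getWidth(); x++) {
                int rgb = map.getRGB(x, y) & 0xFFFFFF;   // ignore alpha
                if (rgb != 0x000000) {                   // anything not black is a wall
                    walls.add(new Point(x, y));          // later: place a box at (x, 0, y)
                }
            }
        }
        System.out.println(walls.size() + " wall cells found");
    }
}
```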
