Search Results

Search found 1519 results on 61 pages for 'energetic pixels'.


  • What is the relation between actual pixels and HTML (CSS) pixels on BlackBerry?

    - by HelpMeToHelpYou
    I am implementing a PhoneGap application. Everything is going fine, but the screen sizes reported by JavaScript do not match the device specifications.

    1) BlackBerry Bold Touch 9900. Screen specifications: body 115 x 66 x 10.5 mm (4.53 x 2.60 x 0.41 in), weight 130 g (4.59 oz), QWERTY keyboard, TFT capacitive touchscreen, 16M colors, 640 x 480 pixels, 2.8 inches (~286 ppi pixel density). But when I test the following function in JavaScript:

        function findScreenSize() {
            alert("width: " + window.innerWidth + " height: " + window.innerHeight);
        }

    it displays width: 356, height: 267 (356 x 267).

    2) BlackBerry Bold Touch 9930. Screen specifications: body 115 x 66 x 10.5 mm (4.53 x 2.60 x 0.41 in), weight 130 g (4.59 oz), QWERTY keyboard with touch-sensitive controls, TFT capacitive touchscreen, 16M colors, 640 x 480 pixels, 2.8 inches (~286 ppi pixel density). When I run the same JavaScript function, it displays width: 417, height: 313 (417 x 313).

    Why is it behaving like this? Does anybody know the relation between device (core) pixels and HTML pixels? Please give an answer.

  • Best approach to storing image pixels in bottom-up order in Java

    - by finnw
    I have an array of bytes representing an image in Windows BMP format, and I would like my library to present it to the Java application as a BufferedImage without copying the pixel data. The main problem is that all implementations of Raster in the JDK store image pixels in top-down, left-to-right order, whereas BMP pixel data is stored bottom-up, left-to-right. If this is not compensated for, the resulting image will be flipped vertically.

    The most obvious "solution" is to set the SampleModel's scanlineStride property to a negative value and change the band offsets (or the DataBuffer's array offset) to point to the top-left pixel, i.e. the first pixel of the last line in the array. Unfortunately this does not work, because all of the SampleModel constructors throw an exception if given a negative scanlineStride argument. I am currently working around it by forcing the scanlineStride field to a negative value using reflection, but I would like to do it in a cleaner and more portable way if possible. E.g. is there another way to fool the Raster or SampleModel into arranging the pixels in bottom-up order without breaking encapsulation? Or is there a library somewhere that will wrap the Raster and SampleModel, presenting the pixel rows in reverse order?

    I would prefer to avoid the following approaches:

    - Copying the whole image (for performance reasons: the code must process hundreds of large (>= 1 Mpixel) images per second, and although the whole image must be available to the application, it will normally access only a tiny but hard-to-predict portion of it).
    - Modifying the DataBuffer to perform coordinate transformation (this actually works, but it is another "dirty" solution, because the buffer should not need to know about the scanline/pixel layout).
    - Re-implementing the Raster and/or SampleModel interfaces from scratch (but I have a hunch that I will be unable to avoid this).
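
    For concreteness, a rough sketch of the reflection workaround mentioned above might look like the following. This is illustrative only: it assumes 24-bit BGR pixel data with no row padding, it pokes a protected field of a JDK class, and newer JDKs may require --add-opens java.desktop/java.awt.image=ALL-UNNAMED for the setAccessible call to succeed. The class and method names are made up for the example, and the resulting sample model would still have to be wrapped in a Raster and BufferedImage as the question describes.

        import java.awt.image.ComponentSampleModel;
        import java.awt.image.DataBuffer;
        import java.lang.reflect.Field;

        public final class BottomUpHack {

            // Builds a sample model whose scanline stride is forced negative after construction.
            public static ComponentSampleModel bottomUpSampleModel(int width, int height)
                    throws ReflectiveOperationException {
                int pixelStride = 3;                        // assumes 24-bit BGR, no row padding
                int scanlineStride = width * pixelStride;

                // Band offsets point at the first pixel of the last stored row, so that
                // image row 0 resolves to the end of the bottom-up BMP pixel array.
                int lastRow = (height - 1) * scanlineStride;
                int[] bandOffsets = { lastRow + 2, lastRow + 1, lastRow };  // R, G, B from BGR bytes

                ComponentSampleModel sm = new ComponentSampleModel(
                        DataBuffer.TYPE_BYTE, width, height, pixelStride, scanlineStride, bandOffsets);

                // The "dirty" part: flip the stride after the constructor's validation has run.
                Field f = ComponentSampleModel.class.getDeclaredField("scanlineStride");
                f.setAccessible(true);
                f.setInt(sm, -scanlineStride);              // do this before building any Raster from it
                return sm;
            }
        }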

  • How to efficiently color pixels in a BufferedImage?

    - by Ed Taylor
    I'm using the following piece of code to iterate over all pixels in an image and draw a red 1x1 square over the pixels that are within a certain RGB tolerance. I guess there is a more efficient way to do this? Any ideas appreciated. (bi is a BufferedImage and g2 is a Graphics2D with its color set to Color.RED.)

        Color targetColor = new Color(selectedRGB);
        for (int x = 0; x < bi.getWidth(); x++) {
            for (int y = 0; y < bi.getHeight(); y++) {
                Color pixelColor = new Color(bi.getRGB(x, y));
                if (withinTolerance(pixelColor, targetColor)) {
                    g2.drawRect(x, y, 1, 1);
                }
            }
        }

        private boolean withinTolerance(Color pixelColor, Color targetColor) {
            int pixelRed = pixelColor.getRed();
            int pixelGreen = pixelColor.getGreen();
            int pixelBlue = pixelColor.getBlue();
            int targetRed = targetColor.getRed();
            int targetGreen = targetColor.getGreen();
            int targetBlue = targetColor.getBlue();
            return (((pixelRed >= targetRed - tolRed) && (pixelRed <= targetRed + tolRed))
                && ((pixelGreen >= targetGreen - tolGreen) && (pixelGreen <= targetGreen + tolGreen))
                && ((pixelBlue >= targetBlue - tolBlue) && (pixelBlue <= targetBlue + tolBlue)));
        }
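
    One common way to speed this up (a sketch, not from the original post; the class and method names are made up for illustration) is to pull all the pixels into an int[] with a single bulk getRGB call, test the packed channels with bit operations, and write the matches back with one bulk setRGB. That avoids creating two Color objects and issuing a drawRect call per pixel.

        import java.awt.image.BufferedImage;

        public class HighlightPixels {

            public static void highlight(BufferedImage bi, int targetRGB,
                                         int tolRed, int tolGreen, int tolBlue) {
                int w = bi.getWidth();
                int h = bi.getHeight();
                int[] pixels = bi.getRGB(0, 0, w, h, null, 0, w);   // one bulk read

                int tr = (targetRGB >> 16) & 0xFF;
                int tg = (targetRGB >> 8) & 0xFF;
                int tb = targetRGB & 0xFF;

                for (int i = 0; i < pixels.length; i++) {
                    int p = pixels[i];
                    int r = (p >> 16) & 0xFF;
                    int g = (p >> 8) & 0xFF;
                    int b = p & 0xFF;
                    if (Math.abs(r - tr) <= tolRed
                            && Math.abs(g - tg) <= tolGreen
                            && Math.abs(b - tb) <= tolBlue) {
                        pixels[i] = 0xFFFF0000;                     // opaque red
                    }
                }
                bi.setRGB(0, 0, w, h, pixels, 0, w);                // one bulk write
            }
        }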

  • What is the pixel clock setting on my monitor actually doing?

    - by codecowboy
    I am experiencing display interference on a Dell 24" flat panel monitor. I find that if I adjust the pixel clock setting up or down in the monitor's on-screen menus, the interference goes away for a while. The monitor is attached to a MacBook Pro using a Mini DisplayPort to VGA adapter. I have found that in a different house I get the interference problem less, so it might be related to the electricity supply or possibly even Ethernet powerline (total guess). What does the pixel clock setting actually do, and does this behaviour point to a likely cause of the interference?

  • Is a single Java array the best choice for accessing pixels for manipulation?

    - by Petrol
    I am just watching this tutorial https://www.youtube.com/watch?v=HwUnMy_pR6A and the guy (who seems to be pretty competent) is using a single array to store and access the pixels of his to-be-rendered image. I was wondering if this really is the best way to do it. A multi-dimensional array only costs one more pointer indirection, and arrays have O(1) access per index, while computing the index into a single array seems to take one addition and one multiplication per pixel. And if multi-dimensional arrays really are bad, couldn't you use something with hashing to avoid those addition and multiplication operations? EDIT: here is his code...

        public class Screen {
            private int width, height;
            public int[] pixels;

            public Screen(int width, int height) {
                this.width = width;
                this.height = height;
                // creating array the size of one index/int for every pixel
                // single array has better performance than multi-array
                pixels = new int[width * height];
            }

            public void render() {
                for (int y = 0; y < height; y++) {
                    for (int x = 0; x < width; x++) {
                        pixels[x + y * width] = 0xff00ff;
                    }
                }
            }
        }
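
    For reference, a small standalone sketch of the row-major index mapping the tutorial relies on (illustrative only, names made up): a pixel at column x, row y of a width-w image lives at flat index x + y * w, and one division plus one modulo recovers the coordinates.

        public class FlatIndexDemo {
            public static void main(String[] args) {
                int w = 640, h = 480;
                int[] pixels = new int[w * h];

                int x = 10, y = 3;
                int index = x + y * w;      // 10 + 3 * 640 = 1930
                pixels[index] = 0xff00ff;

                int backX = index % w;      // 1930 % 640 = 10
                int backY = index / w;      // 1930 / 640 = 3
                System.out.println(backX + ", " + backY);
            }
        }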

  • Xcode - Drawing Pixels

    - by Brett
    Hi guys, I am trying to draw individual pixels in Xcode to be output to the iPhone. I do not know any OpenGL or Quartz coding, but I do know a bit about Core Graphics. I was thinking about drawing small rectangles with a width and height of one, but I do not know how to implement this in code or how to get it to show in the view. Any help is greatly appreciated. Thanks, Brett

  • UIImage change raw pixels from white to clear?

    - by christo16
    I've tried some code from each of these questions: "How to make one color transparent on a UIImage?" and "How to mask a UIImage so that white becomes transparent on iPhone?", but have come up unsuccessful; unfortunately, working with Core Graphics and images is not my strong suit. How would I go about accessing a UIImage's raw data and changing the white pixels to clear? Thanks!

  • UIControlEventTouchDragExit only fires ~100 pixels out

    - by Jonathan
    I'm trying to detect when a finger/touch leaves a UIButton in Objective-C on the iPhone. I was told in another answer to use UIControlEventTouchDragExit, however this event only fires when the touch gets about 100 pixels away from the button, whereas I would like it to be immediate. The Apple docs say it goes according to the bounds, but my understanding is that the bounds and the frame are the same unless you rotate the UIButton (or whatever).

  • GDI+: Set all pixels to given color

    - by Charles
    What is the best way to set the RGB components of every pixel in a System.Drawing.Bitmap to a single, solid color? If possible, I'd like to avoid manually looping through each pixel to do this. Note: I want to keep the same alpha component from the original bitmap. I only want to change the RGB values. I looked into using a ColorMatrix or ColorMap, but I couldn't find any way to set all pixels to a specific given color with either approach.

  • How to read the screen pixels?

    - by Newbie
    I want to read a rectangular area, or just one pixel, from my screen, as if the screenshot button had been pressed, but without necessarily copying the whole screen. So I'm not talking just about my own program's window pixels. How do I do this?
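
    The question doesn't name a language or platform; if Java happens to be an option, a minimal sketch using java.awt.Robot (a standard AWT class) can read arbitrary screen pixels, not just your own window. The coordinates below are made up for illustration.

        import java.awt.AWTException;
        import java.awt.Color;
        import java.awt.Rectangle;
        import java.awt.Robot;
        import java.awt.image.BufferedImage;

        public class ScreenGrab {
            public static void main(String[] args) throws AWTException {
                Robot robot = new Robot();

                // Read a single pixel at screen coordinates (100, 200).
                Color c = robot.getPixelColor(100, 200);
                System.out.println(c);

                // Read a 300 x 200 rectangle whose top-left corner is at (50, 50).
                BufferedImage region = robot.createScreenCapture(new Rectangle(50, 50, 300, 200));
                System.out.println(region.getRGB(0, 0));
            }
        }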

  • WPF: pixels, DPI and resolution

    - by Akshay
    I am reading a book on WPF. As with all books, it gives an introduction to the problems that earlier display systems had. The author refers to terms such as DPI, pixels and resolution. Is there any place where I can learn about them and about how they are related to each other?
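
    (A quick worked example of how these terms relate, based on the standard definitions rather than anything in the question: WPF measures layout in device-independent units of 1/96 inch, so an element declared 96 units wide is intended to appear one inch wide. On a 96 DPI display that is 96 physical pixels; on a 144 DPI display the same one-inch element is rendered with 96 x 144/96 = 144 physical pixels. Resolution is just the pixel count of the display, so a 1920-pixel-wide panel at 144 DPI is about 1920/144 = 13.3 inches across.)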

  • How do I multiply pixels on an SDL Surface?

    - by NoobScratcher
    Okay, so I'm able to put blank pixels into a surface and also draw gradient pixels, rectangles, etc., but I don't know how to multiply the pixels on a surface, so I was hoping someone could provide information on this topic. I was thinking you could get the member pixels and then multiply it by 2, but that didn't provide the results I wanted, so I'm now thinking that you have to actually get the position in bytes of one location to the left and one location to the right, store it in memory, and then multiply that by 2. Am I correct, or what? If so, what is it that allows me to do that, and how do I do it?

  • Are all <canvas> tag dimensions in pixels?

    - by Simon Omega
    Are all <canvas> tag dimensions in pixels? I am asking because I understood them to be, but either my math is broken or I am just not grasping something here. I have been doing Python mostly and just jumped back into JavaScript. If I am just doing something stupid, let me know. For a game I am writing, I wanted to have a blocky gradient. I have the following:

    HTML

        <canvas id="heir"></canvas>

    CSS

        @media screen {
            body { font-size: 12pt }
            /* Game Rendering Space */
            canvas {
                width: 640px;
                height: 480px;
                border-style: solid;
                border-width: 1px;
            }
        }

    JavaScript (shortened)

        function testDraw(thecontext) {
            var myblue = 255;
            thecontext.save(); // Save all settings (before this function was called)
            for (var i = 0; i < 480; i = i + 10) {
                if (myblue.toString(16).length == 1) {
                    thecontext.fillStyle = "#00000" + myblue.toString(16);
                } else {
                    thecontext.fillStyle = "#0000" + myblue.toString(16);
                }
                thecontext.fillRect(0, i, 640, 10);
                myblue = myblue - 2;
            };
            thecontext.restore(); // Restore settings to save point (removing styles, etc...)
        }

        function main() {
            var targetcontext = document.getElementById("main").getContext("2d");
            testDraw(targetcontext);
        }

    To me this should produce a series of 640w by 10h pixel bars. In Google Chrome and Firefox I get 15 bars. To me that means (480 / 15) 32-pixel-high bars. So I change the code to:

        function testDraw(thecontext) {
            var myblue = 255;
            thecontext.save(); // Save all settings (before this function was called)
            for (var i = 0; i < 16; i++) {
                if (myblue.toString(16).length == 1) {
                    thecontext.fillStyle = "#00000" + myblue.toString(16);
                } else {
                    thecontext.fillStyle = "#0000" + myblue.toString(16);
                }
                thecontext.fillRect(0, (i * 10), 640, 10);
                myblue = myblue - 10;
            };
            thecontext.restore(); // Restore settings to save point (removing styles, etc...)
        }

    and get a true 32-pixel-height result for comparison. Other than the fact that the first code snippet has shades of blue rendering in non-visible portions of the canvas, they are measuring 32 pixels. Now back to the original JavaScript code... If I inspect the <canvas> tag in Chrome it reports 640 x 480. If I inspect it in Firefox it reports 640 x 480. BUT! Firefox exports the original code to PNG at 300 x 150 (which is 15 rows of 10). Is it somehow being resized to 640 x 480 by the CSS instead of being set to a true 640 x 480? Why, how, what? O_o I'm confused...

  • How to access pixels of an NSBitmapImageRep?

    - by Paperflyer
    I have an NSBitmapImageRep that is created like this:

        NSBitmapImageRep *imageRep = [[NSBitmapImageRep alloc]
            initWithBitmapDataPlanes:NULL
                          pixelsWide:waveformSize.width
                          pixelsHigh:waveformSize.height
                       bitsPerSample:8
                     samplesPerPixel:4
                            hasAlpha:YES
                            isPlanar:YES
                      colorSpaceName:NSCalibratedRGBColorSpace
                         bytesPerRow:0
                        bitsPerPixel:0];

    Now I want to access the pixel data, so I get a pointer to the pixel planes using:

        unsigned char *bitmapData;
        [imageRep getBitmapDataPlanes:&bitmapData];

    According to the documentation this returns a C array of five character pointers. But how can it do that? Since the type of the argument is unsigned char **, it can only return an array of chars, but not an array of char pointers. So this leaves me wondering how to access the individual pixels. Do you have an idea how to do that? (I know there is the method -setColor:atX:y:, but it seems to be pretty slow if invoked for every single pixel of a big bitmap.)

  • Imported Collada model doesn't align to pixels

    - by Dan Friedman
    Assume I have a model that is simply a cube. (It is more complicated than a cube, but for the purposes of this discussion we will simplify.) So when I am in SketchUp, the cube is X mm by X mm by X mm, where X is an integer. I then export a Collada file and subsequently load it into three.js. Now if I look at the geometry bounding box, the values are floats, not integers. So now assume I am putting cubes next to each other with a small space in between, say 1 pixel. Because screens can't draw half pixels, sometimes I see one pixel and sometimes I see two, which causes a lack of uniformity. I think I can resolve this satisfactorily if I can somehow get the imported model to have integer dimensions. I have full access to all parts of the model starting with SketchUp, so any point in the process is fair game. Is it possible? Thanks.

  • Build turns partially transparent image pixels black

    - by Sean O'Hollaren
    I'm very new to C# and I've run into a problem I haven't been able to solve. I have a row of buttons that have .png images assigned to them. The images are in .png format to allow transparency, and smoothing the edges in GIMP leaves some semi-transparent pixels. I've set the image list toolbar (imglToolbar)'s properties to recognize "Transparent" as the designated color to show up as transparent. I'm working in Visual Studio 2005. The strange thing is that everything looks great when I'm viewing the Visual C# form preview window; the icons look exactly as they should. However, once I actually build the project, the buttons treat every semi-transparent pixel near the edge of the image as if it were black. It seems like it can't handle a pixel that is both transparent and has color. (Screenshots in the original post compared how it looks in the Visual C# form editor with what it looks like when built.) Any ideas as to why this is happening?

  • Working with image pixels

    - by Mario
    Hey guys, I'm trying to do a project in which I want to implement the following: I have a rotation matrix and a translation matrix that have been estimated. Now I have an image at a certain location, and I want to multiply all the image pixels by the rotation matrix and add the result to the translation matrix. My issue is how to work with the pixels. I mean, how do I extract the pixels from the image in order to do the operation that I mentioned above? It's OK to give me the suggestion in either OpenCV or C++. I need to know how to do this operation: new_p(x,y) = old(x,y) * rotation_matrix + translation_matrix. I'm defining the image as an IplImage(), a 3-channel image. For now I only need to do the geometrical transformation. Thank you.

  • View is moved 3 pixels

    - by Jakub
    Hello, in my app I move the table view up (in order to make the text fields visible when the keyboard appears). This is the code I use for resizing the view and moving it up:

        static const NSUInteger navBarHeight = 44;
        CGRect appFrame = [[UIScreen mainScreen] applicationFrame];
        tableView.frame = CGRectMake(0, navBarHeight, appFrame.size.width,
                                     appFrame.size.height - navBarHeight - 216); // 216 for the keyboard
        NSIndexPath *indPath = [self getIndexPathForTextField:textField]; // get the field the view should scroll to
        [tableView scrollToRowAtIndexPath:indPath
                         atScrollPosition:UITableViewScrollPositionMiddle
                                 animated:YES];

    The problem is that when the view is moved up, it also moves 3 pixels to the right (it is hard to see the difference in the screenshots attached to the original post, but it is visible when the animation is on, and I measured the difference with the PixelStick tool). My analysis shows that scrolling the table does not influence the move to the right. Any ideas what is wrong in the code above that makes the view move to the right?

  • C++ Reading and Editing pixels of a bitmap image

    - by BettyD
    I'm trying to create a program which reads an image in (starting with bitmaps because they're the easiest), searches through the image for a particular pattern (i.e. one pixel of (255, 0, 0) followed by one of (0, 0, 255)), changes the pixels in some way (only within the program, not saving to the original file) and then displays it. I'm using Windows, simple GDI for preference, and I want to avoid custom libraries just because I want to really understand what I'm doing for this test program. Can anyone help? In any way? A website? Advice? Example code? Thanks

  • OpenGL pixels drawn with each horizontal pair swapped

    - by Tim Kane
    I'm somewhat new to OpenGL, though I'm fairly sure my problem lies in the pixel format being used, or how my texture is being generated... I'm drawing a texture onto a flat 2D quad using a 16-bit RGB5_A1 pixel format, though I don't make use of any alpha at this stage. The problem I'm having is that each pair of horizontal pixel values has been swapped. That is, if the pixel positions should be in this order (assume an 8x2 image):

        0 1 2 3 4 5 6 7

    they are instead drawn as:

        1 0 3 2 5 4 7 6

    Or, more clearly from the image in the original post: left is what I get, right is what I should get. The question is, how have I ended up with this? Is there something wrong with the pixel format? Unlikely, since the colours all appear correct, and I would expect all kinds of nasty if it were down to endianness. Suggestions greatly appreciated. Update: turns out the problem was in my source renderer. Interestingly, I've avoided the problem entirely by using 32-bit textures (haven't tried 24-bit at this point).
