Search Results

Search found 1519 results on 61 pages for 'energetic pixels'.

Page 9/61

  • Draw Bitmap with alpha channel

    - by Paja
    I have a Format32bppArgb backbuffer where I draw some lines:

        var g = Graphics.FromImage(bitmap);
        g.Clear(Color.FromArgb(0));
        var rnd = new Random();
        for (int i = 0; i < 5000; i++)
        {
            int x1 = rnd.Next(ClientRectangle.Left, ClientRectangle.Right);
            int y1 = rnd.Next(ClientRectangle.Top, ClientRectangle.Bottom);
            int x2 = rnd.Next(ClientRectangle.Left, ClientRectangle.Right);
            int y2 = rnd.Next(ClientRectangle.Top, ClientRectangle.Bottom);
            Color color = Color.FromArgb(rnd.Next(0, 255), rnd.Next(0, 255), rnd.Next(0, 255));
            g.DrawLine(new Pen(color), x1, y1, x2, y2);
        }

    Now I want to copy the bitmap in the Paint event. I do it like this:

        void Form1Paint(object sender, PaintEventArgs e)
        {
            e.Graphics.DrawImageUnscaled(bitmap, 0, 0);
        }

    However, DrawImageUnscaled copies the pixels and applies the alpha channel, so pixels with alpha == 0 won't have any effect. But I need a raw byte copy, so that pixels with alpha == 0 are also copied. The result of these operations should be that e.Graphics contains an exact byte-copy of the bitmap. How do I do that? Summary: when drawing a bitmap, I don't want to apply the alpha channel, I merely want to copy the pixels.
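
    A minimal sketch of one way to get that raw copy (not from the original post): System.Drawing can switch the Graphics compositing mode from the default SourceOver blend to SourceCopy, which overwrites the destination pixels, alpha included.

        using System.Drawing.Drawing2D;

        void Form1Paint(object sender, PaintEventArgs e)
        {
            // SourceCopy replaces destination pixels outright (alpha included)
            // instead of alpha-blending, which is the default SourceOver mode.
            e.Graphics.CompositingMode = CompositingMode.SourceCopy;
            e.Graphics.DrawImageUnscaled(bitmap, 0, 0);
        }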


  • Optimized OCR black/white pixel algorithm

    - by eagle
    I am writing a simple OCR solution for a finite set of characters. That is, I know exactly how all 26 letters of the alphabet will look. I am using C# and am able to easily determine whether a given pixel should be treated as black or white. I am generating a matrix of black/white pixels for every single character. So for example, the letter I (capital i) might look like the following:

        01110
        00100
        00100
        00100
        01110

    Note: all points, which I use later in this post, assume that the top left pixel is (0, 0) and the bottom right pixel is (4, 4). 1's represent black pixels, and 0's represent white pixels. I would create a corresponding matrix in C# like this:

        CreateLetter("I", new List<List<bool>>() {
            new List<bool>() { false, true, true, true, false },
            new List<bool>() { false, false, true, false, false },
            new List<bool>() { false, false, true, false, false },
            new List<bool>() { false, false, true, false, false },
            new List<bool>() { false, true, true, true, false }
        });

    I know I could probably optimize this part by using a multi-dimensional array instead, but let's ignore that for now; this is for illustrative purposes. Every letter is exactly the same dimensions, 10px by 11px (10px by 11px is the actual dimensions of a character in my real program; I simplified this to 5px by 5px in this posting since it is much easier to "draw" the letters using 0's and 1's on a smaller image). Now when I give it a 10px by 11px part of an image to analyze with OCR, it would need to run on every single letter (26) on every single pixel (10 * 11 = 110), which would mean 2,860 (26 * 110) iterations (in the worst case) for every single character. I was thinking this could be optimized by defining the unique characteristics of every character. So, for example, let's assume that the set of characters only consists of 5 distinct letters: I, A, O, B, and L. These might look like the following:

        I      A      O      B      L
        01110  00100  00100  01100  01000
        00100  01010  01010  01010  01000
        00100  01110  01010  01100  01000
        00100  01010  01010  01010  01000
        01110  01010  00100  01100  01110

    After analyzing the unique characteristics of every character, I can significantly reduce the number of tests that need to be performed to test for a character. For example, for the "I" character, I could define its unique characteristic as having a black pixel at coordinate (3, 0), since no other character has that pixel as black. So instead of testing 110 pixels for a match on the "I" character, I reduced it to a 1-pixel test. This is what it might look like for all these characters:

        var LetterI = new OcrLetter() {
            Name = "I",
            BlackPixels = new List<Point>() { new Point(3, 0) }
        }
        var LetterA = new OcrLetter() {
            Name = "A",
            WhitePixels = new List<Point>() { new Point(2, 4) }
        }
        var LetterO = new OcrLetter() {
            Name = "O",
            BlackPixels = new List<Point>() { new Point(3, 2) },
            WhitePixels = new List<Point>() { new Point(2, 2) }
        }
        var LetterB = new OcrLetter() {
            Name = "B",
            BlackPixels = new List<Point>() { new Point(3, 1) },
            WhitePixels = new List<Point>() { new Point(3, 2) }
        }
        var LetterL = new OcrLetter() {
            Name = "L",
            BlackPixels = new List<Point>() { new Point(1, 1), new Point(3, 4) },
            WhitePixels = new List<Point>() { new Point(2, 2) }
        }

    This is challenging to do manually for 5 characters and gets much harder as more letters are added. You also want to guarantee that you have the minimum set of unique characteristics of a letter, since you want it to be optimized as much as possible.
    I want to create an algorithm that will identify the unique characteristics of all the letters and generate similar code to that above. I would then use this optimized black/white matrix to identify characters. How do I take the 26 letters that have all their black/white pixels filled in (e.g. the CreateLetter code block) and convert them to an optimized set of unique characteristics that define a letter (e.g. the new OcrLetter() code block)? And how would I guarantee that it is the most efficient definition set of unique characteristics (e.g. instead of defining 6 points as the unique characteristics, there might be a way to do it with 1 or 2 points, as the letter "I" in my example was able to)? An alternative solution I've come up with is using a hash table, which will reduce it from 2,860 iterations to 110 iterations, a 26-fold reduction. This is how it might work: I would populate it with data similar to the following:

        Letters["01110 00100 00100 00100 01110"] = "I";
        Letters["00100 01010 01110 01010 01010"] = "A";
        Letters["00100 01010 01010 01010 00100"] = "O";
        Letters["01100 01010 01100 01010 01100"] = "B";

    Now when I reach a location in the image to process, I convert it to a string such as "01110 00100 00100 00100 01110" and simply find it in the hash table. This solution seems very simple; however, it still requires 110 iterations to generate this string for each letter. In big O notation, the algorithm is the same, since O(110N) = O(2860N) = O(N) for N letters to process on the page. However, it is still improved by a constant factor of 26, a significant improvement (e.g. instead of it taking 26 minutes, it would take 1 minute). Update: Most of the solutions provided so far have not addressed the issue of identifying the unique characteristics of a character and rather provide alternative solutions. I am still looking for this solution, which, as far as I can tell, is the only way to achieve the fastest OCR processing. I just came up with a partial solution: for each pixel in the grid, store the letters that have it as a black pixel. Using these letters:

        I      A      O      B      L
        01110  00100  00100  01100  01000
        00100  01010  01010  01010  01000
        00100  01110  01010  01100  01000
        00100  01010  01010  01010  01000
        01110  01010  00100  01100  01110

    You would have something like this:

        CreatePixel(new Point(0, 0), new List<Char>() { });
        CreatePixel(new Point(1, 0), new List<Char>() { 'I', 'B', 'L' });
        CreatePixel(new Point(2, 0), new List<Char>() { 'I', 'A', 'O', 'B' });
        CreatePixel(new Point(3, 0), new List<Char>() { 'I' });
        CreatePixel(new Point(4, 0), new List<Char>() { });
        CreatePixel(new Point(0, 1), new List<Char>() { });
        CreatePixel(new Point(1, 1), new List<Char>() { 'A', 'B', 'L' });
        CreatePixel(new Point(2, 1), new List<Char>() { 'I' });
        CreatePixel(new Point(3, 1), new List<Char>() { 'A', 'O', 'B' });
        // ...
        CreatePixel(new Point(2, 2), new List<Char>() { 'I', 'A', 'B' });
        CreatePixel(new Point(3, 2), new List<Char>() { 'A', 'O' });
        // ...
        CreatePixel(new Point(2, 4), new List<Char>() { 'I', 'O', 'B', 'L' });
        CreatePixel(new Point(3, 4), new List<Char>() { 'I', 'A', 'L' });
        CreatePixel(new Point(4, 4), new List<Char>() { });

    Now for every letter, in order to find the unique characteristics, you need to look at which buckets it belongs to, as well as the number of other characters in each bucket. So let's take the example of "I". We go to all the buckets it belongs to (1,0; 2,0; 3,0; ...; 3,4) and see that the one with the fewest other characters is (3,0).
    In fact, it only has 1 character, meaning it must be an "I" in this case, and we have found our unique characteristic. You can also do the same for pixels that would be white. Notice that bucket (2,0) contains all the letters except for "L"; this means that it could be used as a white-pixel test. Similarly, (2,4) doesn't contain an 'A'. Buckets that contain either all the letters or none of the letters can be discarded immediately, since those pixels can't help define a unique characteristic (e.g. 1,1; 4,0; 0,1; 4,4). It gets trickier when you don't have a 1-pixel test for a letter, for example in the case of 'O' and 'B'. Let's walk through the test for 'O'. It's contained in the following buckets:

        // Bucket  Count  Letters
        // 2,0     4      I, A, O, B
        // 3,1     3      A, O, B
        // 3,2     2      A, O
        // 2,4     4      I, O, B, L

    Additionally, we also have a few white-pixel tests that can help (I only listed those that are missing at most 2). The Missing Count was calculated as (5 - Bucket.Count).

        // Bucket  Missing Count  Missing Letters
        // 1,0     2              A, O
        // 1,1     2              I, O
        // 2,2     2              O, L
        // 3,4     2              O, B

    So now we can take the shortest black-pixel bucket (3,2) and see that when we test for (3,2) we know it is either an 'A' or an 'O'. So we need an easy way to tell the difference between an 'A' and an 'O'. We could either look for a black-pixel bucket that contains 'O' but not 'A' (e.g. 2,4) or a white-pixel bucket that contains an 'O' but not an 'A' (e.g. 1,1). Either of these could be used in combination with the (3,2) pixel to uniquely identify the letter 'O' with only 2 tests. This seems like a simple algorithm when there are 5 characters, but how would I do this when there are 26 letters and a lot more pixels overlapping? For example, let's say that after the (3,2) pixel test, it found 10 different characters that contain the pixel (and this was the fewest of all the buckets). Now I need to find differences from 9 other characters instead of only 1 other character. How would I achieve my goal of performing as few checks as possible, and ensure that I am not running extraneous tests?
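
    Below is a sketch (my own illustration, not from the post) of how the bucket idea generalizes: treat every pixel as a candidate test that keeps only the letters agreeing with the target there, and greedily pick whichever test eliminates the most remaining candidates. Finding the guaranteed-minimum set of tests is equivalent to set cover, which is NP-hard, so a greedy near-minimal signature is the realistic goal. It assumes glyphs maps each letter to its bool[height, width] black/white matrix:

        using System;
        using System.Collections.Generic;
        using System.Drawing;
        using System.Linq;

        static List<Point> FindSignature(char target, Dictionary<char, bool[,]> glyphs, int w, int h)
        {
            var candidates = new HashSet<char>(glyphs.Keys.Where(c => c != target));
            var signature = new List<Point>();
            bool[,] t = glyphs[target];

            while (candidates.Count > 0)
            {
                Point best = new Point(0, 0);
                int bestEliminated = -1;
                for (int y = 0; y < h; y++)
                    for (int x = 0; x < w; x++)
                    {
                        // A test asserts that pixel (x, y) equals the target's value there;
                        // it eliminates every candidate letter whose pixel differs.
                        int eliminated = candidates.Count(c => glyphs[c][y, x] != t[y, x]);
                        if (eliminated > bestEliminated)
                        {
                            bestEliminated = eliminated;
                            best = new Point(x, y);
                        }
                    }
                if (bestEliminated == 0)
                    throw new InvalidOperationException("Another glyph is identical to the target.");
                signature.Add(best);
                candidates.RemoveWhere(c => glyphs[c][best.Y, best.X] != t[best.Y, best.X]);
            }
            return signature; // black tests where t is true, white tests where t is false
        }

    On the 5-letter example this picks (3, 0) for "I" in a single step, matching the hand-derived test.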


  • Delphi fast large bitmap creation (without clearing)

    - by Ritsaert Hornstra
    When using the TBitmap wrapper for a GDI bitmap from the unit Graphics, I noticed it will always clear out the bitmap (using a PatBlt call) when setting up a bitmap with SetSize( w, h ). When I copy in the bits later on (see routine below), it seems ScanLine is the fastest possibility, not SetDIBits.

        function ToBitmap: TBitmap;
        var
          i, N, x: Integer;
          S, D: PAnsiChar;
        begin
          Result := TBitmap.Create();
          Result.PixelFormat := pf32bit;
          Result.SetSize( width, height );
          S := Src;
          D := Result.ScanLine[ 0 ];
          x := Integer( Result.ScanLine[ 1 ] ) - Integer( D );
          N := width * sizeof( longword );
          for i := 0 to height - 1 do
          begin
            Move( S^, D^, N );
            Inc( S, N );
            Inc( D, x );
          end;
        end;

    The bitmaps I need to work with are quite large (150 MB of RGB memory). With images this size it takes 150 ms simply to create an empty bitmap and a further 140 ms to overwrite its contents. Is there a way of initializing a TBitmap with the correct size WITHOUT initializing the pixels themselves, leaving the pixel memory uninitialized (e.g. dirty)? Or is there another way to do such a thing? I know we could work on the pixels in place, but this still leaves the 150 ms of unnecessary initialization of the pixels.


  • Rotate using a transform, then change frame origin, and view expands??

    - by ZaBlanc
    This is quite the iPhone quandary. I am working on a library, but have narrowed down my problem to very simple code. What this code does is create a 50x50 view, apply a rotation transform of a few degrees, then shift the frame down a few times. The result is that the 50x50 view now looks much larger. Here's the code:

        // a simple 50x50 view
        UIView *redThing = [[UIView alloc] initWithFrame:CGRectMake(50, 50, 50, 50)];
        redThing.backgroundColor = [UIColor redColor];
        [self.view addSubview:redThing];

        // rotate a small amount (as long as it's not 90 or 180, etc.)
        redThing.transform = CGAffineTransformRotate(redThing.transform, 0.1234);

        // move the view down 2 pixels
        CGRect newFrame = CGRectMake(redThing.frame.origin.x, redThing.frame.origin.y + 2, redThing.frame.size.width, redThing.frame.size.height);
        redThing.frame = newFrame;

        // move the view down another 2 pixels
        newFrame = CGRectMake(redThing.frame.origin.x, redThing.frame.origin.y + 2, redThing.frame.size.width, redThing.frame.size.height);
        redThing.frame = newFrame;

        // move the view down another 2 pixels
        newFrame = CGRectMake(redThing.frame.origin.x, redThing.frame.origin.y + 2, redThing.frame.size.width, redThing.frame.size.height);
        redThing.frame = newFrame;

        // move the view down another 2 pixels
        newFrame = CGRectMake(redThing.frame.origin.x, redThing.frame.origin.y + 2, redThing.frame.size.width, redThing.frame.size.height);
        redThing.frame = newFrame;

    So, what the heck is going on? Now, if I move the view by applying a translation transform instead, it works just fine. But that's not what I want to do, and this should work anyway. Any ideas?


  • iPhone: How to Determine Average Light/Dark of an Area of a UIImage

    - by TechZen
    I need to place labels with a transparent background over a variable-content UIImage. Readability will vary significantly depending on the relationship between the color of the label's text and the color/luminosity of the area of the image displayed under the label. Since the image will be constantly changing, the color of the label's text needs to change in sync. I have found several techniques for determining the color, perceived luminosity, etc. of a single pixel. However, I need to rather quickly (while a view loads) determine the rough perceived color/luminosity of an area of the UIImage under the frame of the UILabel. I presume I will also need to measure the alpha, because the same color/luminosity looks different at different alpha values. Is there a way to calculate such a value for an area? Will I be reduced to simply summing pixels? If it comes to that, is there an algorithm to accomplish this? I've thought of two possible approaches: Perform some "folding" operations, i.e. combining pixels from one half of the area with the other half, then repeating until I get a single value. Would this be practical? How would you logically combine pixels to average their perceived color/luminosity? Or: sample a statistically significant number of pixels in the area and then combine them (somehow) to get a rough measure. I think this problem comes up a lot these days with people being so fond of customizing backgrounds. Seems like something that would be worth my time to bang out a category or class to handle, and then share it around.
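
    For the sampling idea, here is a platform-neutral sketch (my own illustration; written in C# over a System.Drawing bitmap like the other examples on this page, though the question is about iOS). The Rec. 601 luma weights and the sampling step are assumptions, not requirements:

        using System.Drawing;

        static double AverageLuminance(Bitmap image, Rectangle area, int step)
        {
            double sum = 0, weight = 0;
            for (int y = area.Top; y < area.Bottom; y += step)
                for (int x = area.Left; x < area.Right; x += step)
                {
                    Color c = image.GetPixel(x, y);
                    // Rec. 601 perceived-luminance weights, scaled by alpha so
                    // nearly transparent pixels barely influence the result.
                    double luma = 0.299 * c.R + 0.587 * c.G + 0.114 * c.B;
                    double alpha = c.A / 255.0;
                    sum += luma * alpha;
                    weight += alpha;
                }
            return weight > 0 ? sum / weight : 255; // treat a fully transparent area as light
        }

    Comparing the result against a mid-gray threshold (e.g. 128) would then drive the choice between light and dark label text.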


  • Problem with a very simple tile-based game

    - by newbieguy
    Hello, I am trying to create a pacman-like game. I have an array that looks like this:

        array:
        1111111111111
        1000000000001
        1111110111111
        1000000000001
        1111111111111

        1 = Wall, 0 = Empty space

    I use this array to draw tiles that are 16x16 in size. The game character is 32x32. Initially I represented the character's position in array indexes, [1,1] etc. I would update his position if array[character.new_y][character.new_x] == 0, then translate these array coordinates to pixels, [y*16, x*16], to draw him. He was lining up nicely and wouldn't go into walls, but I noticed that since I was updating him by 16 pixels per move, he was moving very fast. I decided to do it in reverse, to store the game character's position in pixels instead, so that he could move less than 16 pixels per step. I thought that a simple if statement such as this:

        if array[(character.new_pixel_y)/16][(character.new_pixel_x)/16] == 0

    would prevent him from going into walls, but unfortunately he eats a bit of the bottom and right side walls. Any ideas how I would properly translate the pixel position to array indexes? I guess this is something simple, but I really can't figure it out :(
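
    A likely fix (my reading of the symptom, not from the post): a 32x32 character overlaps several 16x16 tiles, so dividing a single point by 16 only tests the top-left corner and lets the right and bottom edges sink into walls. A C# sketch that tests every tile under the character's bounding box; the -1 keeps the far edges from spilling into the neighboring tile:

        const int TileSize = 16;

        static bool CanMoveTo(int[][] map, int newX, int newY, int width, int height)
        {
            int left   = newX / TileSize;
            int right  = (newX + width - 1) / TileSize;   // last covered pixel, not one past it
            int top    = newY / TileSize;
            int bottom = (newY + height - 1) / TileSize;

            for (int ty = top; ty <= bottom; ty++)
                for (int tx = left; tx <= right; tx++)
                    if (map[ty][tx] != 0) return false;   // wall in the way
            return true;
        }

    For the 32x32 character, width and height would be 32; the move is allowed only when every covered tile is empty.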


  • Why can't IE6 show semi-transparent PNG8 files with the alpha filter?

    - by vvo
    -- read the whole question before answering -- Hi, I work on a big website that had a lot (45,000+) of PNG24 images (with semi-transparency). I converted them to PNG8 and it works very well (a big help on page load time...). The thing is, I had to keep PNG24 files for IE6 users (with the alpha filter to get semi-transparent pixels), because we all know that we can't use PNG8 semi-transparent images in IE6: the semi-transparent pixels will be rendered either opaque or completely transparent. I tried to use the AlphaImageLoader filter with PNG8 images, but it just doesn't work; the pixels are still opaque/completely transparent, with no semi-transparency. What's the reason it's not working? Is there a difference for IE when dealing with semi-transparent pixels from a PNG24 versus a PNG8? I couldn't find any information on the MSDN websites or on Stack Overflow... This is crazy...! DISCLAIMER: I'm not searching for a f**kin' "fix IE6 png" hack or anything like that; I already know the AlphaImageLoader and .htc techniques, etc. These all work well with PNG24 files but don't work with PNG8 files.


  • Is there a quality, file-size, or other benefit to JPEG sizes being multiples of 8px or 16px?

    - by davebug
    The JPEG compression encoding process splits a given image into blocks of 8x8 pixels, working with these blocks in future lossy and lossless compressions. [source] It is also mentioned that if the image is a multiple of the 1 MCU block size (an MCU, or Minimum Coded Unit, is 'usually 16 pixels in both directions'), lossless alterations to a JPEG can be performed. [source] I am working with product images and would like to know both whether, and how much, benefit can be derived from using multiples of 16 in my final image size (say, using an image sized 480px by 360px) vs. a non-multiple of 16 (such as 484x362). In this example I am not interested in further alterations, editing, or recompression of the final image. To try to get closer to a specific answer, where I know there must be largely generalities: Given a 480x360 image that is 64k and saved at maximum quality in Photoshop [example]: Can I expect any quality loss from an image that is 484x362? What amount of file-size increase can I expect (for this example, the additional space would be white pixels)? Are there any other disadvantages to growing larger than the 8px grid? I know it's arbitrary to use that specific example, but it would still be helpful (for me and potentially any others pondering an image size) to understand what level of compromise I'd be dealing with in breaking the 8px grid. The key issue here is a debate I've had over whether 8-pixel-divisible images are higher quality than images that are not divisible by 8 pixels.


  • Is there a way to capture a bitmap from a WPF window using native C++?

    - by Mike Caron
    Imagine a document window in an MDI application which contains a child WPF window, say a sidebar, for example. How can one get a bitmap containing both the WPF pixels AND the GDI (non-WPF) pixels? I've discovered that when making my thumbnail preview for the Win7 taskbar app-icon hover, I get black in the parts of the preview where the WPF pixels should be. My current method simply grabs a bitmap capture of the document window. Then I get a DC for the preview, make a memory DC from it, and select my bitmap into it. Then I do some size adjustments and bitblt the memory DC to the real DC. I'm guessing that the BitBlt operation doesn't take into account the fact that the WPF pixels are hardware-accelerated and therefore need to be grabbed from the graphics hardware. All the stuff in GDI is managed just fine, though, and when there are no WPF child windows, the preview image looks fine. I'm wondering if it's at all possible to grab a bitmap of the WPF window from native C++. Then I can blt that onto the black area of the previous preview.
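
    On the managed side, WPF can rasterize its own visual tree with RenderTargetBitmap, so one option is to capture the WPF child there and hand the pixels over for the native BitBlt composition. A minimal sketch (the 96-dpi values and the Pbgra32 format are typical choices, not requirements):

        using System.Windows;
        using System.Windows.Media;
        using System.Windows.Media.Imaging;

        static BitmapSource CaptureWpfElement(FrameworkElement element)
        {
            // Renders the element (and its children) via WPF itself, so the
            // hardware-accelerated pixels that BitBlt misses are included.
            var rtb = new RenderTargetBitmap(
                (int)element.ActualWidth, (int)element.ActualHeight,
                96, 96, PixelFormats.Pbgra32);
            rtb.Render(element);
            return rtb; // extract raw bytes with CopyPixels, or save via PngBitmapEncoder
        }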


  • Mobile Friendly Websites with CSS Media Queries

    - by dwahlin
    In a previous post the concept of CSS media queries was introduced and I discussed the fundamentals of how they can be used to target different screen sizes. I showed how they could be used to convert a 3-column-wide page into a more vertical view of data that displays better on devices such as an iPhone. In this post I'll provide an additional look at how CSS media queries can be used to mobile-enable a sample site called "Widget Masters" without having to change any server-side code or HTML code. This site has some of the standard items shown in most websites today, including a title area, menu bar, and sections where data is displayed. Without CSS media queries the site is readable on a mobile device, but it has to be zoomed out to see everything, cuts off some of the menu items, and requires horizontal scrolling to get to additional content. While the site works on mobile devices, it's definitely not optimized for mobile. Let's take a look at how CSS media queries can be used to override existing styles in the site based on different screen widths.

    Adding CSS Media Queries into a Site

    The Widget Masters website relies on standard CSS combined with HTML5 elements to provide the layout shown earlier. For example, to lay out the menu bar shown at the top of the page, the nav element is used as shown next. A standard div element could certainly be used as well if desired.

        <nav>
            <ul class="clearfix">
                <li><a href="#home">Home</a></li>
                <li><a href="#products">Products</a></li>
                <li><a href="#aboutus">About Us</a></li>
                <li><a href="#contactus">Contact Us</a></li>
                <li><a href="#store">Store</a></li>
            </ul>
        </nav>

    This HTML is combined with the CSS shown next to add a CSS3 gradient, handle the horizontal orientation, and add some general hover effects.

        nav { width: 100%; }
        nav ul {
            border-radius: 6px;
            height: 40px;
            width: 100%;
            margin: 0;
            padding: 0;
            background: rgb(125,126,125); /* Old browsers */
            background: -moz-linear-gradient(top, rgba(125,126,125,1) 0%, rgba(14,14,14,1) 100%); /* FF3.6+ */
            background: -webkit-gradient(linear, left top, left bottom, color-stop(0%,rgba(125,126,125,1)), color-stop(100%,rgba(14,14,14,1))); /* Chrome,Safari4+ */
            background: -webkit-linear-gradient(top, rgba(125,126,125,1) 0%, rgba(14,14,14,1) 100%); /* Chrome10+,Safari5.1+ */
            background: -o-linear-gradient(top, rgba(125,126,125,1) 0%, rgba(14,14,14,1) 100%); /* Opera 11.10+ */
            background: -ms-linear-gradient(top, rgba(125,126,125,1) 0%, rgba(14,14,14,1) 100%); /* IE10+ */
            background: linear-gradient(top, rgba(125,126,125,1) 0%, rgba(14,14,14,1) 100%); /* W3C */
            filter: progid:DXImageTransform.Microsoft.gradient( startColorstr='#7d7e7d', endColorstr='#0e0e0e',GradientType=0 ); /* IE6-9 */
        }
        nav ul > li {
            list-style: none;
            float: left;
            margin: 0;
            padding: 0;
        }
        nav ul > li:first-child { margin-left: 8px; }
        nav ul > li > a {
            color: #ccc;
            text-decoration: none;
            line-height: 2.8em;
            font-size: 0.95em;
            font-weight: bold;
            padding: 8px 25px 7px 25px;
            font-family: Arial, Helvetica, sans-serif;
        }
        nav ul > li a:hover {
            background-color: rgba(0, 0, 0, 0.1);
            color: #fff;
        }

    When mobile devices hit the site, the layout of the menu items needs to be adjusted so that they're all visible without having to swipe left or right to get to them. This type of modification can be accomplished using CSS media queries by targeting specific screen sizes.
    To start, a media query can be added into the site's CSS file as shown next:

        @media screen and (max-width:320px) {
            /* CSS style overrides for this screen width go here */
        }

    This media query targets screens that have a maximum width of 320 pixels. Additional types of queries can also be added; refer to my previous post for more details as well as resources that can be used to test media queries in different devices. In that post I emphasize (and I'll emphasize again) that CSS media queries only modify the overall layout and look and feel of a site. They don't optimize the site as far as the size of the images or content sent to the device, which is important to keep in mind. To make the navigation menu more accessible on devices such as an iPhone or Android, the CSS shown next can be used. This code changes the height of the menu from 40 pixels to 100%, takes off the li element floats, changes the line-height, and changes the margins.

        @media screen and (max-width:320px) {
            nav ul { height: 100%; }
            nav ul > li { float: none; }
            nav ul > li a { line-height: 1.5em; }
            nav ul > li:first-child { margin-left: 0px; }
            /* Additional CSS overrides go here */
        }

    This is what the menu looks like when run on a device with a width of 320 pixels. Mobile devices with a maximum width of 480 pixels need different CSS styles applied since they have 160 additional pixels of width. This can be done by adding a new CSS media query into the stylesheet as shown next. Looking through the CSS you'll see that only a minimal override is added, to adjust the padding of anchor tags, since the menu fits by default at this screen width.

        @media screen and (max-width: 480px) {
            nav ul > li > a { padding: 8px 10px 7px 10px; }
        }

    Running the site on a device 480 pixels wide renders the menu with much less space between the menu items compared to what is shown when the main site loads in a standard browser. In addition to modifying the menu, the 3 horizontal content sections shown earlier can be changed from a horizontal layout to a vertical layout so that they look good on a variety of smaller mobile devices and are easier for end users to navigate. The HTML5 article and section elements are used as containers for the 3 sections in the site as shown next:

        <article class="clearfix">
            <section id="info">
                <header>Why Choose Us?</header>
                <br />
                <img id="mainImage" src="Images/ArticleImage.png" title="Article Image" />
                <p>
                    Post emensos insuperabilis expeditionis eventus languentibus partium animis,
                    quas periculorum varietas fregerat et laborum, nondum tubarum cessante clangore
                    vel milite locato per stationes hibernas.
                </p>
            </section>
            <section id="products">
                <header>Products</header>
                <br />
                <img id="gearsImage" src="Images/Gears.png" title="Article Image" />
                <p>
                    <ul>
                        <li>Widget 1</li>
                        <li>Widget 2</li>
                        <li>Widget 3</li>
                        <li>Widget 4</li>
                        <li>Widget 5</li>
                    </ul>
                </p>
            </section>
            <section id="FAQ">
                <header>FAQ</header>
                <br />
                <img id="faqImage" src="Images/faq.png" title="Article Image" />
                <p>
                    <ul>
                        <li>FAQ 1</li>
                        <li>FAQ 2</li>
                        <li>FAQ 3</li>
                        <li>FAQ 4</li>
                        <li>FAQ 5</li>
                    </ul>
                </p>
            </section>
        </article>

    To force the sections into a vertical layout for smaller mobile devices, the CSS styles shown next can be added into the media queries targeting 320-pixel and 480-pixel widths. Styles to target the display size of the images in each section are also included.
    It's important to note that the original image is still being downloaded from the server and isn't being optimized in any way for the mobile device. It's certainly possible for the CSS to include URL information for a mobile-optimized image if desired.

        @media screen and (max-width:320px) {
            section {
                float: none;
                width: 97%;
                margin: 0px;
                padding: 5px;
            }
            #wrapper {
                padding: 5px;
                width: 96%;
            }
            #mainImage, #gearsImage, #faqImage {
                width: 100%;
                height: 100px;
            }
        }

        @media screen and (max-width: 480px) {
            section {
                float: none;
                width: 98%;
                margin: 0px 0px 10px 0px;
                padding: 5px;
            }
            article > section:last-child {
                margin-right: 0px;
                float: none;
            }
            #bottomSection {
                width: 99%;
            }
            #wrapper {
                padding: 5px;
                width: 96%;
            }
            #mainImage, #gearsImage, #faqImage {
                width: 100%;
                height: 100px;
            }
        }

    With the CSS media queries in place, each of the sections now displays vertically on an iPhone, making it much easier for the user to access them. Images inside of each section also scale appropriately to fit properly. CSS media queries provide a great way to override default styles in a website and target devices with different resolutions. In this post you've seen how CSS media queries can be used to convert a standard browser-based site into a site that is more accessible to mobile users. Although much more can be done to optimize sites for mobile, CSS media queries provide a nice starting point if you don't have the time or resources to create mobile-specific versions of sites.


  • Tips for XNA WP7 Developers

    - by Michael B. McLaughlin
    There are several things any XNA developer should know/consider when coming to the Windows Phone 7 platform. This post assumes you are familiar with the XNA Framework and with the changes between XNA 3.1 and XNA 4.0. It's not exhaustive; it's simply a list of things I've gathered over time. I may come back and add to it over time, and I'm happy to add anything anyone else has experienced or learned as well.

    Display

    · The screen is either 800x480 or 480x800.
    · But you aren't required to use only those resolutions.
    · The hardware scaler on the phone will scale up from 240x240.
    · One dimension will be capped at 800 and the other at 480; which depends on your code, but you cannot have, e.g., an 800x600 back buffer – that will be created as 800x480.
    · The hardware scaler will not normally change aspect ratio, though, so no unintended stretching.
    · Any dimension (width, height, or both) below 240 will be adjusted to 240 (without any aspect ratio adjustment, such that, e.g., 200x240 will be treated as 240x240).
    · Dimensions below 240 will be honored in terms of calculating whether to use portrait or landscape.
    · If the dimensions are exactly equal, or if height is greater than width, the game will be in portrait.
    · If width is greater than height, the game will be in landscape.
    · Landscape games will automatically flip if the user turns the phone 180°; no code required.
    · Default landscape is top = left. In other words, a user holding a phone who starts a landscape game will see the first image presented so that the "top" of the screen is along the right edge of his/her phone, such that the natural behavior would be to turn the phone 90° so that the top of the phone is held in the user's left hand and the bottom in the user's right hand.
    · The status bar (where the clock, battery power, etc., are found) is hidden when the Game-derived class sets GraphicsDeviceManager.IsFullScreen = true. It is shown when IsFullScreen = false. The default value is false (i.e. the status bar is shown).
    · You should have a good reason for hiding the status bar. Users find it helpful to know what time it is, how much charge their battery has left, and whether or not their phone is in service range. This is especially true for casual games that you expect someone to play for a few minutes at a time, e.g. while waiting for some event to start, for a phone call to come in, or for a train, bus, or subway to arrive.
    · In portrait mode, the status bar occupies 32 pixels of space. This means that a game with a back buffer of 480x800 will be scaled down to occupy approximately 461x768 screen pixels. Setting the back buffer to 480x768 (or some resolution with the same 0.625 aspect ratio) will avoid this scaling.
    · In landscape mode, the status bar occupies 72 pixels of space. This means that a game with a back buffer of 800x480 will be scaled down to occupy approximately 728x437 screen pixels. Setting the back buffer to 728x480 (or some resolution with the same 1.51666667 aspect ratio) will avoid this scaling.

    Input

    · Touch input is scaled with screen size.
    · So if your back buffer is 600x360, a tap in the bottom right corner will come in as (599,359). You don't need to do anything special to get this automatic scaling of touch behavior.
    · If you do not use the full area of the screen, any touch input outside the area you use will still register as a touch input. For example, if you set a portrait resolution of 240x240, it would be scaled up to occupy a 480x480 area, centered in the screen. If you touch anywhere above this area, you will get a touch input of (X,0) where X is a number from 0 to 239 (in accordance with your 240-pixel-wide back buffer). Any touch below this area will give a touch input of (X,239).
    · If you keep the status bar visible, touches within its area will not be passed to your game.
    · In general, a screen measurement is the diagonal. So a 3.5" screen is 3.5" long from the bottom right corner to the top left corner. With an aspect ratio of 0.6 (480/800 = 0.6), this means that a phone with a 3.5" screen is only approximately 1.8" wide by 3" tall. So there are approximately 267 pixels in an inch on a 3.5" screen.
    · Again, this time in metric! 3.5 inches is approximately 8.89 cm. So an 8.89 cm screen is 8.89 cm long from the bottom right corner to the top left corner. With an aspect ratio of 0.6, this means that a phone with an 8.89 cm screen is only approximately 4.57 cm wide by 7.62 cm tall. So there are approximately 105 pixels in a centimeter on an 8.89 cm screen.
    · Think about the size of your fingertip. If you do not have large hands, think about the size of the fingertip of someone with large hands. Consider that when you are sizing your touch input. Especially consider that when you are spacing two touch targets near one another. You need to judge it for yourself, but items that are next to each other and are each 100x100 should be fine when it comes to selecting items individually. Smaller targets than that are OK provided that you leave space between them.
    · You want your users to have a pleasant experience. Making touch controls too small or too close to one another will make them nervous about whether they will touch the right target. Take this into account when you plan out your game initially. If possible, do some quick size mockups on an actual phone using colored rectangles that you position and size where you plan to have your game controls. Adjust as necessary.
    · People do not have transparent hands! Nor are their hands the size of a mouse pointer icon. Consider leaving a dedicated space for input rather than forcing the user to cover up to one-third of the screen with a finger just to play the game.
    · Another benefit of designing your controls to use a dedicated area is that you're less likely to have players moving their finger(s) so frantically that they accidentally hit the back button, start button, or search button (many phones have one or more of these on the screen itself – it's easy to hit one by accident and really annoying if you hit, e.g., the search button and then quickly tap back only to find out that the game didn't save your progress, such that you just wasted all the time you spent playing).
    · People do not like doing somersaults in order to move something forward with accelerometer-based controls. Test your accelerometer-based controls extensively and get a lot of feedback. Very well-known games from noted publishers have created really bad accelerometer controls and been virtually unplayable as a result. Also be wary of exceptions and other possible failures that the documentation warns about.
    · When done properly, the accelerometer can add a nice touch to your game (see, e.g., ilomilo, where the accelerometer was used to move the background; it added a nice touch without frustrating the user; I also think CarniVale does direct accelerometer controls very well). However, if done poorly, it will make your game an abomination unto the Marketplace. Days, weeks, perhaps even months of development time that you will never get back. I won't name names; you can search the marketplace for games with terrible reviews and you'll find them.

    Graphics

    · The maximum frame rate is 30 frames per second. This was set as a compromise between battery life and quality.
    · At least one model of phone is known to have a screen refresh rate that is between 59 and 60 hertz. Because of this, using a fixed time step with a target frame rate of 30 will cause a slight internal delay to build up as the framework is forced to wait slightly for the next refresh. Eventually the delay will get to the point where a draw is skipped in order to recover from the delay.
    · To deal with that delay, you can either stay with a fixed time step and set the frame rate slightly lower, or else you can go to a variable time step and make sure to adjust all of your update data (e.g. player movement distance) to take into account the elapsed time from the last update. A variable time step makes your update logic slightly more complicated but will avoid frame skips entirely.
    · Currently there are no custom shaders. This might change in the future (there is no hardware limitation preventing it; it simply wasn't a feature that could be implemented in the time available before launch).
    · There are five built-in shaders. You can create a lot of nice effects with the built-in shaders.
    · There is more power on the CPU than there is on the GPU, so things you might typically off-load to the GPU will instead make sense to do on the CPU side.
    · This is a phone. It is not a PC. It is not an Xbox 360. The emulator runs on a PC and uses the full power of your PC. It is very good for testing your code for bugs and doing early prototyping and layout. You should not use it to measure performance. Use actual phone hardware instead.
    · There are many phone models, each of which has slightly different performance levels for I/O, screen blitting, CPU performance, etc. Do not take your game right to the performance limit on your phone, since for some other phones you might be crossing their limits and leaving players with a bad experience. Leave a cushion to account for hardware differences.
    · Smaller-screened phones will have slightly more dots per inch (dpi). Larger-screened phones will have slightly less. Either way, the dpi will be much higher than the typical 96 found on most computer screens. Make sure that whoever is doing art for your game takes this into account.
    · Screens are only required to have 16-bit color (65,536 colors). This is common among smartphones. Using gradients on a 16-bit display can produce an ugly artifact known as banding. Banding is when, rather than a smooth transition from one color to another, you instead see distinct lines. Be careful to avoid this when possible. Banding can be avoided through careful art creation. Its effects can be minimized and even unnoticeable when the texture in question is always moving. You should be careful not to rely on "looks good on my phone", since some phones do have 32-bit displays and thus you'll find yourself wondering why you're getting bad reviews that complain about the graphics. Avoid gradients; if you can't, make sure they are 16-bit safe.

    Audio

    · Never rely on sounds as your sole signal to the player that something is happening in the game. They might have the sound off. They might be playing somewhere loud. Etc.
    · You have to provide controls to disable sound & music. These should be separate.
    · On at least one model of phone, the volume control API currently has no effect. Players can adjust sound with their hardware volume buttons, but in-game selectors simply won't work. As such, it may not be worth the effort of providing anything beyond on/off switches for sound and music.
    · MediaPlayer.GameHasControl will return true when a game is hooked up to a PC running Zune. When Zune is running, any attempt to do anything (beyond checking GameHasControl) with MediaPlayer will cause an exception to be thrown. If this exception is thrown, catch it and disable music. Exceptions take time to propagate; you don't want one popping up in every single run of your game's Update method.
    · Remember that players can already be listening to music or using the FM radio. In this case GameHasControl will be false and you should handle this appropriately. You can, alternately, ask the player for permission to stop their current music and play your music instead, but the (current) requirement that you restore their music when done is very hard (if not impossible) to deal with.
    · You can still play sound effects even when the game doesn't have control of the music, but don't think this is a backdoor to playing music. Your game will fail certification if your "sound effect" seems to be more like music in scope and length.


  • Low framerate on background apps

    - by user1698923
    My problem is that when a game is running in the foreground, in full-screen mode, any applications on my second monitor (such as YouTube videos; it's not app-specific) drop their frame rate to about 2-3 FPS. It seems like some sort of power-management option that I can't track down. As far as I can tell, it's not due to the GPU not being able to keep up. For instance, my PC can play League of Legends at about 280 FPS when the frame rate is uncapped. If I cap it at 60 FPS using the in-game option, it has no effect on the performance of the background app.

    Summary
        Operating System: Windows 8 Pro 64-bit
        CPU: Intel Core i7 3820 @ 3.60GHz, 42 °C, Sandy Bridge-E 32nm Technology
        RAM: 12.0GB Triple-Channel DDR3 @ 533MHz (7-7-7-20)
        Motherboard: Gigabyte Technology Co., Ltd. X79-UD3 (SOCKET 0), 37 °C
        Graphics: DELL U2713HM (2560x1440@59Hz), DELL U2713HM (2560x1440@59Hz), 1280MB NVIDIA GeForce GTX 570 (Gigabyte), 58 °C
        Hard Drives: 212GB Volume0 (RAID); 1863GB Western Digital WDC WD20EARS-00MVWB0 (SATA), 36 °C; 1863GB Western Digital WDC WD20EARS-00MVWB0 (SATA), 34 °C
        Optical Drives: No optical disk drives detected
        Audio: ASUS Xonar Essence STX Audio Device
        Computer type: Desktop

    Monitor 1
        Name: DELL U2713HM on NVIDIA GeForce GTX 570
        Current Resolution: 2560x1440 pixels; Work Resolution: 2560x1400 pixels
        State: Enabled, Output devices support
        Multiple displays: Extended, Secondary, Enabled
        Monitor Width: 2560; Monitor Height: 1440
        Monitor BPP: 32 bits per pixel; Monitor Frequency: 59 Hz
        Device: \\.\DISPLAY4\Monitor0

    Monitor 2
        Name: DELL U2713HM on NVIDIA GeForce GTX 570
        Current Resolution: 2560x1440 pixels; Work Resolution: 2560x1400 pixels
        State: Enabled, Output devices support
        Multiple displays: Extended, Primary, Enabled
        Monitor Width: 2560; Monitor Height: 1440
        Monitor BPP: 32 bits per pixel; Monitor Frequency: 59 Hz
        Device: \\.\DISPLAY5\Monitor0

    NVIDIA GeForce GTX 570
        Manufacturer: NVIDIA; Model: GeForce GTX 570; GPU: GF110; Device ID: 10DE-1086; Revision: A2; Subvendor: Gigabyte (1458); Series: GeForce GTX 500
        Current Performance Level: Level 3
        Current GPU Clock: 845 MHz; Current Memory Clock: 1900 MHz; Current Shader Clock: 1690 MHz
        Voltage: 0.988 V; Technology: 40 nm; Die Size: 520 mm²
        Release Date: Dec 07, 2010
        DirectX Support: 11.0; OpenGL Support: 5.0
        Bus Interface: PCI Express x16; Temperature: 57 °C
        Driver version: 9.18.13.2018; BIOS Version: 70.10.55.00.01
        ROPs: 40; Shaders: 512 unified
        Memory Type: GDDR5; Memory: 1280 MB; Bus Width: 64x5 (320 bit)
        Filtering Modes: 16x Anisotropic; Noise Level: Moderate; Max Power Draw: 219 Watts
        Count of performance levels: 3
        Level 1 - "Default": GPU Clock 50 MHz, Memory Clock 135 MHz, Shader Clock 101 MHz
        Level 2 - "2D Desktop": GPU Clock 405 MHz, Memory Clock 324 MHz, Shader Clock 810 MHz
        Level 3 - "3D Applications": GPU Clock 845 MHz, Memory Clock 1900 MHz, Shader Clock 1690 MHz

    Things I've tried:
        1) Updating the graphics driver
        2) Setting Windows power mode to High Performance
        3) Resetting Nvidia Global Performance settings to default


  • Convert TIFF with Alpha Channel to PNG?

    - by Michael Stum
    I have a TIFF file that has a blue background and an alpha channel. I would like to save a PNG file in which the blue background is transparent, using the alpha channel (because some pixels are not fully transparent). I use Photoshop CS5 Standard, but I haven't found an option for this there. I don't want to use the Magic Wand because it struggles with half-transparent pixels, and because I do have a perfect alpha channel. Any ideas how this can be done?


  • Convert PowerPoint to Flash or Silverlight?

    - by Michael Stum
    I have a simple PowerPoint presentation that I would like to convert to Flash or Silverlight. The presentation is a simple "slide after slide after slide" deck, and my first guess was to use OpenOffice Impress. Sadly, the picture quality is awful. I need the presentation to be in a specific format (900 pixels wide and as high as it needs to be, usually 675 pixels). Can you recommend any good, simple PowerPoint-to-Flash or PowerPoint-to-Silverlight converter that does that?


  • How to specify behavior of Java BufferedImage resize: need min for pixel rows instead of averaging

    - by tucuxi
    I would like to resize a Java BufferedImage, making it smaller vertically but without using any type of averaging, so that if a pixel-row is "blank" (white) in the source image, there will be a white pixel-row in the corresponding position of the destination image: the "min" operation. The default algorithms (specified in getScaledInstance) do not allow me fine-grained enough control. I would like to implement the following logic:

        for each pixel row in the w-pixels-wide destination image, d = pixel[w]:
            find the corresponding j pixel rows of the source image, s[][] = pixel[j][w]
            write the new line of pixels, so that d[i] = min(s[j][i]) over all j, for each i

    I have been reading up on RescaleOp, but have not figured out how to implement this functionality; it is admittedly a weird type of scaling. Can anyone provide me with pointers on how to do this? In the worst case, I figure I can just reserve the destination BufferedImage and copy the pixels following the pseudocode, but I was wondering if there is a better way.
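
    The pseudocode above reduces to a very small loop. Here is a sketch of the core min-reduction (my own illustration, written in C# like the other examples on this page; it ports line-for-line to Java, e.g. over the array returned by BufferedImage.getRGB). It assumes grayscale values in a row-major array:

        using System;

        static int[] MinScaleRows(int[] src, int width, int srcRows, int dstRows)
        {
            var dst = new int[width * dstRows];
            for (int d = 0; d < dstRows; d++)
            {
                int from = d * srcRows / dstRows;       // first source row in this band
                int to = (d + 1) * srcRows / dstRows;   // one past the last
                for (int x = 0; x < width; x++)
                {
                    int min = int.MaxValue;
                    for (int s = from; s < to; s++)
                        min = Math.Min(min, src[s * width + x]);
                    dst[d * width + x] = min;           // d[i] = min over the band
                }
            }
            return dst;
        }

    With white stored as the highest value, a destination pixel comes out white exactly when every source row in its band is white at that column.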


  • Fastest algorithm to scale down a 32-bit RGB image

    - by Sunny
    Which algorithm should I use to scale down a 32-bit RGB image to a custom resolution? The algorithm should average pixels. For example, if I have a 100x100 image and I want a new image of size 20x50, the average of the first five pixels of the first source row will give the first pixel of the destination, and the average of the first two pixels of the first source column will give the first destination column pixel. Currently what I do is first scale down in the X resolution, and after that I scale down in the Y resolution. I need one temp buffer in this method. Is there an optimized method that you know of?
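
    A sketch of one way to drop the temp buffer (my own illustration, not a known-optimal method): make a single pass over the source, accumulating each pixel into the destination bin it maps to, then divide by the bin size once at the end. It assumes the scale factors divide evenly, as in the 100x100 to 20x50 example, and 8-bit channels packed as ARGB:

        static uint[] DownscaleAverage(uint[] src, int sw, int sh, int dw, int dh)
        {
            int fx = sw / dw, fy = sh / dh;  // source pixels per destination bin
            var sumR = new long[dw * dh];
            var sumG = new long[dw * dh];
            var sumB = new long[dw * dh];

            for (int y = 0; y < sh; y++)
            {
                int row = (y / fy) * dw;
                for (int x = 0; x < sw; x++)
                {
                    uint p = src[y * sw + x];
                    int i = row + x / fx;
                    sumR[i] += (p >> 16) & 0xFF;
                    sumG[i] += (p >> 8) & 0xFF;
                    sumB[i] += p & 0xFF;
                }
            }

            var dst = new uint[dw * dh];
            long n = (long)fx * fy;          // pixels averaged per bin
            for (int i = 0; i < dst.Length; i++)
                dst[i] = 0xFF000000u
                       | ((uint)(sumR[i] / n) << 16)
                       | ((uint)(sumG[i] / n) << 8)
                       | (uint)(sumB[i] / n);
            return dst;
        }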


  • On-the-fly lossless image compression

    - by geschema
    I have an embedded application where an image scanner sends out a stream of 16-bit pixels that are later assembled into a grayscale image. As I need to both save this data locally and forward it to a network interface, I'd like to compress the data stream to reduce the required storage space and network bandwidth. Is there a simple algorithm that I can use to losslessly compress the pixel data? I first thought of computing the difference between two consecutive pixels and then encoding this difference with a Huffman code. Unfortunately, the pixels are unsigned 16-bit quantities, so the difference can be anywhere in the range -65535 .. +65535, which leads to potentially huge codeword lengths. If a few really long codewords occur in a row, I'll run into buffer overflow problems.
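
    A standard trick that caps the codeword problem (my sketch, not from the post): take the differences modulo 2^16, which is still perfectly invertible and keeps the residual alphabet at exactly 65,536 symbols, then zigzag-map the residuals so the small magnitudes typical of smooth scan lines get small indices for a Huffman or Rice coder:

        static ushort[] DeltaEncode(ushort[] pixels)
        {
            var residuals = new ushort[pixels.Length];
            ushort prev = 0;
            for (int i = 0; i < pixels.Length; i++)
            {
                residuals[i] = (ushort)(pixels[i] - prev); // wraps modulo 65536
                prev = pixels[i];
            }
            return residuals;
        }

        static ushort[] DeltaDecode(ushort[] residuals)
        {
            var pixels = new ushort[residuals.Length];
            ushort prev = 0;
            for (int i = 0; i < residuals.Length; i++)
            {
                prev = (ushort)(prev + residuals[i]);      // inverse also wraps
                pixels[i] = prev;
            }
            return pixels;
        }

        static ushort ZigZag(ushort residual)
        {
            short s = (short)residual;                     // reinterpret as signed
            return (ushort)((s << 1) ^ (s >> 15));         // 0,-1,1,-2,... -> 0,1,2,3,...
        }

    Bounding the symbol alphabet also bounds the worst-case output per pixel, which addresses the buffer overflow concern directly.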


  • Image 8-connectivity without excessive branching?

    - by shoosh
    I'm writing a low-level image processing algorithm which needs to do a lot of 8-connectivity checks for pixels. For every pixel I often need to check the pixels above it, below it, on its sides, and on its diagonals. On the edges of the image there are special cases where there are only 5 or 3 neighbors instead of 8 neighbors for a pixel. The naive way to do it is, for every access, to check whether the coordinates are in the right range and, if not, return some default value. I'm looking for a way to avoid all these checks, since they introduce a large overhead to the algorithm. Are there any tricks to avoid them altogether?
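
    The classic answer is a sentinel border (a C# sketch of my own below): allocate the buffer one pixel larger on every side, pre-fill the border with the out-of-bounds default, and run the neighbor loop over the interior only; then all eight neighbor reads are always in bounds and the hot loop contains no range checks:

        using System;

        static byte[] PadWithBorder(byte[] img, int w, int h, byte border)
        {
            int pw = w + 2, ph = h + 2;
            var padded = new byte[pw * ph];
            for (int i = 0; i < padded.Length; i++) padded[i] = border; // sentinel fill
            for (int y = 0; y < h; y++)
                Array.Copy(img, y * w, padded, (y + 1) * pw + 1, w);   // copy interior rows
            return padded;
        }

        // For an interior index i in the padded buffer (stride pw), the eight
        // neighbors are i-pw-1, i-pw, i-pw+1, i-1, i+1, i+pw-1, i+pw, i+pw+1,
        // and every one of them is a valid read.

    The cost is one extra row/column per side plus a copy, which is usually far cheaper than per-pixel branching.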


  • How do laziness and I/O work together in Haskell?

    - by Bill
    I'm trying to get a deeper understanding of laziness in Haskell. I was imagining the following snippet today:

        data Image = Image { name :: String, pixels :: String }

        image :: String -> IO Image
        image path = Image path <$> readFile path

    The appeal here is that I could simply create an Image instance and pass it around; if I need the image data, it would be read lazily; if not, the time and memory cost of reading the file would be avoided:

        main = do
          image <- image "file"
          print $ length $ pixels image

    But is that how it actually works? How is laziness compatible with IO? Will readFile be called regardless of whether I access pixels image, or will the runtime leave that thunk unevaluated if I never refer to it? If the image is indeed read lazily, then isn't it possible that I/O actions could occur out of order? For example, what if I delete the file immediately after calling image? Now the print call will find nothing when it tries to read.


  • Find longest initial substring which renders in a limited width

    - by Lu Lu
    Hello everyone, I have a long string, e.g. "Please help me to solve this problem.", which is too long to fit in a width of 100 pixels. I need to get a substring of this string that will fit in 100 pixels. E.g. the substring "Please help me to sol" fits in 100 pixels. Please help me figure out how to estimate such a substring. Thanks. My application is WinForms and C#.
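
    A minimal WinForms sketch (my own illustration): binary-search the longest prefix whose measured width fits, using TextRenderer.MeasureText. This works because the rendered width grows monotonically with prefix length for ordinary left-to-right text; note that MeasureText includes a little padding, so the result errs on the safe side:

        using System.Drawing;
        using System.Windows.Forms;

        static string LongestPrefixThatFits(string text, Font font, int maxWidth)
        {
            int lo = 0, hi = text.Length;
            while (lo < hi)
            {
                int mid = (lo + hi + 1) / 2; // try a longer prefix first
                int width = TextRenderer.MeasureText(text.Substring(0, mid), font).Width;
                if (width <= maxWidth) lo = mid; else hi = mid - 1;
            }
            return text.Substring(0, lo);
        }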



  • Storyboard editor layout confusion

    - by drew
    I am having layout problems with the storyboard editor with a fairly simple screen. I have a UIViewController to which I have added a 320x440 UIScrollView at 0,0 followed by a 320x20 UIProgressBar at 0,440. It looks fine in the Storyboard editor. I'm not entirely sure how the 20-pixel status bar at the top of the screen is accommodated, given the CGRect frame coordinates that Storyboard calculates. On loading (in -(void)viewDidLoad), the UIScrollView frame seems to be set to 320x460 pixels at 0,0, but the UIProgressBar is still 320x20 at 0,440. When I add subviews to the UIScrollView (UIImageViews in particular), they get stretched and clipped on the screen because although the UIScrollView thinks it is 460 pixels high, it only has 440 pixels of screen to display in. Can anyone point me to a solution? Thanks


  • Size and position of the C# form.

    - by Vytas999
    I use Visual Studio .NET to create a Windows-based application. The application includes a form named CForm. CForm contains 15 controls that enable users to set basic configuration options for the application. I designed these controls to dynamically adjust when users resize CForm: the controls automatically update their size and position on the form as the form is resized. The initial size of the form should be 650 x 700 pixels. If CForm is resized to be smaller than 500 x 600 pixels, the controls will not be displayed correctly. How can I ensure that users cannot resize CForm to be smaller than 500 x 600 pixels?
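
    A minimal sketch of the usual WinForms approach: set the form's MinimumSize, which the window manager enforces while the user drags, next to the initial Size (shown in the constructor for illustration):

        public CForm()
        {
            InitializeComponent();
            Size = new System.Drawing.Size(650, 700);        // initial size
            MinimumSize = new System.Drawing.Size(500, 600); // resize floor
        }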


  • Extracting a quadrilateral image to a rectangle

    - by Will
    In the image below, the sign on the side of the van is not face-on to the camera. I want to calculate, as best I can with the pixels I have, what it'd look like face-on. I imagine this as some kind of loop through the x and y axes, doing a Bresenham's line on both dimensions at once, with some kind of mixing when pixels in the source image overlap; some sub-pixel mixing of some sort? What approaches are there, and how do you mix the pixels? Is there a standard approach for this?
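
    A sketch of one approach (mine, and deliberately the simple one): inverse-map each destination pixel into the quad by bilinearly interpolating the four corners, then mix the four surrounding source pixels, also bilinearly. This is a bilinear warp rather than a true perspective rectification; for the latter you would fit a homography from the corner correspondences, but the sampling-and-mixing loop stays the same:

        using System;
        using System.Drawing;

        static Bitmap ExtractQuad(Bitmap src, PointF tl, PointF tr, PointF br, PointF bl, int w, int h)
        {
            var dst = new Bitmap(w, h); // w, h >= 2 assumed
            for (int y = 0; y < h; y++)
                for (int x = 0; x < w; x++)
                {
                    float u = x / (float)(w - 1), v = y / (float)(h - 1);
                    // Interpolate along the top and bottom edges, then between them.
                    float sx = (1 - v) * ((1 - u) * tl.X + u * tr.X) + v * ((1 - u) * bl.X + u * br.X);
                    float sy = (1 - v) * ((1 - u) * tl.Y + u * tr.Y) + v * ((1 - u) * bl.Y + u * br.Y);
                    dst.SetPixel(x, y, SampleBilinear(src, sx, sy));
                }
            return dst;
        }

        // The sub-pixel "mixing": blend the four source pixels around (sx, sy)
        // weighted by the fractional distances.
        static Color SampleBilinear(Bitmap src, float sx, float sy)
        {
            int x0 = (int)sx, y0 = (int)sy;
            int x1 = Math.Min(x0 + 1, src.Width - 1), y1 = Math.Min(y0 + 1, src.Height - 1);
            float fx = sx - x0, fy = sy - y0;
            Color c00 = src.GetPixel(x0, y0), c10 = src.GetPixel(x1, y0);
            Color c01 = src.GetPixel(x0, y1), c11 = src.GetPixel(x1, y1);
            float r = Lerp(Lerp(c00.R, c10.R, fx), Lerp(c01.R, c11.R, fx), fy);
            float g = Lerp(Lerp(c00.G, c10.G, fx), Lerp(c01.G, c11.G, fx), fy);
            float b = Lerp(Lerp(c00.B, c10.B, fx), Lerp(c01.B, c11.B, fx), fy);
            return Color.FromArgb((int)r, (int)g, (int)b);
        }

        static float Lerp(float a, float b, float t) => a + (b - a) * t;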


  • How to detect an 'image area' percentage inside an image?

    - by DaNieL
    Hmm, kinda hard to explain with my poor English ;) So, let's say I have an image (doesn't matter what kind: gif, jpg, png) with a size of 200x200 pixels (total area 40,000 pixels). This image has a background that can be transparent or any color, but I know the background color in advance. Let's say that in the middle of this image there is a picture (to keep the example simple, let's suppose a square is drawn) of 100x100 pixels (total area 10,000 pixels). I need to know the percentage of the image's area that the small square fills. So, given that I know the full image size and the background color, is there a way in PHP/Python to scan the image and work that out (in short, counting the pixels that are different from the given background)? In the above example, the result should be 25%.

