Search Results

Search found 2086 results on 84 pages for 'pixel shader'.

Page 38 of 84

  • Cross browser (chrome/firefox) trying to get top-pos defined in percentage as pixels

    - by Cinaird
    I have a problem with cross-browser output: I'm trying to get the top and left CSS attributes of a div, but Firefox gives me the exact pixel position while Chrome gives me the percentage. Example: http://web.cinaird.se/pdf/test.htm

    CSS:

        #mix {
            position: absolute;
            top: 10px;
            left: 45%;
            background-color: #f0f;
        }

    jQuery:

        "css top: " + $("#mix").css("top") + "<br/>css left: " + $("#mix").css("left")

    Output in Firefox (and IE8):

        css top: 10px
        css left: 267.3px

    Output in Chrome:

        css top: 10px
        css left: 45%

    Is there any way to get the same result in both (all) browsers? I would prefer a pixel value without any major calculation.
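
    One avenue worth checking (general jQuery knowledge, not from the original answers): $("#mix").position() returns numeric pixel offsets ({ top, left }) relative to the offset parent in all browsers, which sidesteps the browser-specific string formatting that .css() reflects.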

    Read the article

  • Qt4.6: QTextDocument <HR> tag prints only very thin, almost invisible hair lines

    - by hurikhan77
    When printing a QTextDocument with doc->print(), I can barely see the horizontal rules inserted by <hr>. When printing to PDF they are clearly visible, but on a real printer the lines come out very thin, almost invisible on paper. How do I fix this? For now I have helped myself by inserting an <img> with a black pixel, but this is very cumbersome, as I have to figure out the proper pixel width by trial and error.

    Read the article

  • C# Hotkey, Help?

    - by Di4g0n4leye
    namespace WebBrowser
    {
        public partial class MainForm : Form
        {
            public MainForm()
            {
                InitializeComponent();
            }

            // Samples the screen pixel at (x, y) and returns its ARGB value
            int GetPixel(int x, int y)
            {
                Bitmap bmp = new Bitmap(1, 1, PixelFormat.Format32bppPArgb);
                Graphics grp = Graphics.FromImage(bmp);
                grp.CopyFromScreen(new Point(x, y), Point.Empty, new Size(1, 1));
                grp.Save();
                return bmp.GetPixel(0, 0).ToArgb();
            }

            void Button1Click(object sender, EventArgs e)
            {
                int x = Cursor.Position.X;
                int y = Cursor.Position.Y;
                int pixel = GetPixel(x, y);
                textBox1.Text = pixel.ToString();
            }

            void MainFormLoad(object sender, EventArgs e)
            {
                webBrowser1.Navigate("http://google.com");
            }
        }
    }

    I want to add a hotkey that triggers Button1Click when pressed. How can that be done?
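
    Two common routes (noted as general WinForms knowledge, not from the original answers): for a key that works while the form has focus, set the form's KeyPreview property to true and call Button1Click from a KeyDown handler; for a system-wide hotkey, P/Invoke the Win32 RegisterHotKey function and handle the WM_HOTKEY message in WndProc.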

    Read the article

  • Maximum File Size and Pixels for Uploaded Business Documents

    - by webdevguy
    I am creating a PHP form that accepts uploads of business documents in a variety of formats (.pdf, .doc, .tiff, .jpeg, etc.), and I need to restrict the size of the uploaded files. It's trivial for me to restrict the file size, but I'm not sure if I should also restrict the max height/width, which are also available options. I will occasionally need to print these documents to 8.5 x 11 inch paper and have them be legible, though I don't really care how images come out. Should I restrict the pixel dimensions, or is that redundant with restricting the file size? If so, do you have a recommendation for max height/width to go with, say, a 5 MB file size limit? I really have no idea what the relationship between pixel dimensions and file size is, or what the common pixel dimensions are for scanned images. Also, what would be a reasonable size expectation for a legible print per page?
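
    Some rough arithmetic helps frame the two limits (back-of-the-envelope, not from the original thread): a full 8.5 x 11 inch page scanned at 300 DPI, a common benchmark for legible text, is

        8.5 * 300 = 2550 pixels wide
        11 * 300 = 3300 pixels tall
        2550 * 3300 = 8,415,000 pixels (about 8.4 megapixels)

    Uncompressed at 24 bits per pixel that is roughly 25 MB, but JPEG compression typically brings such a scan well under 5 MB, so a 5 MB cap and a pixel cap around 2550 x 3300 are broadly consistent with each other.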

    Read the article

  • question about working with System.Drawing.Graphics

    - by backdoor
    Hi all. I have a System.Drawing.Point[] filled with some System.Drawing.Point values. When I draw these points as a polygon in a System.Windows.Forms.Form instance, the resulting polygon is not entirely on screen, or is sometimes very small (shown at 2-3 pixels). Is there a library that I can just pass the Point[] to, which scales and translates the points and draws the polygon so that all points are visible and fit the screen (that is, small objects that would be drawn at 2-3 pixels are scaled up to fill the screen)? Thanks all, and sorry for my bad English.
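
    One standard System.Drawing approach (a sketch of the idea, not from the original thread): compute the bounding rectangle of the Point[] yourself, then call Graphics.TranslateTransform and Graphics.ScaleTransform so that the rectangle maps onto the control's client area, and draw with Graphics.DrawPolygon unchanged; the transform does the scaling and centering for you.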

    Read the article

  • Blending pixels from Two Bitmaps

    - by MarkPowell
    I'm beating my head against a wall here, and I'm fairly certain I'm doing something stupid, so time to make my stupidity public. I'm trying to take two images and blend them together into a third image using standard blending algorithms (hard light, soft light, overlay, multiply, etc.). Because Android does not have such blend modes built in, I've gone down the path of taking each pair of pixels and combining them with an algorithm. However, the results are garbage. Any help would be appreciated. Below is the code; I've tried to strip out all the "junk", but some may have made it through. I'll clean it up if something isn't clear.

        Bitmap src = BitmapFactory.decodeResource(getResources(), R.drawable.base, options);
        Bitmap mutableBitmap = src.copy(Bitmap.Config.RGB_565, true);

        int imageId = getResources().getIdentifier("drawable/" + filter, null, getPackageName());
        Bitmap filterBitmap = BitmapFactory.decodeResource(getResources(), imageId, options);

        float scaleWidth = ((float) mutableBitmap.getWidth()) / filterBitmap.getWidth();
        float scaleHeight = ((float) mutableBitmap.getHeight()) / filterBitmap.getHeight();
        // resizedFilterBitmap: the filter scaled to the base image's size
        // (this definition is implied by the scale factors above)
        Matrix matrix = new Matrix();
        matrix.postScale(scaleWidth, scaleHeight);
        Bitmap resizedFilterBitmap = Bitmap.createBitmap(filterBitmap, 0, 0,
                filterBitmap.getWidth(), filterBitmap.getHeight(), matrix, true);

        IntBuffer buffSrc = IntBuffer.allocate(src.getWidth() * src.getHeight());
        mutableBitmap.copyPixelsToBuffer(buffSrc);
        buffSrc.rewind();

        IntBuffer buffFilter = IntBuffer.allocate(resizedFilterBitmap.getWidth() * resizedFilterBitmap.getHeight());
        resizedFilterBitmap.copyPixelsToBuffer(buffFilter);
        buffFilter.rewind();

        IntBuffer buffOut = IntBuffer.allocate(src.getWidth() * src.getHeight());
        buffOut.rewind();

        while (buffOut.position() < buffOut.limit()) {
            int filterInt = buffFilter.get();
            int srcInt = buffSrc.get();

            // split both pixels into their ARGB channels
            int alphaValueFilter = Color.alpha(filterInt);
            int redValueFilter = Color.red(filterInt);
            int greenValueFilter = Color.green(filterInt);
            int blueValueFilter = Color.blue(filterInt);

            int alphaValueSrc = Color.alpha(srcInt);
            int redValueSrc = Color.red(srcInt);
            int greenValueSrc = Color.green(srcInt);
            int blueValueSrc = Color.blue(srcInt);

            // blend channel by channel
            int alphaValueFinal = convert(alphaValueFilter, alphaValueSrc);
            int redValueFinal = convert(redValueFilter, redValueSrc);
            int greenValueFinal = convert(greenValueFilter, greenValueSrc);
            int blueValueFinal = convert(blueValueFilter, blueValueSrc);

            int pixel = Color.argb(alphaValueFinal, redValueFinal, greenValueFinal, blueValueFinal);
            buffOut.put(pixel);
        }

        buffOut.rewind();
        mutableBitmap.copyPixelsFromBuffer(buffOut);
        BitmapDrawable drawable = new BitmapDrawable(getResources(), mutableBitmap);
        imageView.setImageDrawable(drawable);
    }

    int convert(int in1, int in2) {
        // simple multiply, for example
        return in1 * in2 / 255;
    }
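
    Two asides that may help (general Android knowledge, hedged): Android does ship a few blend modes via PorterDuffXfermode on a Paint (PorterDuff.Mode.MULTIPLY, SCREEN, and so on), just not the full Photoshop-style set, which is what motivates the per-pixel loop. Also note that copyPixelsToBuffer copies pixels in the bitmap's own configuration, so with Bitmap.Config.RGB_565 each int in the buffer holds two packed 16-bit pixels, and Color.alpha()/Color.red() will read garbage; those helpers expect ARGB_8888 ints.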

    Read the article

  • How do I construct a 3D model of a room from 2 stereo cameras? What is the determining factor to an

    - by yasumi
    Currently, I have extracted depth points to construct a 3D model from 2 stereo cameras. The methods I have used are the OpenCV graph-cut method and the software from http://sourceforge.net/projects/reconststereo/. However, the generated 3D models are not very accurate, which leads me to these questions:

    1) What is the problem with the pixel-based method?
    2) Should I change my pixel-based method to a feature-based or object-recognition-based method? Is there a best method?
    3) Are there any other ways to do such a reconstruction?

    Additionally, the depth extracted comes from only 2 images. What if I turn the camera 360 degrees to obtain a video? Looking forward to suggestions on how to combine this depth information. Thank you very much :)

    Read the article

  • how many color combinations in a 24 bit image

    - by numerical25
    I am reading a book and I am not sure if it's a mistake or I am misunderstanding the quote. It reads:

        Nowadays every PC you can buy has hardware that can render images with at least 16.7 million individual colors. Rather than have an array with thousands of color entries, the images instead contain explicit color values for each pixel. A 24-bit display, of course, uses 24 bits, or 3 bytes per pixel, for color information. This gives 1 byte, or 256 distinct values each, for red, green, and blue. This is generally called true color, because 256^3 (16.7 million)

    He says 1 byte is equal to 256 distinct values. 1 byte = 8 bits, and 8^2 = 64 distinct values, right? It's not adding up for me. I know it is probably something simple, but I don't understand.
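
    The quoted numbers can be checked directly; the slip in the question is computing 8^2 instead of 2^8 (n bits give 2^n values). A one-off check, written here in Java:

        public class ColorCount {
            public static void main(String[] args) {
                // one 8-bit byte gives 2^8 = 256 distinct values, not 8^2 = 64
                System.out.println(1 << 8);            // 256 values per channel
                // three independent channels multiply together
                System.out.println(256L * 256 * 256);  // 16777216, i.e. ~16.7 million colors
            }
        }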

    Read the article

  • Modifying an image with OpenGL ?

    - by chmike
    I have a device to acquire X-ray images. Due to some technical constraints, the detector is made of heterogeneous pixel sizes and multiple tilted, partially overlapping tiles, so the image is distorted. The detector geometry is known precisely. I need a function converting these distorted images into a flat image with homogeneous pixel size. I have already done this on the CPU, but I would like to give OpenGL a try, to use the GPU in a portable way. I have no experience with OpenGL programming, and most of the information I could find on the web was useless for this use case. How should I proceed? Image sizes are 560x860 pixels and we have batches of 720 images to process. I'm on Ubuntu.
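
    In broad strokes (a hedged pointer, since the original answers aren't shown): this is what texture mapping does naturally; upload each acquired image as a texture, build a small triangle mesh whose vertex positions come from the known detector geometry, and render it into an offscreen buffer of homogeneous pixel size, letting the GPU do the resampling for all 720 images.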

    Read the article

  • Length of data returned from CGImageGetDataProvider is larger than expected

    - by jcoplan
    I'm loading a grayscale PNG image, and I want to access the underlying pixel data. However, after I get the pixel data via CGImageGetDataProvider, the length of the data returned is longer than expected.

        CGDataProviderRef provider = CGDataProviderCreateWithFilename(cStr);
        CGImageRef image = CGImageCreateWithPNGDataProvider(provider, NULL, FALSE, kCGRenderingIntentDefault);
        mapWidth = CGImageGetWidth(image);
        mapHeight = CGImageGetHeight(image);
        lookupMap = CGDataProviderCopyData(CGImageGetDataProvider(image));

    mapWidth comes out to 1804 and mapHeight comes out to 1005, whose product is 1813020. But when I call CFDataGetLength(lookupMap), the response is 1833120. Where are these extra 20100 bytes coming from? Any help here is much appreciated. Am I missing something about the underlying format of the image?
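
    The numbers themselves suggest where the extra bytes live (simple arithmetic on the figures above; CGImageGetBytesPerRow would confirm it): image rows are commonly padded to an alignment boundary, and

        1833120 / 1005 rows = 1824 bytes per row
        1824 - 1804 = 20 padding bytes per row
        20 * 1005 rows = 20100 extra bytes

    so each row should be read at a stride of 1824 bytes instead of assuming width bytes per row.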

    Read the article

  • GDI+, using DrawImage to draw a transparency mask of the source image

    - by sold
    Is it possible to draw a transparency mask of an image (that is, paint all visible pixels with a constant color) using Graphics::DrawImage? I am not looking to manually scan the image pixel by pixel and create a separate mask image; I wonder if it's possible to draw one directly from the original image. My guess is that it should be done with certain manipulations to ImageAttributes, if it is possible at all. The color of the mask is arbitrary and should be exact, and it would be a plus if there could be a threshold value for the transparency.
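
    For reference, the ImageAttributes route looks feasible on paper (a sketch from general GDI+ knowledge, not a tested recipe): GDI+ color adjustment multiplies each pixel, as the row vector [R G B A 1], by a 5x5 ColorMatrix, so a matrix that zeroes the RGB rows, passes alpha through, and adds the mask color as a constant would paint every visible pixel with that color:

        | 0 0 0 0 0 |
        | 0 0 0 0 0 |
        | 0 0 0 0 0 |
        | 0 0 0 1 0 |    (alpha passes through unchanged)
        | r g b 0 1 |    (constant mask color, components in 0..1)

    applied with ImageAttributes::SetColorMatrix before the DrawImage call. ImageAttributes::SetThreshold might cover the transparency-threshold wish, though that would need verifying.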

    Read the article

  • Working with image pixels

    - by Mario
    Hey guys, I'm working on a project in which a rotation matrix and a translation matrix have been estimated. Now I have an image at a certain location, and I want to multiply all the image pixels by the rotation matrix and add the results to the translation matrix. My issue is how to work with the pixels: how do I extract a pixel from the image in order to do the operation mentioned above? Suggestions in either OpenCV or C++ are fine. The operation I need is new_p(x,y) = old(x,y) * rotation_matrix + translation_matrix. I'm defining the image as an IplImage, a 3-channel image. For now I only need the geometrical transformation. Thank you.
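
    Written out (standard geometry, added for clarity): the transform acts on pixel coordinates rather than on color values, so with R = [r11 r12; r21 r22] and t = (t1, t2), each source coordinate (x, y) maps to

        x' = r11*x + r12*y + t1
        y' = r21*x + r22*y + t2

    and the output image takes new(x', y') = old(x, y). In OpenCV this coordinate remapping, interpolation included, is what cvWarpAffine (cv::warpAffine in the C++ API) implements.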

    Read the article

  • Resize and center image in html/css?

    - by Derek
    Is there a way I can resize, crop, and center an image using HTML/CSS only (an img tag or a CSS sprite)? For example, if I have a 500x500 pixel image:

    1. I want to resize it to a 250x250 pixel image.
    2. I want the visible part of the image to be 100x100, while keeping the scale of the 250x250 sized image.
    3. I want the center of the image to be at a location x,y.

    Is that possible with only HTML/CSS? If not, how do you propose I go about it with JavaScript?

    Edit: For (2), say my scaled image is now 200x200 and I want my visible image to be 100x100. I mean that the scale and resolution of the image should be 200x200, but the visible part should be 100x100; in other words, the visible image would be the region with corners 0,0; 0,100; 100,0; 100,100 of the 200x200 image. Sorry, I'm not good at explaining this.
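
    One common pure-CSS clipping trick (a sketch from general CSS knowledge, not from the original answers): wrap the img in a container div with the viewport size (e.g. width and height 100px, overflow hidden, position relative), give the img the scaled size (e.g. 200x200), and shift it with negative top/left offsets so the desired point ends up centered; for sprites, background-position does the same clipping for background images.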

    Read the article

  • 2D Engine scrolling on OpenGL via hardware?

    - by drudru
    Hi, I'm using OpenGL as the bottom end for a 2D tiling engine. When everything is 2D, it is simple to optimize certain issues. For example, scrolling: if I know a certain section of the screen needs to scroll off the bottom, then I can just blit over that portion, and I'm even moving more than 1 pixel at a time. Without explicit hardware support (think old Nintendo hardware), this requires a lot of pixel writes; an on-chip bitblt would be the next best thing. Essentially, I'm looking at how I can optimize my GL calls to use VRAM texture renders as efficient hardware blits. Is it possible to have GL scroll the framebuffer, or should I just resign myself to double-buffering and re-rendering the entire scene for each frame? Thanks
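
    As an aside (general OpenGL knowledge, hedged): legacy GL does expose a framebuffer blit in glCopyPixels, and newer GL has glBlitFramebuffer, but on modern hardware simply re-rendering the visible tiles from textures each frame is usually as fast as, or faster than, scrolling the framebuffer in place.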

    Read the article

  • find lowest neighbor matlab

    - by user1812719
    I am trying to write a function [offset,coffset] = findLowNhbr(map) that, for each pixel in a map, finds the pixel's eight neighbors and returns two matrices with the row and column offsets to the lowest neighbor (using the numbers -1, 0, and 1). Border pixels are given 0 offsets for both the row and column, since they do not have a full set of neighbors. Here is what I think the general plan for this function should be:

    For each point, find the eight nearest neighbors.
    If the neighbor is lower than the point, return -1.
    If the neighbor is at the same elevation as the point, return 0.
    If the neighbor is higher than the point, return +1.
    Store these offsets in two matrices.

    I am at a complete loss as to where to start, so any advice or questions are welcome!
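
    As a structural starting point, here is the loop skeleton sketched in Java rather than MATLAB (names follow the question; the MATLAB version is the same two nested loops over the interior pixels):

        // For each interior cell, record the row/column offset (-1, 0, or +1)
        // of its lowest 8-neighbor; border cells keep the default offset of 0.
        static int[][][] findLowNhbr(double[][] map) {
            int rows = map.length, cols = map[0].length;
            int[][] offset = new int[rows][cols];   // row offsets
            int[][] coffset = new int[rows][cols];  // column offsets
            for (int i = 1; i < rows - 1; i++) {
                for (int j = 1; j < cols - 1; j++) {
                    double lowest = Double.POSITIVE_INFINITY;
                    for (int di = -1; di <= 1; di++) {
                        for (int dj = -1; dj <= 1; dj++) {
                            if (di == 0 && dj == 0) continue; // skip the pixel itself
                            if (map[i + di][j + dj] < lowest) {
                                lowest = map[i + di][j + dj];
                                offset[i][j] = di;
                                coffset[i][j] = dj;
                            }
                        }
                    }
                }
            }
            return new int[][][] { offset, coffset };
        }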

    Read the article

  • ALT-TAB Application Icon Pixelated

    - by Red Potato
    When a child window of my application is opened and I view the ALT-TAB menu, the application icon looks pixelated. I assume that Windows uses a low-resolution version of the icon (16x16 pixels, I think). What can I do so that Windows selects the right version, which would be 32x32 pixels? I assigned an icon to the window in question that has 16x16, 24x24, 32x32, 48x48, and 256x256 versions in true color. Please note that VS says in the properties that 32x32 is used, and that it works fine for the main window of my application, where I assigned the exact same icon.
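
    A plausible mechanism to check (an educated guess, not a verified diagnosis): the ALT-TAB overlay asks the window for its large icon (WM_GETICON with ICON_BIG, falling back to the class icon), so if the child window only ever had its small icon set, Windows stretches the 16x16 image; explicitly sending the 32x32 handle with SendMessage(hwnd, WM_SETICON, ICON_BIG, (LPARAM)hIcon) often fixes it.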

    Read the article

  • Concatenate integer arrays iteratively

    - by Ojtwist
    I have a method in2.getImagesOneDim() which gives me an array of integers, to be more precise the pixel values of an image. Now I want to create one big array with all the pixel values of all the images, so I have to call this method several times and concatenate the previous output with the current output until all images are read. In some kind of pseudocode, where + is concatenation:

        for (int i = 1; i < 25; i++) {
            ConArray = ConArray + in2.getImagesOneDim("../images/" + i);
        }

    How would I do this in Java?
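
    One straightforward Java translation (a sketch; it assumes getImagesOneDim returns an int[], as the question implies) gathers the chunks first, then flattens them with System.arraycopy:

        List<int[]> chunks = new ArrayList<>();  // java.util.List, java.util.ArrayList
        int total = 0;
        for (int i = 1; i < 25; i++) {
            int[] pixels = in2.getImagesOneDim("../images/" + i);
            chunks.add(pixels);
            total += pixels.length;
        }
        // copy every chunk into one flat array
        int[] conArray = new int[total];
        int pos = 0;
        for (int[] chunk : chunks) {
            System.arraycopy(chunk, 0, conArray, pos, chunk.length);
            pos += chunk.length;
        }

    Growing a plain array by repeated concatenation also works, but it copies the whole prefix on every iteration; collecting first keeps the work linear.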

    Read the article

  • UV texture mapping with perspective correct interpolation

    - by Twodordan
    I am working on a software rasterizer for educational purposes and I am having issues with the texturing. The problem is that only one face of the cube gets correctly textured; the rest have stretched edges. You can see the running program online here. I have used Cartesian coordinates, and all I do is interpolate the uv values along the scanlines. The general formula I use for interpolating the uv coordinates is pretty much the one I use for the z-buffering interpolation, and looks like this (in this case for horizontal scanlines):

        u_Slope = (right.u - left.u) / (triangleRight_x - triangleLeft_x);
        v_Slope = (right.v - left.v) / (triangleRight_x - triangleLeft_x);
        //[...]
        new_u = left.u + ((currentX_onScanLine - triangleLeft_x) * u_Slope);
        new_v = left.v + ((currentX_onScanLine - triangleLeft_x) * v_Slope);

    Then, when I add each point to the pixel buffer, I restore z and uv:

        z = (1/z);
        uv.u = Math.round(uv.u * z * 100); // *100 because my texture is 100x100px
        uv.v = Math.round(uv.v * z * 100);

    Then I turn the uv indexes into one index in order to fetch the correct pixel from the image data (which is a one-dimensional pixel array):

        var index = texture.width * uv.u + uv.v;
        // and the rest is unimportant
        imagedata[index].RGBA bla bla

    The interpolation formula is correct considering the consistency of the texture (including the straight stripes). However, I seem to get quite a lot of 0 values for either u or v, which is probably why I only get one face right. Furthermore, why is the texture flipped horizontally (the "1" is flipped)? I must get some sleep now, but before I get into further dissecting of every single value to see what goes wrong, can someone more experienced guess why this might be happening, just by looking at the cube? "I have no idea what I'm doing" (it's my first time implementing a rasterizer). Did I miss an important stage? Thanks for any insight.

    PS: My UV values are as follows:

        { u:0, v:0 }, { u:0, v:0.5 }, { u:0.5, v:0.5 }, { u:0.5, v:0 },
        { u:0, v:0 }, { u:0, v:0.5 }, { u:0.5, v:0.5 }, { u:0.5, v:0 }
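
    A note on the math (standard perspective-correct texturing, added for clarity): linear interpolation in screen space is only valid for attributes that have been divided by depth, so the scanline should interpolate u/z, v/z, and 1/z, and each pixel then recovers

        u = (u/z)_interpolated / (1/z)_interpolated
        v = (v/z)_interpolated / (1/z)_interpolated

    Interpolating raw u and v, as the slope formulas above do, yields an affine mapping that is exact only for faces at constant depth, which is consistent with exactly one face of the cube coming out right.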

    Read the article

  • How to Enable Google Chrome’s Secret Gold Icon

    - by The Geek
    You might not realize this, but there's actually another icon hidden inside the Google Chrome executable file: a high-quality version of the same logo, but golden. Here's how to use it. If you're wondering how we got the smooth icon you're seeing above, it's because the latest dev channel version switched the icon from the older style.

    Read the article

  • A Perfect Example of Why You Never, Ever Buy a Used Keyboard [Humorous Image]

    - by Asian Angel
    Just go buy a new keyboard…unless you are into masochistic self-torture or other similar pursuits… Note: If you have the stomach for it, you can view the full-size version of the image here. I'm never going to buy a used keyboard ever again. [via Reddit Tech Support Gore]

    Read the article

  • Browse Through Radio Shack’s 1983 Computer Catalog [Scanned Image Set]

    - by Asian Angel
    Are you ready for a blast from the past? Then indulge in a bit of retro fun with this scanned image collection of Radio Shack's 1983 computer catalog. Anyone up for a shiny 'new' TRS-80 computer for Christmas? Radio Shack Catalog RSC-09 Computer Catalog [via BoingBoing]

    Read the article

  • The Glitch [Video]

    - by Asian Angel
    Things are fine in Video Game Land until one day a soldier encounters an unusual phenomenon: his weapon is partially buried in the pavement and undergoing extreme shifting movements. Can Mario and friends save Video Game Land from the Malevolent Glitch, or is it game over for everyone?! The Glitch [via Geeks are Sexy]

    Read the article

  • Decorate Your Desktop with the Rock Stars of Science [Wallpaper]

    - by Jason Fitzpatrick
    This understated desktop wallpaper showcases notable names in science with accompanying icons to represent their contributions to the field. The icons are the work of Megan Lee of Megan Lee Studios (you can order prints, t-shirts, and other items with her designs here), and the wallpaper arrangement comes to us courtesy of Reddit user wastingtime247; check out the via link below for more arrangements. Science Rock Stars Wallpaper by Megan Lee Studios [via Reddit]

    Read the article

  • Separating physics and game logic from UI code

    - by futlib
    I'm working on a simple block-based puzzle game. The gameplay consists pretty much of moving blocks around in the game area, so it's a trivial physics simulation. My implementation, however, is in my opinion far from ideal, and I'm wondering if you can give me any pointers on how to do it better.

    I've split the code up into two areas, game logic and UI, as I did with a lot of puzzle games:

    - The game logic is responsible for the general rules of the game (e.g. the formal rule system in chess).
    - The UI displays the game area and pieces (e.g. chess board and pieces) and is responsible for animations (e.g. animated movement of chess pieces).

    The game logic represents the game state as a logical grid, where each unit is one cell's width/height on the grid. So for a grid of width 6, you can move a block of width 2 four times until it collides with the boundary. The UI takes this grid and draws it by converting logical sizes into pixel sizes (that is, multiplying by a constant). However, since the game has hardly any game logic, my game logic layer [1] doesn't have much to do except collision detection. Here's how it works:

    1. Player starts to drag a piece.
    2. UI asks game logic for the legal movement area of that piece and lets the player drag it within that area.
    3. Player lets go of a piece.
    4. UI snaps the piece to the grid (so that it is at a valid logical position).
    5. UI tells game logic the new logical position (via mutator methods, which I'd rather avoid).

    I'm not quite happy with that:

    - I'm writing unit tests for my game logic layer, but not the UI, and it turned out all the tricky code is in the UI: stopping the piece from colliding with others or the boundary, and snapping it to the grid.
    - I don't like the fact that the UI tells the game logic about the new state. I would rather have it call a movePieceLeft() method or something like that, as in my other games, but I didn't get far with that approach, because the game logic knows nothing about the dragging and snapping that's possible in the UI.

    I think the best thing to do would be to get rid of my game logic layer and implement a physics layer instead. I've got a few questions regarding that:

    1. Is such a physics layer common, or is it more typical to have the game logic layer do this?
    2. Would the snapping-to-grid and piece-dragging code belong to the UI or the physics layer?
    3. Would such a physics layer typically work with pixel sizes or with some kind of logical unit, like my game logic layer?
    4. I've seen event-based collision detection in a game's code base once: the player would just drag the piece, the UI would render that obediently and notify the physics system, and the physics system would call an onCollision() method on the piece once a collision is detected. What is more common? This approach or asking for the legal movement area first?

    [1] Layer is probably not the right word for what I mean, but subsystem sounds overblown and class is misguiding, because each layer can consist of several classes.
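
    To make the physics-layer idea concrete, here is a minimal sketch of what its contract might look like (names are illustrative, not from the post; it stays in logical grid units and leaves pixels to the UI):

        /** Closed interval of logical cells a piece may occupy. */
        record Range(int min, int max) {}

        // Hypothetical physics-layer interface for a block puzzle, in logical units.
        interface PhysicsLayer {
            /** The interval of logical x positions the piece may occupy while dragged. */
            Range legalHorizontalRange(Piece piece);
            /** Snap a tentative logical position to the nearest valid, collision-free cell. */
            int snapToGrid(Piece piece, double tentativeX);
            /** Commit a move; implementations may fire onCollision() callbacks here. */
            void commitMove(Piece piece, int newX, int newY);
        }

    With a shape like this, the UI converts pixels to logical units at its boundary, and both the dragging constraints and the snapping become unit-testable without any UI code.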

    Read the article
