Search Results

Search found 5806 results on 233 pages for 'graphics'.

Page 73 of 233

  • Zoom image to pixel level

    - by zaf
    For an art project, one of the things I'll be doing is zooming in on an image to a particular pixel. I've been rubbing my chin and would love some advice on how to proceed. Here are the input parameters:
    Screen: sw - screen width; sh - screen height
    Image: iw - image width; ih - image height
    Pixel: px - x position of the pixel in the image; py - y position of the pixel in the image
    Zoom: zf - zoom factor (0.0 to 1.0)
    Background colour: bc - background colour to use when the screen and image aspect ratios are different
    Outputs: the zoomed image (no anti-aliasing) and the screen position/dimensions of the pixel we are zooming to. When zf is 0 the image must fit the screen with the correct aspect ratio. When zf is 1 the selected pixel fits the screen with the correct aspect ratio. One idea I had was to use something like POV-Ray and move the camera towards a big image texture, or some library (e.g. pygame) to do the zooming. Can anyone think of something more clever, with simple pseudo code? To keep it simpler you can make the image and screen have the same aspect ratio; I can live with that. I'll update with more info as it's required.
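    A minimal sketch of one way to do the mapping (not from the original post; Python, assuming square pixels, nearest-neighbour rendering, and geometric interpolation of the scale so the zoom rate feels even):

        def zoom_view(sw, sh, iw, ih, px, py, zf):
            # Scale at zf = 0: the whole image fits the screen.
            scale0 = min(sw / iw, sh / ih)
            # Scale at zf = 1: one image pixel spans the screen's short side.
            scale1 = min(sw, sh)
            # Geometric interpolation keeps the zoom speed perceptually steady.
            scale = scale0 * (scale1 / scale0) ** zf
            # The point held at screen centre drifts from the image centre
            # to the centre of the target pixel as zf goes 0 -> 1.
            cx = (1 - zf) * (iw / 2) + zf * (px + 0.5)
            cy = (1 - zf) * (ih / 2) + zf * (py + 0.5)
            # Top-left corner of the image on screen.
            ox = sw / 2 - cx * scale
            oy = sh / 2 - cy * scale
            # Screen rect of the target pixel: (x, y, width, height).
            pixel_rect = (ox + px * scale, oy + py * scale, scale, scale)
            return scale, (ox, oy), pixel_rect

    Any screen area left uncovered by the image rect gets filled with bc.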


  • Tinting iPhone application screen red

    - by btschumy
    I'm trying to place a red tint on all the screens of my iPhone application. I've experimented on a bitmap and found I get the effect I want by compositing a dark red color onto the screen image using Multiply (kCGBlendModeMultiply). So the question is how to do this efficiently, in real time, on the iPhone. One dumb way might be to grab a bitmap of the current screen, composite into the bitmap and then write the composited bitmap back to the screen. This seems like it would almost certainly be too slow. In addition, I would need some way of knowing when part of the screen has been redrawn so I can update the tinting. I can almost get the effect I want by putting a red, translucent, fullscreen UIView above everything. That tints everything red without further intervention on my part, but the effect is much "muddier" than the result of the composite. So do any wizards out there know of some mechanism I can use to automatically composite the red over the app, in similar fashion to what the translucent red UIView does?
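    For reference, the per-channel math behind the two effects being compared (a sketch, with channels as floats in 0..1; function names are illustrative):

        def multiply(base, tint):
            # kCGBlendModeMultiply: darkens proportionally, preserves contrast.
            return tuple(b * t for b, t in zip(base, tint))

        def alpha_over(base, tint, alpha):
            # A translucent overlay: pulls every pixel toward the tint colour,
            # which is why it looks "muddier" than the multiply composite.
            return tuple((1 - alpha) * b + alpha * t for b, t in zip(base, tint))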


  • iPhone CALayer Stacking Order

    - by Brian
    I'm using CALayers to draw to a UITableViewCell. I'm trying to figure out how layers are ordered relative to the content of the UITableViewCell. For instance: I add labels to the UITableViewCell in my tableView:cellForRowAtIndexPath: method. In the drawRect: method of the UITableViewCell I draw some content using the current context. Also, in drawRect: I add a few sublayers. So what is the order of these elements? I know I have zPosition on the CALayers, but I'm not sure whether they are always on top of any subviews of the UITableViewCell, and I'm not sure where the content drawn in drawRect: comes in the order. Any help or links to documentation would be great. I have read through the Core Animation Programming Guide and didn't see anywhere where this is answered.


  • What is faster with PictureBox: many small redraws or a complete redraw?

    - by kornelijepetak
    I have a PictureBox (Windows Mobile 6 WinForms) on which I draw some images. There is a background image that does not change. However, objects drawn on the PictureBox move during the application's lifetime, so I need to refresh the background. Since the items that are redrawn fill from 50% to 80% of the surface, the question is which of the two is faster: 1) Redraw only the parts of the background image that have changed (previous + next location of the moving object). 2) Redraw the complete background and then draw all the objects in their current positions. The reason for asking is that I am not sure how much processor power a single DrawImage operation needs and what the time-consuming factors are. I am aware that if the objects cover almost the whole background, it would be stupid to redraw portions of it, because by drawing the portions I will have drawn the complete picture anyway. But since sometimes only half of the image has changed (some objects remained in their old positions), it may (perhaps) be beneficial to redraw only those regions. But I need your insight on this... Thanks.
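    One rough way to frame the trade-off (a sketch, not a benchmark; it assumes dirty regions are tracked as rectangles and that blit cost scales roughly with the area copied):

        def redraw_plan(dirty_rects, screen_w, screen_h, threshold=0.6):
            """Return the rects worth redrawing individually, or None
            to signal that one full-background redraw is cheaper."""
            screen_area = screen_w * screen_h
            # Upper bound: overlapping rects get counted twice.
            dirty_area = sum(w * h for (_, _, w, h) in dirty_rects)
            if dirty_area >= threshold * screen_area:
                return None
            return dirty_rects

    The real answer depends on the per-call overhead of DrawImage on the device, so measuring both paths on target hardware is the only reliable test.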


  • What is the idea behind scaling an image using Lanczos?

    - by banister
    Hi, I'm interested in image scaling algorithms and have implemented the bilinear and bicubic methods. However, I have heard of Lanczos and other more sophisticated methods for even higher-quality image scaling, and I am very curious how they work. Could someone here explain the basic idea behind scaling an image using Lanczos (both upscaling and downscaling) and why it results in higher quality? I do have a background in Fourier analysis and have done some signal processing in the past, but not in relation to image processing, so don't be afraid to use terms like "frequency response" and such in your answer :) EDIT: I guess what I really want to know is the concept and theory behind using a convolution filter for interpolation. (Note: I have already read the Wikipedia article on Lanczos resampling but it didn't have nearly enough detail for me.) Thanks a lot!
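    For the concrete shape of the filter, here is the standard Lanczos kernel and a 1-D resample step (a sketch; the 2-D case applies it separably along each axis):

        import math

        def lanczos(x, a=3):
            # Windowed sinc: sinc(x) * sinc(x / a) for |x| < a, else 0.
            if x == 0:
                return 1.0
            if abs(x) >= a:
                return 0.0
            px = math.pi * x
            return a * math.sin(px) * math.sin(px / a) / (px * px)

        def resample_1d(src, t, a=3):
            """Sample the signal `src` (a list) at fractional position t."""
            lo, hi = math.floor(t) - a + 1, math.floor(t) + a
            total = wsum = 0.0
            for i in range(lo, hi + 1):
                w = lanczos(t - i, a)
                total += w * src[min(max(i, 0), len(src) - 1)]  # clamp at edges
                wsum += w
            return total / wsum  # normalise so the weights sum to 1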


  • Is there a tool out there that lets you print a colour chart / palette of colours used on a web page?

    - by undefined
    I want to print a table of the colours used in a web page that my graphic designer has produced - I have .png files at present and use Fireworks to view them. It would be great if there were a tool that lets you print a table with each colour and its hex value so I can easily reference it when programming. Has anyone come across such a thing? It sounds to me like there should be a Firefox extension or something similar.
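    Failing a ready-made tool, pulling a palette out of the .png directly takes only a few lines (a sketch using the Pillow imaging library, which is my assumption, not something the poster mentions):

        from PIL import Image

        def print_palette(path, max_colours=4096):
            img = Image.open(path).convert("RGB")
            counts = img.getcolors(max_colours) or []  # None if too many colours
            counts.sort(reverse=True)  # most frequent first
            for n, (r, g, b) in counts:
                print(f"#{r:02X}{g:02X}{b:02X}  ({n} px)")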


  • Graphic programming over video playback

    - by ignatius
    I want to make a GUI mock-up program for a video player, so my idea is just to show some menu pictures over real video being played back. I have a working program made with C and SDL that loads pictures and makes a slideshow, but I don't know how to put this over video with transparency. Do you have a hint? P.S. I usually program in Python and C, so if there is any solution involving these two I'd highly appreciate it. Thanks!
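    One route in Python (a sketch with pygame; it assumes you can obtain each decoded video frame as a pygame surface, which is the hard part and is not covered here):

        import pygame

        def draw_frame(screen, video_frame, menu):
            # `menu` should be loaded once with per-pixel alpha, e.g.:
            #   menu = pygame.image.load("menu.png").convert_alpha()
            # ("menu.png" is an illustrative file name.)
            screen.blit(video_frame, (0, 0))  # the current video frame
            screen.blit(menu, (0, 0))         # alpha-blended menu on top
            pygame.display.flip()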


  • Why do my buttons stay pressed?

    - by Timmmm
    In my activity I respond to an onClick() by replacing the current view with a new one (setContentView()). For some reason, when I do this and then go back to the original view later, the button I originally pressed still looks pressed. If you 'refresh' it somehow (e.g. scroll over it with the trackball) then it reverts to the unpressed state. It's as if the button never gets the 'touch-up' event. The weird thing is: this only happens on my T-Mobile Pulse (Android 1.5); it doesn't happen in the emulator. I've tried calling invalidate()/postInvalidate() on the view when I show it, but it has no effect.


  • How to implement escape sequences in SDL

    - by Sa'me Smd
    I'm trying to use the STL library inside SDL, but it gives me the error "undeclared identifier". Is there any way I can use "\n" or even cout << endl;? Can the function SDL_WarpMouse, which places the mouse cursor at a desired location on screen, help me with this? I want to put a tile on the next line in sequence. I hope you get the question. It's a very vague and messed-up question, though (sorry for that). EDIT:

        void putMap(SDL_Surface* tile, SDL_Surface* screen)
        {
            for (int y = 0; y < 21; y++)
            {
                for (int x = 0; x < 60; x++)
                {
                    if (maze[x][y] != '#')
                    {
                        apply_surface(x * 10, y * 10, tile, screen);
                    }
                }
                cout << endl;
            }
        }

    c:\documents and settings\administrator\my documents\visual studio 2008\projects\craptest\craptest\main.cpp(605) : error C2065: 'cout' : undeclared identifier
    c:\documents and settings\administrator\my documents\visual studio 2008\projects\craptest\craptest\main.cpp(605) : error C2065: 'endl' : undeclared identifier

    This is my apply_surface function:

        void apply_surface(int x, int y, SDL_Surface* source, SDL_Surface* destination)
        {
            // Make a temporary rectangle to hold the offsets
            SDL_Rect offset;

            // Give the offsets to the rectangle
            offset.x = x;
            offset.y = y;

            // Blit the surface
            SDL_BlitSurface(source, NULL, destination, &offset);
        }


  • Fastest possible way to render 480 x 320 background as iPhone OpenGL ES textures

    - by unknownthreat
    I need to display a 480 x 320 background image in OpenGL ES. The thing is, I experienced a bit of a slowdown on the iPhone when I used a 512 x 512 texture size, so I am looking for the optimum case for rendering an iPhone-resolution background in OpenGL ES. How should I slice the background in this case to obtain the best possible performance? My main concern is speed. Should I go for 256 x 256 or other texture sizes here?
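    Since OpenGL ES 1.x on the early iPhones requires power-of-two textures, one option is to cover each axis greedily with power-of-two strips (a sketch; whether fewer large tiles or more small ones wins is exactly what needs profiling on the device):

        def pow2_spans(length):
            """Split a length into power-of-two spans, e.g. 480 -> 256+128+64+32."""
            spans, pos = [], 0
            while pos < length:
                size = 1 << ((length - pos).bit_length() - 1)  # largest pow2 that fits
                spans.append((pos, size))
                pos += size
            return spans

        tiles = [(x, y, w, h)
                 for (x, w) in pow2_spans(480)
                 for (y, h) in pow2_spans(320)]
        # 480 x 320 -> 4 x 2 = 8 tiles, the largest being 256 x 256.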


  • Most efficient algorithm for mesh-level, optimal occlusion culling?

    - by Fredriku73
    I am new to culling. At first glance, it seems that most occlusion culling algorithms are object-level, not examining single meshes, which would be practical for game rendering. What I am looking for is an algorithm that culls all meshes within a single object that are occluded from a given viewpoint, with high accuracy. It needs to be O(n log n) or better; a naive mesh-by-mesh comparison (O(n^2)) is too slow. I notice that the Blender GUI identifies the occluded meshes for you in real time, even if you work with large objects of 10,000+ meshes. What algorithm is used there, pray tell?


  • Animation effects in C#

    - by Optimizer
    Can someone point me to a C# open-source implementation of simple image animations? E.g. I feed the input image to the animator, and the animation code produces a few dozen images which, if displayed sequentially, look like an animation. I am not after something extremely fancy - simple DirectX-filter-like animations would do. Thank you.
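    As an illustration of the kind of frame generation involved (a sketch in Python with Pillow rather than C#, just to show the idea: each output frame is a blend between the source image and a target):

        from PIL import Image

        def fade_out_frames(path, steps=24):
            src = Image.open(path).convert("RGB")
            black = Image.new("RGB", src.size)  # defaults to black
            # Image.blend(a, b, t) returns a*(1-t) + b*t per channel.
            return [Image.blend(src, black, i / (steps - 1)) for i in range(steps)]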


  • Simulating 3D 'cards' with just orthographic rendering

    - by meds
    I am rendering textured quads from an orthographic perspective and would like to simulate 'depth' by modifying the UVs and the vertex positions of the quad's four points (top left, top right, bottom left, bottom right). I've found that if I make the top-left and bottom-right corners' y positions the same, I don't get a linear 'skew' but rather a warped one, where the texture covering the top triangle (of the two that make up the quad) seems to get squashed while the bottom triangle's texture looks normal. I can change the UVs and any of the four points on the quad (but only in 2D space; it's an orthographic projection anyway, so 3D space won't matter much). So basically I'm trying to simulate perspective on a two-dimensional quad in an orthographic projection. Any ideas? Is it even mathematically possible/feasible? Ideally what I'd like is a situation where I can set an x/y rotation as well as a virtual z 'position' (which simulates z depth) through a function and have it internally calculate the positions/UVs to create the 3D effect. It seems like this should all be mathematical, where a set of 2D transforms can be applied to each corner of the quad to simulate depth; I just don't know how to make it happen. I'd guess it requires trigonometry or something; I'm trying to crunch the math but not making much progress. Here's what I mean: top left is just the card, centre is the card with a y rotation of X degrees, and rightmost is a card with x and y rotations of different degrees.
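    The corner positions themselves are straightforward: rotate the card's corners in 3D, then perspective-project them back to 2D (a sketch; the part this does not solve is that a plain two-triangle quad interpolates UVs affinely, which causes exactly the squashed-triangle artifact described - fixing that needs projective texture coordinates or subdividing the quad):

        import math

        def card_corners(w, h, rx, ry, z, d=500.0):
            """Project the corners of a w x h card rotated rx/ry radians,
            pushed to virtual depth z; d is an illustrative focal length."""
            pts = [(-w / 2, -h / 2), (w / 2, -h / 2),
                   (-w / 2,  h / 2), (w / 2,  h / 2)]
            cx, sx = math.cos(rx), math.sin(rx)
            cy, sy = math.cos(ry), math.sin(ry)
            out = []
            for x, y in pts:
                # Rotate about the x axis, then about the y axis.
                y, zz = y * cx, y * sx
                x, zz = x * cy + zz * sy, -x * sy + zz * cy
                # Perspective divide: farther corners shrink toward the centre.
                f = d / (d + z + zz)
                out.append((x * f, y * f))
            return out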


  • When should a uniform be used in shader programming?

    - by Phineas
    In a vertex shader, I calculate a vector using only uniforms. Therefore, the outcome of this calculation is the same for all instantiations of the vertex shader. Should I just do this calculation on the CPU and upload it as a uniform? What if I have ten such calculations? If I upload a lot of uniforms in this way, does CPU-GPU communication ever get so slow that recomputing such values in the vertex shader is actually faster?


  • Optimal pixel format for drawing on iPhone?

    - by Felixyz
    Pretty simple question: when doing some pretty intense drawing with Core Graphics on the iPhone, how can I specify the pixel format to get optimal performance? Is the format I get from the context via UIGraphicsGetCurrentContext by definition the best one? I know that RGB565 is supposed to be the fastest to use in OpenGL. Does that go for Core Graphics as well? Any general advice?


  • Alpha blending colors in .NET Compact Framework 2.0

    - by Adam Haile
    In the full .NET Framework you can use the Color.FromArgb() method to create a new color with alpha blending, like this:

        Color blended = Color.FromArgb(alpha, color);

    or

        Color blended = Color.FromArgb(alpha, red, green, blue);

    However, in the Compact Framework (2.0 specifically), neither of those overloads is available; you only get:

        Color.FromArgb(int red, int green, int blue);

    and

        Color.FromArgb(int val);

    The first one, obviously, doesn't even let you enter an alpha value, but the documentation for the latter shows that "val" is a 32-bit ARGB value (0xAARRGGBB, as opposed to the standard 24-bit 0xRRGGBB), so it would make sense that you could just build the ARGB value and pass it to the function. I tried this with the following:

        private Color FromARGB(byte alpha, byte red, byte green, byte blue)
        {
            int val = (alpha << 24) | (red << 16) | (green << 8) | blue;
            return Color.FromArgb(val);
        }

    But no matter what I do, the alpha blending never works; the resulting color always has full opacity, even when setting the alpha value to 0. Has anyone gotten this to work on the Compact Framework?
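    If the drawing surface simply ignores the alpha channel (a common limitation of constrained GDI implementations), one fallback is to blend the colours manually before drawing opaquely. The per-channel math, sketched in Python for brevity (names illustrative):

        def blend(src, dst, alpha):
            """Classic 'source over': alpha 0..255, channels 0..255."""
            return tuple((alpha * s + (255 - alpha) * d) // 255
                         for s, d in zip(src, dst))

        # blend((255, 0, 0), (0, 0, 255), 128) -> (128, 0, 127),
        # i.e. roughly half red, half blue.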


  • Extracting a quadrilateral image to a rectangle

    - by Will
    In the image below, the sign on the side of the van is not face-on to the camera. I want to calculate, as best I can with the pixels I have, what it'd look like face-on. I imagine this is some kind of loop through the x and y axes, doing a Bresenham's line on both dimensions at once, with some kind of mixing when pixels in the source image overlap - some sort of sub-pixel blending? What approaches are there, and how do you mix the pixels? Is there a standard approach for this?
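    The usual name for this is perspective (homography) rectification; a simpler first cut maps each output pixel back into the quadrilateral by bilinear interpolation of its corners (a sketch; exact for affine distortion, only approximate for true perspective):

        def quad_to_rect(get_pixel, quad, out_w, out_h):
            """quad: [tl, tr, br, bl] corner (x, y) pairs in the source image.
            get_pixel(x, y) samples the source (nearest neighbour here;
            bilinearly sampling the source would look smoother)."""
            (x0, y0), (x1, y1), (x2, y2), (x3, y3) = quad
            rows = []
            for j in range(out_h):
                v = j / max(out_h - 1, 1)
                row = []
                for i in range(out_w):
                    u = i / max(out_w - 1, 1)
                    # Bilinear blend of the four corners at (u, v).
                    x = (1-u)*(1-v)*x0 + u*(1-v)*x1 + u*v*x2 + (1-u)*v*x3
                    y = (1-u)*(1-v)*y0 + u*(1-v)*y1 + u*v*y2 + (1-u)*v*y3
                    row.append(get_pixel(int(round(x)), int(round(y))))
                rows.append(row)
            return rows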


  • Hanging forms when I drag and move my forms

    - by hosseinsinohe
    I'm writing a Windows application in C#. I use a picture as the background of my form (MainForm), pictures on many buttons on this form, and some panels and labels with a transparent background colour. My forms, panels and buttons flickered; I solved that problem with a method in this thread. But when other forms are opened over this form, my forms still hang when I drag and move them over it. How can I make dragging and moving my forms smooth and fast? Edit: My forms load data from an Access 2007 database file. I use DataSets, DataGridViews and other components to load and show data in my forms.


  • What is the best approach to 2D collision detection on the iPhone?

    - by Magic Bullet Dave
    Been working on this problem of collision detection, and there appear to be three main approaches I could take:
    1. Sprite and mask approach: AND the overlapping regions of the sprites together and check for a non-zero value in the resulting pixel data.
    2. Bounding circles, rectangles or polygons: create one or more shapes that enclose the sprites and do the basic maths to check for overlaps.
    3. Use an existing sprite library.
    The first approach, even though it would have been the way I would have done it in the old days of 16x16 sprite blocks: it appears there just isn't an easy way of getting at the individual image pixel data and/or alpha channel within Quartz (or OpenGL, for that matter). Detecting the overlap of the bounding boxes is easy, but then creating a third image from the overlap and testing it for pixels is complicated, and my gut feeling is that even if we could get it to work it would be slow. Am I missing something neat here? The second approach involves dividing our sprites up into several polygons and testing them for overlaps. The more polygons, the more accurate the collision detection. The benefit is that it is fast and can be accurate. The downside is that it makes sprite creation more complicated, i.e. we have to create the polygons for each sprite. For speed the best approach is to create a tree of polygons. The third approach I'm not sure about, as it involves buying code (or using an open-source licence). I am not sure what the best library to use is, or whether it would make life easier or give us a problem integrating it into our app. So in short I am favouring the polygon-and-tree approach, and would appreciate your views on this before I go and write lots of code. Best regards Dave
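    For reference, the core tests behind the second approach are tiny (a sketch; circles as (cx, cy, r), rectangles as (x, y, w, h)):

        def circles_overlap(a, b):
            (ax, ay, ar), (bx, by, br) = a, b
            # Compare squared distances to avoid a square root.
            return (ax - bx) ** 2 + (ay - by) ** 2 <= (ar + br) ** 2

        def rects_overlap(a, b):
            (ax, ay, aw, ah), (bx, by, bw, bh) = a, b
            return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

    Polygon-vs-polygon needs a separating-axis test, which is where the per-sprite polygon trees earn their keep.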


  • How to draw/manage a hexagon grid?

    - by W.N.
    I've read this article: generating/creating hexagon grid in C. But it looks like both the author and the answerer have abandoned it. √(hexagonSide - hexagonWidth * hexagonWidth): what are hexagonSide and hexagonWidth? Won't the expression be < 0 (so the square root can't be calculated)? And can I fit a hexagon into a rectangle? I need to create a grid like this: One more thing: how can I arrange my array to store the data, as well as find which cells are next to a given cell? I have never been taught about hexagons, so I know nothing about them, but I can easily learn new things, so if you can explain or give me a clue, I may work it out myself.
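    For storage and neighbours, axial coordinates are the usual trick (a sketch; a dict keyed by (q, r) stands in for the array, and the six offsets are the standard axial neighbour set):

        HEX_NEIGHBOURS = [(+1, 0), (+1, -1), (0, -1), (-1, 0), (-1, +1), (0, +1)]

        def neighbours(grid, q, r):
            """Cells adjacent to (q, r) that actually exist in the grid."""
            return [grid[(q + dq, r + dr)]
                    for dq, dr in HEX_NEIGHBOURS
                    if (q + dq, r + dr) in grid]

        # grid = {(q, r): cell_data, ...}; any rectangular range of (q, r)
        # pairs works, so hexagons do map onto a rectangular array.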

