Search Results

Search found 5873 results on 235 pages for 'raster graphics'.

Page 71/235 | < Previous Page | 67 68 69 70 71 72 73 74 75 76 77 78  | Next Page >

  • Equivalent of CGPoint with integers?

    - by Ivan Vucica
    Cheers, I like strict typing in C. Therefore, I don't want to store a 2D vector of floats if I specifically need integers. Is there an Apple-provided equivalent of CGPoint which stores data as integers? I've implemented my type Vector2i and its companion function Vector2iMake() à la CGPoint, but something deep in me screams that Apple was there already.
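
    For reference, a minimal C sketch of the kind of type the question describes, mirroring the CGPointMake pattern; this is a hand-rolled struct, not an Apple-provided API:

        #include <stdbool.h>

        typedef struct Vector2i {
            int x;
            int y;
        } Vector2i;

        /* Companion constructor in the style of CGPointMake(). */
        static inline Vector2i Vector2iMake(int x, int y)
        {
            Vector2i v = { x, y };
            return v;
        }

        /* Exact equality is cheap and well-defined for integer coordinates. */
        static inline bool Vector2iEqual(Vector2i a, Vector2i b)
        {
            return a.x == b.x && a.y == b.y;
        }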

    Read the article

  • OpenGL Color Interpolation across vertices

    - by gutsblow
    Right now, I have more than 25 vertices that form a model. I want to interpolate color linearly between the first and last vertex. The problem is that when I write the following code

        glColor3f(1.0, 0.0, 0.0);
        glVertex3f(1.0, 1.0, 1.0);
        glVertex3f(0.9, 1.0, 1.0);
        // ... more vertices ...
        glColor3f(0.0, 0.0, 1.0);
        glVertex3f(0.0, 0.0, 0.0);

    all the vertices except the last one are red. Is there a way to interpolate color across these vertices without manually computing a color at each vertex (the way OpenGL does automatically between two colored vertices), since I will have many more colors at various vertices? Any help would be extremely appreciated. Thank you!
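
    A minimal immediate-mode sketch in plain C of the usual workaround: since OpenGL only interpolates between the colors it is given per vertex, compute each vertex's color yourself by lerping between the end colors on the vertex index. The GL_LINE_STRIP primitive and the vertex array are assumptions, and a current GL context is required:

        #include <GL/gl.h>

        /* Interpolate color from red to blue across n vertices by
           assigning an explicit per-vertex color. */
        static void draw_with_gradient(const float (*verts)[3], int n)
        {
            glBegin(GL_LINE_STRIP);
            for (int i = 0; i < n; ++i) {
                float t = (n > 1) ? (float)i / (float)(n - 1) : 0.0f; /* 0 at first vertex, 1 at last */
                glColor3f(1.0f - t, 0.0f, t);   /* red -> blue */
                glVertex3f(verts[i][0], verts[i][1], verts[i][2]);
            }
            glEnd();
        }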

    Read the article

  • How to write text (using CGContextShowTextAtPoint) at graph x- and y-axis interval points?

    - by Rajendra Bhole
    I developed a graph using an NSObject subclass and CGContext calls. The following code draws the X- and Y-axis tick intervals dynamically:

        CGContextSetRGBStrokeColor(ctx, 2.0, 2.0, 2.0, 1.0);
        CGContextSetLineWidth(ctx, 2.0);
        CGContextMoveToPoint(ctx, 30.0, 200.0);
        CGContextAddLineToPoint(ctx, 30.0, 440.0);
        for (float y = 400.0; y >= 200.0; y -= 30) {
            CGContextSetRGBStrokeColor(ctx, 2.0, 2.0, 2.0, 1.0);
            CGContextMoveToPoint(ctx, 28, y);
            CGContextAddLineToPoint(ctx, 32, y);
            CGContextStrokePath(ctx);
            //CGContextClosePath(ctx);
        }
        CGContextMoveToPoint(ctx, 10, 420.0);
        CGContextAddLineToPoint(ctx, 320, 420.0);
        //CGContextAddLineToPoint(ctx, 320.0, 420.0);
        //CGContextStrokePath(ctx);
        for (float x = 60.0; x <= 260.0; x += 30) {
            CGContextSetRGBStrokeColor(ctx, 2.0, 2.0, 2.0, 1.0);
            CGContextMoveToPoint(ctx, x, 418.0);
            CGContextAddLineToPoint(ctx, x, 422.0);
            CGContextStrokePath(ctx);
            CGContextClosePath(ctx);
        }

    How can I draw dynamic text on the X- and Y-axis lines near these intervals (like the X-axis denoting the number of days per week and the Y-axis denoting something per something)? Thanks.
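
    A minimal sketch (plain C against Core Graphics, reusing the same ctx and X-axis tick loop as above) of labelling each tick with CGContextShowTextAtPoint. The label text, offsets, and font size are placeholders, and the text-matrix flip is there because UIKit's coordinate system is inverted relative to Core Graphics text drawing:

        /* Needs <stdio.h> and <string.h> for snprintf/strlen. */
        char label[8];
        CGContextSelectFont(ctx, "Helvetica", 10.0, kCGEncodingMacRoman);
        CGContextSetTextDrawingMode(ctx, kCGTextFill);
        CGContextSetRGBFillColor(ctx, 0.0, 0.0, 0.0, 1.0);
        /* Flip text so it is not drawn mirrored in a UIView's context. */
        CGContextSetTextMatrix(ctx, CGAffineTransformMakeScale(1.0, -1.0));

        int day = 1;
        for (float x = 60.0; x <= 260.0; x += 30) {
            snprintf(label, sizeof(label), "%d", day++);
            /* Draw the label a few points below the X-axis tick. */
            CGContextShowTextAtPoint(ctx, x - 3.0, 434.0, label, strlen(label));
        }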

    Read the article

  • Is there a tool out there that lets you print a colour chart / palette of colours used on a web page

    - by undefined
    I want to print a table of the colours used in a web page that my graphic designer has produced - I have .png files at present and use Fireworks to view them. It would be great if there was a tool that lets you print a table with the colour and hex value so I can easily reference when programming. Anyone come across such a thing? Sounds to me like there should be a firefox extension or similar?

    Read the article

  • correcting fisheye distortion programmatically

    - by Will
    I have some points that describe positions in a picture taken with a fisheye lens. I've found this description of how to generate a fisheye effect, but not how to reverse it. How do you calculate the radial distance from the centre to go from fisheye to rectilinear? My function stub looks like this:

        Point correct_fisheye(const Point& p, const Size& img) {
            // to polar
            const Point centre = {img.width/2, img.height/2};
            const Point rel = {p.x-centre.x, p.y-centre.y};
            const double theta = atan2(rel.y, rel.x);
            double R = sqrt((rel.x*rel.x) + (rel.y*rel.y));
            // fisheye undistortion in here please
            //... change R ...
            // back to rectangular
            const Point ret = Point(centre.x+R*cos(theta), centre.y+R*sin(theta));
            fprintf(stderr,"(%d,%d) in (%d,%d) = %f,%f = (%d,%d)\n",
                    p.x, p.y, img.width, img.height, theta, R, ret.x, ret.y);
            return ret;
        }

    Alternatively, I could somehow convert the image from fisheye to rectilinear before finding the points, but I'm completely befuddled by the OpenCV documentation. Is there a straightforward way to do it in OpenCV, and does it perform well enough to do it to a live video feed?
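
    For what it's worth, a minimal sketch in plain C of the radius remapping, assuming an equidistant (f-theta) fisheye model with focal length f in pixels; real lenses vary, so both f and the model are assumptions that would need calibrating:

        #include <math.h>

        /* Map a fisheye radial distance (pixels from the centre) to the
           rectilinear radial distance, assuming r_fisheye = f * theta. */
        static double undistort_radius(double r_fisheye, double f)
        {
            double theta = r_fisheye / f;   /* angle from the optical axis */
            return f * tan(theta);          /* rectilinear: r = f * tan(theta) */
        }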

    Read the article

  • app burns numbers into iPad screens, how can I prevent this?

    - by Andrew Johnson
    EDIT: My code for this is actually open source, if anyone would be able to look and comment. Things I can think of that might be an issue: using a custom font, using bright green, updating the label too fast?

        The repo is: https://github.com/andrewljohnson/StopWatch-of-Gaia
        The class for the time label: https://github.com/andrewljohnson/StopWatch-of-Gaia/blob/master/src/SWPTimeLabel.m
        The class that runs the timer to update the label: https://github.com/andrewljohnson/StopWatch-of-Gaia/blob/master/src/SWPViewController.m

    =============

    My StopWatch app reportedly burns its numbers into a number of iPad screens for temporary periods. Does anyone have a suggestion about how I might prevent this screen persistence? Some known workaround to blank the pixels occasionally? I get emails all the time about it, and you can see numerous reviews here: http://itunes.apple.com/us/app/stopwatch+-timer-for-gym-kitchen/id518178439?mt=8

    Apple cannot advise me. I sent an email to appreview, and I was told to file a technical support request (DTS). When I filed the DTS, they told me it was not a code issue, and when I further asked for help from DTS, a "senior manager" told me that this was not an issue Apple knew about. He further advised me to file a bug with the Apple Radar bug tracker if I considered it to be a real issue. I filed the Radar bug a few weeks ago, but it has not been acknowledged. Updated Radar link for Apple employees, per a commenter's note: rdar://12173447

    Read the article

  • Encode complex number as RGB pixel and back

    - by Vi
    What is a good way to encode a complex number as an RGB pixel and vice versa? Probably the (logarithm of the) absolute value goes to brightness and the argument goes to hue. Desaturated pixels should receive a randomized argument in the reverse transformation. Something like:

        0    -> (0,0,0)
        1    -> (255,0,0)
        -1   -> (0,255,255)
        0.5  -> (128,0,0)
        i    -> (255,255,0)
        -i   -> (255,0,255)

        (0,0,0)       -> 0
        (255,255,255) -> e^(i * random)
        (128,128,128) -> 0.5 * e^(i * random)
        (0,128,128)   -> -0.5

    Are there ready-made formulas for that? Edit: Looks like I just need to convert RGB to HSB and back. Edit 2: Existing RGB-to-HSV converter fragment:

        if (hsv.sat == 0) {
            hsv.hue = 0; // !
            return hsv;
        }

    I don't want 0 here. I want random. And not just when hsv.sat == 0, but whenever it is lower than it should be ("should be" meaning maximum saturation, i.e. the saturation produced by the forward transformation from a complex number).
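
    A minimal sketch in plain C of one possible forward mapping (argument to hue, clamped magnitude to value, full saturation); it does not reproduce the exact table above, and the clamping of the magnitude to 1 is an assumption:

        #include <math.h>

        typedef struct { unsigned char r, g, b; } RGB;

        /* Map hue [0,360), sat [0,1], val [0,1] to RGB. */
        static RGB hsv_to_rgb(double h, double s, double v)
        {
            double c = v * s;
            double x = c * (1.0 - fabs(fmod(h / 60.0, 2.0) - 1.0));
            double m = v - c, r = 0, g = 0, b = 0;
            if      (h <  60) { r = c; g = x; }
            else if (h < 120) { r = x; g = c; }
            else if (h < 180) { g = c; b = x; }
            else if (h < 240) { g = x; b = c; }
            else if (h < 300) { r = x; b = c; }
            else              { r = c; b = x; }
            RGB out = { (unsigned char)((r + m) * 255.0 + 0.5),
                        (unsigned char)((g + m) * 255.0 + 0.5),
                        (unsigned char)((b + m) * 255.0 + 0.5) };
            return out;
        }

        /* Encode re + i*im: argument -> hue, clamped magnitude -> value. */
        static RGB complex_to_rgb(double re, double im)
        {
            double hue = atan2(im, re) * 180.0 / M_PI;   /* (-180, 180] */
            if (hue < 0) hue += 360.0;
            double mag = hypot(re, im);
            double val = mag > 1.0 ? 1.0 : mag;          /* assume |z| clamped to 1 */
            return hsv_to_rgb(hue, 1.0, val);
        }

    The reverse mapping would invert this, substituting a random argument whenever the pixel's saturation falls below the expected maximum.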

    Read the article

  • Optimizing drawing on UITableViewCell

    - by Brian
    I am drawing content to a UITableViewCell and it is working well, but I'm trying to understand if there is a better way of doing this. Each cell has the following components:

      - Thumbnail on the left side - could come from a server, so it is loaded async
      - Title string - variable length, so each cell could be a different height
      - Timestamp string
      - Gradient background - the gradient goes from the top of the cell to the bottom and is semi-transparent so that background colors shine through with a gloss

    It currently works well. The drawing occurs as follows:

      - UITableViewController inits/reuses a cell, sets the needed data, and calls [cell setNeedsDisplay]
      - The cell has a CALayer for the thumbnail - thumbnailLayer
      - In the cell's drawRect it draws the gradient background and the two strings
      - The cell's drawRect then calls setIcon, which gets the thumbnail and sets the image as the contents of the thumbnailLayer. If the image is not found locally, it sets a loading image as the contents of the thumbnailLayer and asynchronously gets the thumbnail. Once the thumbnail is received, setIcon is called again and resets thumbnailLayer.contents.

    This all currently works, but using Instruments I see that the thumbnail is compositing with the gradient. I have tried the following to fix this: setting the cell's backgroundView to a view whose drawRect would draw the gradient, so that the cell's drawRect could draw the thumbnail and setNeedsDisplayInRect would let me redraw only the thumbnail after it loaded --- but this resulted in the backgroundView's drawing (the gradient) covering the cell's drawing (the text). I would just draw the thumbnail in the cell's drawRect, but when setNeedsDisplay is called, drawRect will just draw over the previous image and the loading image may show through. I would clear the rect, but then I would have to redraw the gradient. I would try to draw the gradient in a CAGradientLayer and store a reference to it so I can quickly redraw it, but I figured I'd have to redraw the gradient if the cell's height changes. Any ideas? I'm sure I'm missing something, so any help would be great.

    Read the article

  • About updating a View in iPhone using Objective-C

    - by Tattat
    I have a scene called testScene; it works like this:

        @interface testScene : myScene {
            IBOutlet UIView *subview;
            IBOutlet UIView *drawingCanvasView;
            IBOutlet UIButton *update;
        }
        - (void)updateDrawingCanvas:(id)sender;

    When the user clicks the update button, it runs the updateDrawingCanvas method. The drawingCanvasView has its own drawingCanvas.h and .m, like this:

        #import <UIKit/UIKit.h>

        @interface DrawingCanvasView : UIView {
            CGImageRef image;
        }
        -(void)setNeedsDisplayInRect:(CGContextRef)context;
        @end

    In DrawingCanvasView, I have a drawRect method like this:

        CGContextRef context = UIGraphicsGetCurrentContext();
        CGContextSetLineWidth(context, 2.0);
        CGContextSetStrokeColorWithColor(context, [UIColor redColor].CGColor);
        CGContextMoveToPoint(context, 0.0f, 0.0f);
        CGContextAddLineToPoint(context, 100.0f, 100.0f);
        CGContextStrokePath(context);

    I want this to execute when the user clicks the button, so I added a new method called setNeedsDisplayInRect:

        CGContextRef context = UIGraphicsGetCurrentContext();
        CGContextSetLineWidth(context, 2.0);
        CGContextSetStrokeColorWithColor(context, [UIColor yellowColor].CGColor);
        CGContextMoveToPoint(context, 0.0f, 0.0f);
        CGContextAddLineToPoint(context, 200.0f, 200.0f);
        CGContextStrokePath(context);

    But I can't call that from my updateDrawingCanvas method, which works like this:

        - (void)updateDrawingCanvas:(id)sender {
            NSLog(@"loaded");
            [DrawingCanvasView setNeedsDisplayInRect:UIGraphicsGetCurrentContext()];
        }

    Is my logic / concept right, or did I do something wrong? Thx.

    Read the article

  • solving for the origin of two vectors

    - by Mike
    I have two endpoints (xa,ya) and (xb,yb) of two vectors, respectively a and b, originating from the same point (xo, yo). Also, I know that |a| = |b| + s, where s is a constant. I tried to compute the origin (xo, yo) but seem to fail at some point. How can I solve this?
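
    Not a full solution, just an observation that may explain why the computation fails: writing A = (xa, ya), B = (xb, yb) and O = (xo, yo), the condition is a single scalar equation in two unknowns,

        |A - O| - |B - O| = s,

    which (for 0 < s < |A - B|) is satisfied by every point O on one branch of a hyperbola with foci A and B, so an additional constraint on O is needed to pin down a unique origin.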

    Read the article

  • draw ios quartz 2d path with a varying alpha component

    - by Giovanni
    Hi, I'd like to paint some Bezier curves with an alpha channel that changes along the curve. Right now I'm able to draw Bezier paths with a fixed alpha channel. What I'd like to do is draw a single Bezier curve that uses one alpha value for the first n points of the path, another alpha value for the subsequent m points, and so on. The code I'm using for drawing a Bezier path is:

        CGContextSetStrokeColorWithColor(context, curva.color.CGColor);
        ....
        CGContextAddCurveToPoint(context, cp1.x, cp1.y, cp2.x, cp2.y, endPoint.x, endPoint.y);
        ....
        CGContextStrokePath(context);

    Is there a way to achieve what I'm describing? Many thanks, Giovanni
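
    A minimal sketch of one workaround, in plain C against Core Graphics (assumes the CoreGraphics header and a valid context): Core Graphics strokes a whole path with a single stroke color, so this flattens the cubic into short line segments and strokes each with its own alpha. The segment count, the linear alpha ramp, and the black stroke color are assumptions; p0..p3 are whatever control points the curve uses.

        /* Evaluate the cubic Bezier defined by p0..p3 at parameter t. */
        static CGPoint bezier_point(CGPoint p0, CGPoint p1, CGPoint p2, CGPoint p3, CGFloat t)
        {
            CGFloat u = 1.0 - t;
            CGPoint r;
            r.x = u*u*u*p0.x + 3*u*u*t*p1.x + 3*u*t*t*p2.x + t*t*t*p3.x;
            r.y = u*u*u*p0.y + 3*u*u*t*p1.y + 3*u*t*t*p2.y + t*t*t*p3.y;
            return r;
        }

        /* Approximate the curve with 'segments' line segments whose stroke
           alpha ramps from a0 at t=0 to a1 at t=1. */
        static void stroke_with_fading_alpha(CGContextRef ctx, CGPoint p0, CGPoint p1,
                                             CGPoint p2, CGPoint p3,
                                             CGFloat a0, CGFloat a1, int segments)
        {
            CGPoint prev = p0;
            for (int i = 1; i <= segments; ++i) {
                CGFloat t = (CGFloat)i / segments;
                CGPoint cur = bezier_point(p0, p1, p2, p3, t);
                CGContextSetRGBStrokeColor(ctx, 0.0, 0.0, 0.0, a0 + (a1 - a0) * t);
                CGContextMoveToPoint(ctx, prev.x, prev.y);
                CGContextAddLineToPoint(ctx, cur.x, cur.y);
                CGContextStrokePath(ctx);
                prev = cur;
            }
        }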

    Read the article

  • Is it possible to use AnimateWindow with AW_BLEND when using a layered window?

    - by wkf
    I am displaying a window using UpdateLayeredWindow and would like to add transition animations. AnimateWindow works if I use the slide or roll effects (though there is some flickering). However, when I try to use AW_BLEND to produce a fade effect, I not only lose any translucency after the animation (per-pixel and on the entire image), but a default window border also appears. Is there a way to prevent the border from appearing?
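
    A minimal sketch of one common workaround (plain C, Win32): skip AnimateWindow entirely and fade the layered window yourself by repeatedly calling UpdateLayeredWindow with an increasing SourceConstantAlpha. The step count, the 10 ms timing, and the assumption that hdcMem/sizeWnd/ptSrc describe the window's premultiplied ARGB backing bitmap are placeholders:

        #include <windows.h>

        /* Fade in a layered window over roughly steps * 10 ms by ramping the
           constant alpha while keeping per-pixel alpha (AC_SRC_ALPHA). */
        static void FadeInLayeredWindow(HWND hwnd, HDC hdcMem, SIZE sizeWnd,
                                        POINT ptSrc, int steps)
        {
            BLENDFUNCTION bf;
            bf.BlendOp = AC_SRC_OVER;
            bf.BlendFlags = 0;
            bf.AlphaFormat = AC_SRC_ALPHA;      /* keep per-pixel translucency */

            for (int i = 0; i <= steps; ++i) {
                bf.SourceConstantAlpha = (BYTE)(255 * i / steps);
                UpdateLayeredWindow(hwnd, NULL, NULL, &sizeWnd,
                                    hdcMem, &ptSrc, 0, &bf, ULW_ALPHA);
                Sleep(10);
            }
        }

    Driving the same loop from a timer instead of Sleep keeps the UI responsive, but the idea is the same.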

    Read the article

  • What are some good algorithms for drawing lines between graph nodes?

    - by ApplePieIsGood
    What I'm specifically grappling with is not just the layout of a graph: when a user selects a graph node and starts to drag it around the screen area, the line has to be constantly redrawn to reflect what it would look like if the user were to release the node. I suppose this is part of the layout algorithm? Also, some applications get a bit fancy and don't simply draw the line in a nice curvy way, but bend the line around the square-shaped node at almost right angles. See the attached image, and keep in mind that as a node is dragged, the line is drawn as marching ants and re-arranged nicely, while retaining its curved style.

    Read the article

  • ImageMagick Reflection

    - by dbruns
    Brief:

        convert ( -size 585x128 gradient: ) NewImage.png

    How do I change the above ImageMagick command so it takes the width and height from an existing image? I need it to remain a one-line command. Details: I'm trying to programmatically create an image reflection using ImageMagick. The effect I am looking for is similar to what you would see when looking at an object on the edge of a pool of water. There is a pretty good thread on what I am trying to do here, but the solution isn't exactly what I am looking for. Since I will be calling ImageMagick from a C#.Net application, I want to use one call without any temp files and return the image through stdout. So far I have this:

        convert OriginalImage.png ( OriginalImage.png -flip -blur 3x5 \
          -crop 100%%x30%%+0+0 -negate -evaluate multiply 0.3 \
          -negate ( -size 585x128 gradient: ) +matte -compose copy_opacity -composite ) -append NewImage.png

    This works OK but doesn't give me the exact fade I am looking for. Instead of a nice solid fade from top to bottom it is giving me a fade from top left to bottom right. I added the (-negate -evaluate multiply 0.3 -negate) section to lighten it up a bit more, since I wasn't getting the fade I wanted. I also don't want to have to hard-code the size of the image when creating the gradient ( -size 585x128 gradient: ). I'm also going to want to keep the original image's transparency if possible. To go to stdout I plan on replacing "NewImage.png" with "-".

    Read the article

  • How can I make the iPhone draw using another method?

    - by Tattat
    I have a view with a class called "drawingViewController", and I have the drawRect method:

        - (void)drawRect:(CGRect)rect {
            CGContextRef context = UIGraphicsGetCurrentContext();
            CGContextSetLineWidth(context, 2.0);
            CGContextSetStrokeColorWithColor(context, [UIColor redColor].CGColor);
            CGContextMoveToPoint(context, 0.0f, 0.0f);
            CGContextAddLineToPoint(context, 100.0f, 100.0f);
            CGContextStrokePath(context);
        }

    But I want to define some other drawing method, and it didn't work. How can I do this apart from calling the drawRect method? Thanks in advance.

    Read the article

  • background colour in opengl

    - by lego69
    I want to change the background color of the window after pressing a button, but my program doesn't work. Can somebody tell me why? Thanks in advance.

        int main(int argc, char* argv[])
        {
            glutInit(&argc, argv);
            glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
            glutInitWindowSize(800, 600);
            glutInitWindowPosition(300, 50);
            glutCreateWindow("GLRect");
            glClearColor(1.0f, 0.0f, 0.0f, 1.0f);   <---
            glutDisplayFunc(RenderScene);
            glutReshapeFunc(ChangeSize);
            glutMainLoop();
            system("pause");
            glClearColor(0.0f, 1.0f, 0.0f, 1.0f);   <---
            return 0;
        }
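
    For what it's worth, a minimal GLUT sketch in C of the usual pattern: glutMainLoop never returns, so the second glClearColor is never reached; instead, change the clear color from an input callback and ask GLUT to redraw. The choice of a keyboard callback (plain GLUT has no GUI buttons) is an assumption:

        #include <GL/glut.h>

        static void RenderScene(void)
        {
            glClear(GL_COLOR_BUFFER_BIT);   /* uses the current clear color */
            glFlush();
        }

        static void OnKey(unsigned char key, int x, int y)
        {
            if (key == ' ')                             /* space changes the background */
                glClearColor(0.0f, 1.0f, 0.0f, 1.0f);
            glutPostRedisplay();                        /* schedule a redraw */
        }

        int main(int argc, char* argv[])
        {
            glutInit(&argc, argv);
            glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
            glutInitWindowSize(800, 600);
            glutCreateWindow("GLRect");
            glClearColor(1.0f, 0.0f, 0.0f, 1.0f);       /* initial background: red */
            glutDisplayFunc(RenderScene);
            glutKeyboardFunc(OnKey);
            glutMainLoop();                             /* never returns */
            return 0;
        }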

    Read the article

  • Quartz2d vector images vs OpenGL vector description?

    - by tbarbe
    How big a difference is there between the description language of Quartz 2D and that of OpenGL ES? It seems they are similar in descriptive power... except that Quartz is mostly 2D and OpenGL is 3D out of the box (but can be made 2D-focused). Are the mappings from 2D Quartz to 2D OpenGL ES that different? I'm sure there must be specific features that are handled differently on one vs. the other... but enough to make a translator impractical? Does anyone with experience of both OpenGL and Quartz 2D have some insights?

    Read the article

  • OpenGL Video RAM Limits

    - by Tamir
    I have been trying to make a cross-platform 2D online game, and my maps are made of tiles. My tileset, which I render the tiles from, is quite huge. I wanted to know how I can disable hardware rendering, or at least make it more capable. Hence, I wanted to know what the basic limits of video RAM are; as far as I know, Direct3D has a texture size limit (by that I don't mean the power-of-two texture sizes).
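
    A minimal sketch in C of querying the relevant limit at runtime. This checks the driver's maximum texture dimension, which is the limit a huge tileset usually hits; OpenGL does not expose total video RAM portably. Assumes a current GL context:

        #include <stdio.h>
        #include <GL/gl.h>

        /* Query the largest texture dimension the driver accepts. A tileset
           wider or taller than this must be split into smaller atlases. */
        static GLint query_max_texture_size(void)
        {
            GLint max_size = 0;
            glGetIntegerv(GL_MAX_TEXTURE_SIZE, &max_size);
            printf("GL_MAX_TEXTURE_SIZE = %d\n", (int)max_size);
            return max_size;
        }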

    Read the article

  • What is the equivalent to Java's canvas object in C#?

    - by Winston
    I'm working on creating a basic application that will let a user draw (using a series of points) and I plan to do something with these points. If this were Java, I think I would probably use a canvas object and some Java2D calls to draw what I want. All the tutorials I've read on C#/Drawing involve writing your own paint method and adding it to the paint event for the form. However, I'm interested in having some traditional Form controls as well and I don't want to be drawing over them. So, is there a "Canvas" object where I can constrain what I'm drawing on? Also, is WinForms a poor choice given this use case? Would WPF have more features that would help enable me to do what I want? Or Silverlight?

    Read the article

  • Ray Generation Inconsistency

    - by Myx
    I have written code that generates a ray from the "eye" of the camera to the viewing plane some distance away from the camera's eye:

        R3Ray ConstructRayThroughPixel(...)
        {
            R3Point p;
            double increments_x = (lr.X() - ul.X())/(double)width;
            double increments_y = (ul.Y() - lr.Y())/(double)height;
            p.SetX( ul.X() + ((double)i_pos+0.5)*increments_x );
            p.SetY( lr.Y() + ((double)j_pos+0.5)*increments_y );
            p.SetZ( lr.Z() );
            R3Vector v = p-camera_pos;
            R3Ray new_ray(camera_pos,v);
            return new_ray;
        }

    ul is the upper-left corner of the viewing plane and lr is the lower-right corner of the viewing plane. They are defined as follows:

        R3Point org = scene->camera.eye + scene->camera.towards * radius;
        R3Vector dx = scene->camera.right * radius * tan(scene->camera.xfov);
        R3Vector dy = scene->camera.up * radius * tan(scene->camera.yfov);
        R3Point lr = org + dx - dy;
        R3Point ul = org - dx + dy;

    Here, org is the center of the viewing plane, radius is the distance between the viewing plane and the camera eye, and dx and dy are the displacements in the x and y directions from the center of the viewing plane. The ConstructRayThroughPixel(...) function works perfectly for a camera whose eye is at (0,0,0). However, when the camera is at some different position, not all needed rays are produced for the image. Any suggestions what could be going wrong? Maybe something wrong with my equations? Thanks for the help.
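
    A minimal sketch in plain C of one way to build the pixel point directly from the camera basis (the org, dx, dy above) instead of mixing world X/Y coordinates with a fixed Z, so it keeps working when the eye is not at the origin. The vec3 type and the [-1, 1] normalized pixel coordinates are assumptions standing in for the R3 classes:

        typedef struct { double x, y, z; } vec3;

        static vec3 vadd(vec3 a, vec3 b) { vec3 r = { a.x+b.x, a.y+b.y, a.z+b.z }; return r; }
        static vec3 vscale(vec3 a, double s) { vec3 r = { a.x*s, a.y*s, a.z*s }; return r; }

        /* Point on the viewing plane for pixel (i, j):
           p = org + u*dx + v*dy, with u, v in [-1, 1].
           The ray direction is then p - eye. */
        static vec3 point_through_pixel(vec3 org, vec3 dx, vec3 dy,
                                        int i, int j, int width, int height)
        {
            double u = 2.0 * ((i + 0.5) / (double)width)  - 1.0;  /* -1 left,  +1 right */
            double v = 1.0 - 2.0 * ((j + 0.5) / (double)height); /* +1 top,   -1 bottom */
            return vadd(org, vadd(vscale(dx, u), vscale(dy, v)));
        }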

    Read the article
