Search Results

Search found 5873 results on 235 pages for 'raster graphics'.

Page 67 of 235

  • iPhone SDK Zoom and refresh PDF with Quartz

    - by Ben
    Looking at the QuartzDemo sample application, I love the speed of the PDF rendering using Quartz alone (that is, without using UIWebView). However, when I zoom in on the PDF it doesn't seem to become any sharper, the way it does in a PDF viewer. Is there something I can change to get the same effect when zooming in and out with multitouch? Could I manipulate the PDF transformation matrix, for example? Thanks a bunch. --Ben
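    A minimal sketch of one possible approach, assuming the page is simply re-rendered at the current zoom factor in the view's drawRect: once a pinch ends (the pdfPage and zoomScale properties are illustrative, not from the original post, and the view is assumed to be resized to the zoomed page size with an identity transform):

        // Sketch: redraw the CGPDFPage at the current zoom so it stays sharp
        // instead of letting UIKit scale up the old raster.
        - (void)drawRect:(CGRect)rect {
            CGContextRef ctx = UIGraphicsGetCurrentContext();
            CGContextSetGrayFillColor(ctx, 1.0, 1.0);
            CGContextFillRect(ctx, self.bounds);

            // Flip into PDF coordinates, then scale by the zoom factor before drawing.
            CGContextTranslateCTM(ctx, 0.0, self.bounds.size.height);
            CGContextScaleCTM(ctx, self.zoomScale, -self.zoomScale);
            CGContextDrawPDFPage(ctx, self.pdfPage);
        }

    Calling [self setNeedsDisplay] from scrollViewDidEndZooming:withView:atScale: (or whatever drives the zoom) would then trigger the sharper redraw.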

    Read the article

  • UIView using Quartz rendering engine to display PDF has poor quality compared to original.

    - by Josh Kerr
    I'm using the Quartz rendering engine to display a PDF file on the iPhone with the 3.0 SDK. The result is a bit blurry compared to the same PDF shown in a UIWebView. How can I improve the quality in the UIView so that I don't need to rewrite my app to use UIWebView? I'm using code very close to the example Apple provides:

        CGContextRef gc = UIGraphicsGetCurrentContext();
        CGContextSaveGState(gc);
        CGContextTranslateCTM(gc, 0.0, rect.size.height);
        CGContextScaleCTM(gc, 1.0, -1.0);
        CGAffineTransform m = CGPDFPageGetDrawingTransform(page, kCGPDFCropBox, rect, 0, false);
        CGContextConcatCTM(gc, m);
        CGContextSetGrayFillColor(gc, 1.0, 1.0);
        CGContextFillRect(gc, rect);
        CGContextDrawPDFPage(gc, page);
        CGContextRestoreGState(gc);

    Apple's tutorial code actually results in a blurry PDF view as well. If you drop the same PDF into a UIWebView you'll see it is actually sharper. Anyone have any ideas? This one issue is holding a two-year development project back from launching. :(

    Read the article

  • CGBitmapContextCreate issue while trying to resize images

    - by Jeff
    Hello! I'm running into an issue when I try to create a CGContextRef while attempting to resize some images. Here are the errors:

        Sun May 16 20:07:18 new-host.home app[7406] <Error>: Unable to create bitmap delegate device
        Sun May 16 20:07:18 new-host.home app[7406] <Error>: createBitmapContext: failed to create delegate.

    My code looks like this:

        - (UIImage*) resizeImage:(UIImage*)originalImage withSize:(CGSize)newSize {
            CGSize originalSize = originalImage.size;
            CGFloat originalAspectRatio = originalSize.width / originalSize.height;
            CGImageRef cgImage = nil;
            int bitmapWidth = newSize.width;
            int bitmapHeight = newSize.height;
            CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
            CGContextRef context = CGBitmapContextCreate(nil, bitmapWidth, bitmapHeight, 8,
                                                         bitmapWidth * 4, colorspace,
                                                         kCGImageAlphaPremultipliedLast);
            if (context != nil) {
                // Flip the coordinate system
                //CGContextScaleCTM(context, 1.0, -1.0);
                //CGContextTranslateCTM(context, 0.0, -bitmapHeight);

                // Black background
                CGRect rect = CGRectMake(0, 0, bitmapWidth, bitmapHeight);
                CGContextSetRGBFillColor(context, 0, 0, 0, 1);
                CGContextFillRect(context, rect);

                // Resize box to maintain aspect ratio
                if (originalAspectRatio < 1.0) {
                    rect.origin.y += (rect.size.height - rect.size.width / originalAspectRatio) * 0.5;
                    rect.size.height = rect.size.width / originalAspectRatio;
                } else {
                    rect.origin.x += (rect.size.width - rect.size.height * originalAspectRatio) * 0.5;
                    rect.size.width = rect.size.height * originalAspectRatio;
                }
                CGContextSetInterpolationQuality(context, kCGInterpolationHigh);

                // Draw image
                CGContextDrawImage(context, rect, [originalImage CGImage]);

                // Get image
                cgImage = CGBitmapContextCreateImage(context);

                // Release context
                CGContextRelease(context);
            }
            CGColorSpaceRelease(colorspace);

            UIImage *result = [UIImage imageWithCGImage:cgImage];
            CGImageRelease(cgImage);
            return result;
        }
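    One thing worth ruling out first (an assumption, not a confirmed diagnosis): CGBitmapContextCreate can fail with errors like this when the requested width or height ends up as zero, which happens if newSize is never filled in by the caller. A minimal guard sketch:

        // Sketch: validate the target size before creating the bitmap context.
        int bitmapWidth  = (int)newSize.width;
        int bitmapHeight = (int)newSize.height;
        if (bitmapWidth <= 0 || bitmapHeight <= 0) {
            NSLog(@"resizeImage: invalid target size %d x %d", bitmapWidth, bitmapHeight);
            return originalImage;   // nothing sensible to draw into
        }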

    Read the article

  • CSG operations on implicit surfaces with marching cubes [SOLVED]

    - by Mads Elvheim
    I render isosurfaces with marching cubes (or perhaps marching squares, as this is 2D) and I want to do set operations like set difference, intersection and union. I thought this was easy to implement, by simply choosing between two vertex scalars from two different implicit surfaces, but it is not. For my initial testing, I tried two circles and the set operation difference, i.e. A - B. One circle is moving and the other one is stationary. Here's the approach I tried when picking vertex scalars and when classifying corner vertices as inside or outside. The code is written in C++. OpenGL is used for rendering, but that's not important. Normal rendering without any CSG operations does give the expected result.

        void march(const vec2& cmin, //min x and y for the grid cell
                   const vec2& cmax, //max x and y for the grid cell
                   std::vector<vec2>& tri,
                   float iso,
                   float (*cmp1)(const vec2&), //distance from stationary circle
                   float (*cmp2)(const vec2&)  //distance from moving circle
                   )
        {
            unsigned int squareindex = 0;
            float scalar[4];
            vec2 verts[8];
            /* initial setup of the grid cell */
            verts[0] = vec2(cmax.x, cmax.y);
            verts[2] = vec2(cmin.x, cmax.y);
            verts[4] = vec2(cmin.x, cmin.y);
            verts[6] = vec2(cmax.x, cmin.y);

            float s1, s2;
            /**********************************
             ****** For-loop of interest ******
             ****** Set difference between ****
             ****** two implicit surfaces *****
             **********************************/
            for(int i=0, j=0; i<4; ++i, j+=2){
                s1 = cmp1(verts[j]);
                s2 = cmp2(verts[j]);

                if((s1 < iso)){                //if inside circle1
                    if((s2 < iso)){            //if inside circle2
                        scalar[i] = s2;        //then set the scalar to the moving circle
                    } else {
                        scalar[i] = s1;        //only inside circle1
                        squareindex |= (1<<i); //mark as inside
                    }
                } else {
                    scalar[i] = s1;            //inside neither circle
                }
            }

            if(squareindex == 0)
                return;

            /* Usual interpolation between edge points
               to compute the new intersection points */
            verts[1] = mix(iso, verts[0], verts[2], scalar[0], scalar[1]);
            verts[3] = mix(iso, verts[2], verts[4], scalar[1], scalar[2]);
            verts[5] = mix(iso, verts[4], verts[6], scalar[2], scalar[3]);
            verts[7] = mix(iso, verts[6], verts[0], scalar[3], scalar[0]);

            for(int i=0; i<10; ++i){ //10 = maximum 3 triangles, + one end token
                int index = triTable[squareindex][i]; //look up our indices for triangulation
                if(index == -1)
                    break;
                tri.push_back(verts[index]);
            }
        }

    This gives me weird jaggies; it looks like the CSG operation is done without interpolation, and it just "discards" the whole triangle. Do I need to interpolate in some other way, or combine the vertex scalar values? I'd love some help with this. A full testcase can be downloaded HERE. EDIT: Basically, my implementation of marching squares works fine. It is my scalar field which is broken, and I wonder what the correct way would look like. Preferably I'm looking for a general approach to implement the three set operations I discussed above, for the usual primitives (circle, rectangle/square, plane).
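    For signed-distance style scalar fields, the usual fix is to combine the two fields into a single scalar per corner with min/max before marching, rather than switching between s1 and s2 per corner. A minimal sketch in plain C (names are illustrative; it assumes the fields are arranged so that "inside" means a value below iso, with iso treated as 0):

        #include <math.h>

        /* Sketch: combine two scalar fields into one, then run marching squares
           on the combined field. dA and dB are the two corner scalars, negative
           inside the shape and positive outside. */
        float csg_union(float dA, float dB)        { return fminf(dA, dB);  }
        float csg_intersection(float dA, float dB) { return fmaxf(dA, dB);  }
        float csg_difference(float dA, float dB)   { return fmaxf(dA, -dB); } /* A minus B */

    Each corner then gets one combined scalar, so the edge interpolation happens on the combined field and whole triangles no longer get discarded at the boundary between the two surfaces.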

    Read the article

  • Simple iPhone drawing app with Quartz 2D

    - by Mr guy 4
    I am making a simple iPhone drawing program as a personal side project. I capture touch events in a subclassed UIView and render the actual content to a separate CGLayer. After each render, I call [self setNeedsDisplay], and in the drawRect: method I draw the CGLayer to the screen context. This all works great and performs decently for drawing rectangles. However, I just want a simple "freehand" mode like a lot of other iPhone applications have. The way I thought to do this was to create a CGMutablePath and simply:

        CGMutablePathRef path;

        - (void)touchBegan {
            path = CGPathCreateMutable();
        }

        - (void)touchMoved {
            CGPathMoveToPoint(path, NULL, x, y);
            CGPathAddLineToPoint(path, NULL, x, y);
        }

        - (void)drawRect:(CGRect)rect {
            CGContextRef context = UIGraphicsGetCurrentContext();
            CGContextBeginPath(context);
            CGContextAddPath(context, path);
            CGContextStrokePath(context);
        }

    However, after drawing for more than about a second, performance degrades miserably. I would just draw each line into the off-screen CGLayer, if it were not for variable opacity! The less-than-100% opacity causes dots to be left on the screen at the points where the lines connect. I have looked at CGContextSetBlendMode() but alas I cannot find an answer. Can anyone point me in the right direction? Other iPhone apps are able to do this with very good efficiency.
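    A minimal sketch of one common workaround, assuming each stroke is committed to the off-screen CGLayer exactly once when the touch ends, so the live path never grows beyond the current stroke (drawingLayer, strokeOpacity and currentPath are illustrative names, not from the original code):

        // Sketch: stroke the accumulated path into the off-screen layer once per
        // touch sequence; the single stroke applies the opacity once, so no
        // darker dots appear where segments overlap.
        - (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
            CGContextRef layerCtx = CGLayerGetContext(self.drawingLayer);
            CGContextSetAlpha(layerCtx, self.strokeOpacity);
            CGContextSetLineCap(layerCtx, kCGLineCapRound);
            CGContextSetLineJoin(layerCtx, kCGLineJoinRound);
            CGContextAddPath(layerCtx, self.currentPath);
            CGContextStrokePath(layerCtx);

            CGPathRelease(self.currentPath);
            self.currentPath = NULL;
            [self setNeedsDisplay];
        }

    While the finger is still down, drawRect: can blit the layer and stroke only currentPath on top as a live preview; because the path is reset on every touch-up, it stays short and the per-frame cost stays roughly constant.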

    Read the article

  • Rotating a UIButton with a custom image (animation)

    - by Tiago
    Hi, I'm trying to rotate a button that I've connected to the controller from Interface Builder. I've set its image right from Interface Builder. I'm using this code in the method that runs when I tap it:

        [UIView beginAnimations:nil context:NULL];
        [UIView setAnimationDuration:2.0];
        [UIView setAnimationRepeatCount:5];
        updateButton.transform = CGAffineTransformMakeRotation(M_PI);
        [UIView commitAnimations];

    But this doesn't do anything. Can this be done, or should I create the button programmatically in order to get it to rotate?
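    If the UIView animation block keeps being ignored, one alternative worth sketching (an option, not necessarily the cause of the problem) is to animate the button's layer directly with Core Animation, which also works for buttons wired up in Interface Builder:

        #import <QuartzCore/QuartzCore.h>

        // Sketch: spin the button's layer, half a turn per repetition, five times.
        CABasicAnimation *spin = [CABasicAnimation animationWithKeyPath:@"transform.rotation.z"];
        spin.fromValue   = [NSNumber numberWithFloat:0.0f];
        spin.toValue     = [NSNumber numberWithFloat:M_PI];
        spin.duration    = 2.0;
        spin.repeatCount = 5;
        [updateButton.layer addAnimation:spin forKey:@"spin"];

    Note this animates the presentation only; the button's transform property is unchanged once the animation finishes.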

    Read the article

  • Load NSImage into QPixmap or QImage

    - by Thomi
    I have an NSImage pointer from a platform SDK, and I need to load it into Qt's QImage class. To make things easier, I can create a QImage from a CGImageRef by using QPixmap as an intermediate format, like this:

        CGImageRef myImage = // ... get a CGImageRef somehow.
        QImage img = QPixmap::fromMacCGImageRef(myImage).toImage();

    However, I cannot find a way to convert from an NSImage to a CGImageRef. Several other people have had the same problem, but I have yet to find a solution. There is the CGImageForProposedRect method, but I can't seem to get it to work. I'm currently trying this (img is my NSImage pointer):

        CGImageRef ir = [img CGImageForProposedRect:0:0:0];

    Any ideas?
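    For reference, a hedged sketch of what the call is expected to look like on Mac OS X 10.6 and later: the three arguments are a proposed rect, a reference graphics context and a hints dictionary, and all of them may be NULL/nil.

        // Sketch: NSImage -> CGImageRef, then hand the result to Qt as before.
        NSRect proposedRect = NSMakeRect(0, 0, [img size].width, [img size].height);
        CGImageRef cgRef = [img CGImageForProposedRect:&proposedRect
                                               context:nil
                                                 hints:nil];
        QImage qimg = QPixmap::fromMacCGImageRef(cgRef).toImage();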

    Read the article

  • iPhone SDK: Rendering a CGLayer into an image object

    - by codemercenary
    Hi all, I am trying to add a curved border around an image downloaded and displayed in a UITableViewCell. In the large view (i.e. one image on the screen) I have the following:

        productImageView = [[UIImageView alloc] initWithImage:[UIImage imageNamed:product.image]];
        [productImageView setAlpha:0.4];
        productImageView.frame = CGRectMake(10.0, 30.0, 128.0, 128.0);
        CALayer *roundedlayer = [productImageView layer];
        [roundedlayer setMasksToBounds:YES];
        [roundedlayer setCornerRadius:7.0];
        [roundedlayer setBorderWidth:2.0];
        [roundedlayer setBorderColor:[[UIColor darkGrayColor] CGColor]];
        [self addSubview:productImageView];

    In the table view cell, to get it to scroll fast, the image needs to be drawn in the drawRect: method of a UIView which is then added to a custom cell. So in drawRect::

        - (void)drawRect:(CGRect)rect {
            ...
            point = CGPointMake(boundsX + LEFT_COLUMN_OFFSET, UPPER_ROW_TOP);
            //CALayer *roundedlayer = [productImageView layer];
            //[roundedlayer setMasksToBounds:YES];
            //[roundedlayer setCornerRadius:7.0];
            //[roundedlayer setBorderWidth:2.0];
            //[roundedlayer setBorderColor:[[UIColor darkGrayColor] CGColor]];
            //[productImageView drawRect:CGRectMake(boundsX + LEFT_COLUMN_OFFSET, UPPER_ROW_TOP, IMAGE_WIDTH, IMAGE_HEIGHT)];
            [productImageView.image drawInRect:CGRectMake(boundsX + LEFT_COLUMN_OFFSET, UPPER_ROW_TOP, IMAGE_WIDTH, IMAGE_HEIGHT)];
            ...

    So this works well, but if I remove the comments and try to show the rounded CALayer, the scrolling gets really slow. To fix this I suppose I would have to render this image context into a different image object, store it in an array, and then set this image as something like:

        productImageView.image = (UIImage*)[imageArray objectAtIndex:indexPath.row];

    My question is: "How do I render this layer into an image?" TIA.
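    A minimal sketch of one way to do exactly that: apply the rounded-corner styling to the image view once, render its layer into a bitmap context with renderInContext:, and cache the resulting UIImage so the cell only has to blit a plain image while scrolling (the helper name is illustrative):

        #import <QuartzCore/QuartzCore.h>

        // Sketch: flatten a styled view's layer (corner radius, border, etc.)
        // into a UIImage that can be cached and drawn cheaply in drawRect:.
        - (UIImage *)flattenedImageForView:(UIView *)styledView {
            UIGraphicsBeginImageContext(styledView.bounds.size);
            [styledView.layer renderInContext:UIGraphicsGetCurrentContext()];
            UIImage *flattened = UIGraphicsGetImageFromCurrentImageContext();
            UIGraphicsEndImageContext();
            return flattened;
        }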

    Read the article

  • How do you draw like a Crayon?

    - by Simucal
    Crayon Physics Deluxe is a commercial game that came out recently. Watch the video at the main link to get an idea of what I'm talking about. It allows you to draw shapes and have them react with proper physics. The goal is to move a ball to a star across the screen using contraptions and shapes you build. While the game is basically a wrapper for the popular Box2D physics engine, it has one feature whose implementation I'm curious about. Its drawing looks very much like a crayon: you can see the texture of the crayon, and as it draws it varies in thickness and darkness just like an actual crayon drawing would. The background texture is freely available here. (Close-up of crayon drawing - note the varying darkness.) What kind of algorithm would be used to render those lines in a way that looks like a crayon? Is it a simple texture applied with random thickness and darkness, or is there something more going on?
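    One common way to get that look (an assumption about the technique, not how Crayon Physics actually does it) is to stamp a small crayon-textured brush image along each stroke segment with jittered size and alpha; the jitter gives the varying thickness and darkness. A rough Core Graphics sketch, with the brush image and constants as placeholders:

        // Sketch: stamp a textured brush along a segment with random darkness and
        // thickness so the stroke reads as crayon rather than a flat line.
        void StampCrayonSegment(CGContextRef ctx, CGImageRef brush, CGPoint from, CGPoint to) {
            CGFloat dist = hypotf(to.x - from.x, to.y - from.y);
            if (dist < 1.0f) return;
            CGFloat step = 2.0f;                                   // spacing between stamps
            for (CGFloat t = 0; t <= dist; t += step) {
                CGFloat x = from.x + (to.x - from.x) * (t / dist);
                CGFloat y = from.y + (to.y - from.y) * (t / dist);
                CGFloat size  = 6.0f + 3.0f * (arc4random() % 100) / 100.0f;   // thickness jitter
                CGFloat alpha = 0.4f + 0.4f * (arc4random() % 100) / 100.0f;   // darkness jitter
                CGContextSetAlpha(ctx, alpha);
                CGContextDrawImage(ctx, CGRectMake(x - size / 2, y - size / 2, size, size), brush);
            }
        }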

    Read the article

  • JOGL Double Buffering

    - by Bar
    What is the proper way to implement double buffering in JOGL (Java OpenGL)? I am trying to do it with the following code:

        ...
        /** Creating the canvas. */
        GLCapabilities capabilities = new GLCapabilities();
        capabilities.setDoubleBuffered(true);
        GLCanvas canvas = new GLCanvas(capabilities);
        ...
        /** display(...), which draws a white rectangle on a black background. */
        public void display(GLAutoDrawable drawable) {
            drawable.swapBuffers();
            gl = drawable.getGL();
            gl.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT);
            gl.glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
            gl.glColor3f(1.0f, 1.0f, 1.0f);
            gl.glBegin(GL.GL_POLYGON);
            gl.glVertex2f(-0.5f, -0.5f);
            gl.glVertex2f(-0.5f, 0.5f);
            gl.glVertex2f(0.5f, 0.5f);
            gl.glVertex2f(0.5f, -0.5f);
            gl.glEnd();
        }
        ...
        /** Other callbacks are empty. */

    Questions: When I resize the window, I usually get flickering; as I see it, I have a mistake in my double buffering implementation. Also, where should I place the swapBuffers call - before or after (as many sources say) the drawing? As you can see, I currently call drawable.swapBuffers() before drawing the rectangle; otherwise I get noise after a resize. So what is the appropriate way to do this? Including or omitting the line capabilities.setDoubleBuffered(true) does not have any effect.

    Read the article

  • perl: tk: a way/widget that allows pixel level control over the output

    - by chhh
    I want something like a canvas, but where I'd be able to manipulate pixels easily in addition to all the provided geometries that can be drawn on a canvas. Is it possible to embed something like GD::Image into a canvas? Then maybe I could make the image transparent and set some pixels in it (GD::Image->setPixel()), positioning it over the canvas? PS: this doesn't necessarily have to be Perl, as there seem to be bindings for these libs in most scripting (and not only scripting) languages.

    Read the article

  • Rotating an Image in Silverlight without cropping

    - by Tim Saunders
    I am currently working on a simple Silverlight app that will allow people to upload an image, crop, resize and rotate it, and then load it via a web service to a CMS. Cropping and resizing are done; however, rotation is causing some problems. The image gets cropped and is off-centre after the rotation.

        WriteableBitmap wb = new WriteableBitmap(destWidth, destHeight);

        RotateTransform rt = new RotateTransform();
        rt.Angle = 90;
        rt.CenterX = width / 2;
        rt.CenterY = height / 2;

        // Draw to the WriteableBitmap
        Image tempImage2 = new Image();
        tempImage2.Width = width;
        tempImage2.Height = height;
        tempImage2.Source = rawImage;
        wb.Render(tempImage2, rt);
        wb.Invalidate();
        rawImage = wb;

        message.Text = "h:" + rawImage.PixelHeight.ToString();
        message.Text += ":w:" + rawImage.PixelWidth.ToString();

        // Finally set the Image back
        MyImage.Source = wb;
        MyImage.Width = destWidth;
        MyImage.Height = destHeight;

    The code above only needs to rotate by 90° at this time, so I'm just setting destWidth and destHeight to the height and width of the original image.

    Read the article

  • Any software transforming broken lines into curves?

    - by brilliant
    Hello, do you know of any software that would help me transform a broken line into a curved line? For example, I have an octagon or a heptagon and I want it to be transformed into something resembling a circle. If you know of such software, please let me know. Thank you! Update A: Here is an image from the tutorial given to me by Jamie Keeling (right now it's the first answer below). At least the picture there represents what I want. In that tutorial this process is called "flattening paths". I will try to put the image right here, but if it doesn't get displayed, you can find it here: http://msdn.microsoft.com/en-us/library/ms536364%28v=VS.85%29.aspx The red line in the picture is what I would want to submit, and the blue line is what I would want to get in the end.

    Read the article

  • CGContextSetShadow() - shadow direction reversed between iOS 3.0 and 4.0?

    - by Pascal
    I've been using CGContextSetShadowWithColor() in my Quartz drawing code on the iPhone to generate the "stomped in" look for text and other things (in drawRect: and drawLayer:inContext:). It worked perfectly, but when running the exact same code against iOS 3.2 and now iOS 4.0 I noticed that the shadows are all in the opposite direction. E.g. in the following code I set a black shadow to be 1 pixel above the text, which gave it a "pressed in" look; now this shadow is 1px below the text, giving it a standard drop shadow.

        ...
        CGContextSetShadowWithColor(context, CGSizeMake(0.f, 1.f), 0.5f, shadowColor);
        CGContextShowGlyphsAtPoint(context, origin.x, origin.y, glyphs, length);
        ...

    Now I don't know whether I am (or have been) doing something wrong, or whether there has been a change to the handling of this setting. I haven't applied any transformation that would explain this, at least not knowingly. I've flipped the text matrix in one instance but not in others, and the behavior is consistent. Plus, I wasn't able to find anything about this in the SDK Release Notes, so it looks like it's probably me. What might be the issue?

    Read the article

  • Pixel Perfect Collision Detection in HTML5 Canvas

    - by Armin Ronacher
    Hi, I want to check for collisions between two sprites in an HTML5 canvas. For the sake of the discussion, let's assume that both sprites are IMG objects and that a collision means the alpha channel is not 0. Both sprites can be rotated around the object's center but have no other transformation, in case that makes this any easier. The obvious solution I came up with is this: 1) calculate the transformation matrix for both; 2) figure out a rough estimate of the area where the code should test (like the offset of both plus calculated extra space for the rotation); 3) for all the pixels in the intersecting rectangle, transform the coordinate and test the image at the calculated position (rounded to the nearest neighbor) for the alpha channel, then abort on the first hit. The problems I see with this are that a) there are no matrix classes built into JavaScript, which means I have to do the math myself, which could be quite slow, and b) I have to test for collisions every frame, which makes this pretty expensive. Furthermore I have to replicate something I already have to do when drawing (or what canvas does for me: setting up the matrices). I wonder if I'm missing anything here and whether there is an easier solution for collision detection.

    Read the article

  • "Beveled" Shapes in Quartz 2D

    - by Shaggy Frog
    I'm familiar with some of the basics of Quartz 2D drawing, like drawing basic shapes and gradients and so on, but I'm not sure how to draw a shape with a "beveled" look, like this: essentially we've got a shine on one corner, and maybe some shading in the opposite corner - I think; I didn't make this image, although I'd like to be able to approximate it. Any ideas? This is on the iPhone, and I'd like to use built-in frameworks and avoid any external libraries if at all possible.
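    A hedged sketch of one way to fake the effect with nothing but built-in Quartz calls: clip to the shape's path and fill it with a diagonal gradient that runs from a bright "shine" corner to a shaded one (the rect, colors and ellipse path are all illustrative):

        // Sketch: approximate a bevel by clipping to the shape and drawing a
        // diagonal light-to-dark gradient inside it.
        CGContextRef ctx = UIGraphicsGetCurrentContext();
        CGContextSaveGState(ctx);
        CGContextAddEllipseInRect(ctx, shapeRect);      // whatever path the shape uses
        CGContextClip(ctx);

        CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();
        CGFloat comps[8] = { 1.0, 1.0, 1.0, 0.9,        // bright corner ("shine")
                             0.2, 0.2, 0.2, 0.9 };      // shaded opposite corner
        CGGradientRef grad = CGGradientCreateWithColorComponents(rgb, comps, NULL, 2);
        CGContextDrawLinearGradient(ctx, grad,
                                    CGPointMake(CGRectGetMinX(shapeRect), CGRectGetMinY(shapeRect)),
                                    CGPointMake(CGRectGetMaxX(shapeRect), CGRectGetMaxY(shapeRect)),
                                    0);
        CGGradientRelease(grad);
        CGColorSpaceRelease(rgb);
        CGContextRestoreGState(ctx);

    A second, narrower gradient stroked just inside the edge (light on one side, dark on the other) pushes it further toward a bevel; the same clip-and-gradient idea applies.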

    Read the article

  • CGBitmapContextCreate: unsupported parameter combination

    - by tarmes
    I'm getting this error when creating a bitmap context:

        CGBitmapContextCreate: unsupported parameter combination: 8 integer bits/component; 24 bits/pixel; 3-component color space; kCGImageAlphaNone; 7936 bytes/row.

    Here's the code (note that the context is based on the parameters of an existing CGImage):

        context = CGBitmapContextCreate(NULL,
                                        (int)pi.bufferSizeRequired.width,
                                        (int)pi.bufferSizeRequired.height,
                                        CGImageGetBitsPerComponent(imageRef),
                                        0,
                                        CGImageGetColorSpace(imageRef),
                                        CGImageGetBitmapInfo(imageRef));

    The width is 2626 and the height is 3981. I'm leaving bytesPerRow at zero so that it gets calculated automatically, and it has chosen 7936 of its own accord. So where on Earth is the inconsistency? It's driving me nuts.
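    For what it's worth, Quartz only supports a limited set of pixel formats for bitmap contexts, and 8 bits/component RGB with kCGImageAlphaNone (i.e. 24 bits/pixel, exactly what the source image reports) is not one of them. A sketch of the usual workaround, asking for a supported 32-bit layout instead of copying the source image's bitmap info:

        // Sketch: request a supported 32-bit layout rather than reusing
        // CGImageGetBitmapInfo() from a 24-bit source image.
        CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();
        context = CGBitmapContextCreate(NULL,
                                        (int)pi.bufferSizeRequired.width,
                                        (int)pi.bufferSizeRequired.height,
                                        8,                              // bits per component
                                        0,                              // let Quartz pick bytes/row
                                        rgb,
                                        kCGImageAlphaNoneSkipLast);     // or kCGImageAlphaPremultipliedLast
        CGColorSpaceRelease(rgb);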

    Read the article

  • glTexImage2D behavior on iPhone and other OpenGL ES platforms

    - by spurserh
    Hello, I am doing some work which involves drawing video frames in real time in OpenGL ES. Right now I am using glTexImage2D to transfer the data, in the absence of Pixel Buffer Objects and the like. I suspect that the use of glTexImage2D with one or two frames of look-ahead, that is, using several textures so that the glTexImage2D call can be initiated a frame or two ahead, will allow for sufficient parallelism to play in real time if the system is capable of it at all. Is my assumption true that the driver will handle the actual data transfer to the hardware asynchronously after glTexImage2D returns, assuming I don't try to use the texture or call glFinish/glFlush? Is there a better way to do this with OpenGL ES? Thank you very much, Sean
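    A minimal sketch of the round-robin idea described in the question (plain OpenGL ES 1.x calls; the texture count and helper names are illustrative), so the texture being uploaded this frame is never the one being sampled this frame:

        #include <OpenGLES/ES1/gl.h>

        #define NUM_FRAME_TEXTURES 3            /* one or two frames of look-ahead */

        static GLuint frameTextures[NUM_FRAME_TEXTURES];
        static int uploadIndex = 0;

        /* Called once: create the texture names and allocate their storage. */
        void InitFrameTextures(int width, int height) {
            glGenTextures(NUM_FRAME_TEXTURES, frameTextures);
            for (int i = 0; i < NUM_FRAME_TEXTURES; ++i) {
                glBindTexture(GL_TEXTURE_2D, frameTextures[i]);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
                glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
            }
        }

        /* Called per video frame: upload into one texture, draw with an older one. */
        GLuint UploadFrameAndPickTexture(int width, int height, const void *pixels) {
            glBindTexture(GL_TEXTURE_2D, frameTextures[uploadIndex]);
            glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                            GL_RGBA, GL_UNSIGNED_BYTE, pixels);
            GLuint drawTexture = frameTextures[(uploadIndex + 1) % NUM_FRAME_TEXTURES];
            uploadIndex = (uploadIndex + 1) % NUM_FRAME_TEXTURES;
            return drawTexture;
        }

    Whether the upload actually overlaps with rendering is up to the driver, as the question suspects, but structuring the code this way at least gives it the chance.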

    Read the article

  • Design crowd sourcing

    - by CVertex
    I'm looking to get a logo designed, but all my designer friends/colleagues are pretty busy. Since I've never done it before, I thought it would be interesting to crowd source the design. Does anyone know of any good design crowd sourcing sites?

    Read the article

  • Emulating old-school sprite flickering (theory and concept)

    - by Jeffrey Kern
    I'm trying to develop an old-school NES-style video game, with sprite flickering and graphical slowdown. I've been thinking about what type of logic I should use to enable such effects. I have to consider the following restrictions if I want to go old-school NES style:

    - No more than 64 sprites on the screen at a time
    - No more than 8 sprites per scanline, i.e. for each line on the Y axis
    - If there is too much action going on the screen, the system freezes the image for a frame to let the processor catch up with the action

    From what I've read, if there were more than 64 sprites on the screen, the developer would only draw high-priority sprites while ignoring low-priority ones. They could also alternate, drawing each even-numbered sprite on opposite frames from the odd-numbered ones. The scanline issue is interesting. From my testing, it is impossible to get good speed on the Xbox 360 XNA framework by drawing sprites pixel by pixel, like the NES did. This is why in old-school games, if there were too many sprites on a single line, some would appear as if they were cut in half. For all purposes of this project, I'm making scanlines 8 pixels tall and grouping the sprites per scanline by their Y position. So, dumbed down, I need to come up with a solution that:

    - allows 64 sprites on screen at once
    - allows 8 sprites per "scanline"
    - can draw sprites based on priority
    - can alternate between sprites per frame
    - emulates slowdown

    Here is my current theory. First and foremost, a fundamental idea I came up with is addressing sprite priority. Assuming values between 0-255 (0 being low), I can assign sprites priority levels, for instance: 0 to 63 being low, 64 to 127 being medium, 128 to 191 being high, and 192 to 255 being maximum. Within my data files, I can assign each sprite a certain priority level. When the parent object is created, the sprite would randomly get assigned a number within its designated range. I would then draw sprites in order from high to low, with the end goal of drawing every sprite. Now, when a sprite gets drawn in a frame, I would randomly generate it a new priority value within its initial priority level. However, if a sprite doesn't get drawn in a frame, I could add 32 to its current priority. For example, if the system can only draw sprites down to a priority level of 135, a sprite with an initial priority of 45 could then be drawn after 3 frames of not being drawn (45+32+32+32=141). This would, in theory, allow sprites to alternate frames, allow priority levels, and limit sprites to 64 per screen.

    Now, the interesting question is: how do I limit sprites to only 8 per scanline? Since I'm sorting the sprites from high priority to low priority, I can iterate through the list until I've drawn 64 sprites. However, I shouldn't just take the first 64 sprites in the list. Before drawing each sprite, I could check how many sprites were already drawn in its respective scanline via counter variables. For example: Y values between 0 and 7 belong to scanline 0 (scanlineCount[0] = 0), Y values between 8 and 15 belong to scanline 1 (scanlineCount[1] = 0), etc. I would reset the values per scanline for every frame drawn. While going down the sprite list, I add 1 to the scanline's respective counter if a sprite gets drawn in that scanline. If it equals 8, I don't draw that sprite and go to the sprite with the next lowest priority.

    SLOWDOWN: The last thing I need to do is emulate slowdown. My initial idea was that if I'm drawing 64 sprites per frame and there are still more sprites that need to be drawn, I could pause the rendering by 16 ms or so. However, in the NES games I've played, sometimes there's slowdown without any sprite flickering going on, whereas the game moves beautifully even when there is some sprite flickering. Perhaps give a value to each object that uses sprites on the screen (like the priority values above), and if the combined values of all objects with sprites surpass a threshold, introduce the sprite flickering?

    IN CONCLUSION: Does everything I wrote actually sound legitimate and workable, or is it a pipe dream? What improvements can you all possibly think of for this game programming theory of mine?
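    A minimal C-style sketch of the per-scanline cap described above (the structure, priority bump and screen height are illustrative and follow the scheme in the post, not a known NES implementation):

        #define MAX_SPRITES_PER_SCANLINE 8
        #define MAX_SPRITES_ON_SCREEN    64
        #define NUM_SCANLINES            30        /* 240-pixel screen in 8-pixel bands */

        typedef struct { int y; int priority; int visible; } Sprite;

        /* Walk sprites already sorted by descending priority, cap each 8-pixel band
           at 8 drawn sprites, and bump skipped sprites so they flicker in later. */
        void SelectSpritesToDraw(Sprite *sprites, int count) {
            int scanlineCount[NUM_SCANLINES] = {0};
            int drawn = 0;
            for (int i = 0; i < count && drawn < MAX_SPRITES_ON_SCREEN; ++i) {
                int band = sprites[i].y / 8;
                if (band < 0 || band >= NUM_SCANLINES) continue;
                if (scanlineCount[band] >= MAX_SPRITES_PER_SCANLINE) {
                    sprites[i].visible = 0;
                    sprites[i].priority += 32;     /* retry with higher priority next frame */
                    continue;
                }
                scanlineCount[band]++;
                sprites[i].visible = 1;
                drawn++;
            }
        }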

    Read the article

  • How to set a Transparent Background of JPanel

    - by Imran
    Hi, I need to know if a JPanel's background can be set to transparent. My frame has two JPanels, Image Panel and Feature Panel, with Feature Panel overlapping Image Panel. Image Panel works as the background and loads an image from a remote URL. Now I want to draw shapes on Feature Panel, but Image Panel cannot be seen because of Feature Panel's background color. I need to make Feature Panel's background transparent while it still draws its shapes, and I want Image Panel to remain visible since it handles the tiling and caching of images. I need to separate the image drawing and the shape drawing; that's why I'm using two JPanels. Is there any way the overlapping JPanel can have a transparent background? Thanks.

    Read the article

  • Allow horizontal scrolling only in the core-plot barchart?

    - by Madhup
    Hi all, I am using the core-plot library to draw bar charts in my app. My problem is that I want to allow graph movement only in the horizontal direction, so that I can see the records for a long period of time, while keeping the y axis fixed in place. How can I do this? Waiting for help....

    Read the article

  • What way to use the CGContext to draw is suitable?

    - by Tattat
    I know that I cannot call the CGContext to draw directly; I need to put the drawing logic in drawInContext (or drawRect:) and trigger the drawing with setNeedsDisplay. So I designed a command to execute, but it causes some problems, like this: http://stackoverflow.com/questions/2617827/why-i-cant-draw-in-a-loop-using-uiview-in-iphone I think the CGContext is very different from my previous programming experience... (I used the HTML5 canvas, which lets me add more details after I draw, and so does Java Swing.) Actually, I want to know what the suitable way to implement this kind of thing is in the minds of Apple's programmers. Thx.
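    A hedged sketch of the pattern the question is circling around: keep the things to draw in a model the view owns, replay the whole model in drawRect:, and call setNeedsDisplay whenever the model changes (the class and ivar names are illustrative):

        // Sketch: the view never draws "on demand"; it re-renders its model
        // whenever UIKit asks it to.
        @interface ShapeView : UIView {
            NSMutableArray *rects;   // boxed CGRects accumulated so far
        }
        - (void)addRect:(CGRect)r;
        @end

        @implementation ShapeView
        - (void)addRect:(CGRect)r {
            if (!rects) rects = [[NSMutableArray alloc] init];
            [rects addObject:[NSValue valueWithCGRect:r]];
            [self setNeedsDisplay];              // UIKit will call drawRect: soon
        }
        - (void)drawRect:(CGRect)rect {
            CGContextRef ctx = UIGraphicsGetCurrentContext();
            CGContextSetRGBFillColor(ctx, 0.0, 0.0, 1.0, 1.0);
            for (NSValue *v in rects) {
                CGContextFillRect(ctx, [v CGRectValue]);
            }
        }
        @end

    So a loop that wants to add detail does not draw into the context directly; it appends to the model and lets drawRect: redraw everything on the next pass, which is the behaviour the linked question runs into.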

    Read the article
