Search Results

Search found 211 results on 9 pages for 'cgpoint'.


  • Marrying Core Animation with OpenGL ES

    - by Ole Begemann
    Edit: I suppose instead of the long explanation below I might also ask: Sending -setNeedsDisplay to an instance of CAEAGLLayer does not cause the layer to redraw (i.e., -drawInContext: is not called). Instead, I get this console message: <GLLayer: 0x4d500b0>: calling -display has no effect. Is there a way around this issue? Can I invoke -drawInContext: when -setNeedsDisplay is called? Long explanation below: I have an OpenGL scene that I would like to animate using Core Animation animations. Following the standard approach to animate custom properties in a CALayer, I created a subclass of CAEAGLLayer and defined a property sceneCenterPoint in it whose value should be animated. My layer also holds a reference to the OpenGL renderer: #import <UIKit/UIKit.h> #import <QuartzCore/QuartzCore.h> #import "ES2Renderer.h" @interface GLLayer : CAEAGLLayer { ES2Renderer *renderer; } @property (nonatomic, retain) ES2Renderer *renderer; @property (nonatomic, assign) CGPoint sceneCenterPoint; I then declare the property @dynamic to let CA create the accessors, override +needsDisplayForKey: and implement -drawInContext: to pass the current value of the sceneCenterPoint property to the renderer and ask it to render the scene: #import "GLLayer.h" @implementation GLLayer @synthesize renderer; @dynamic sceneCenterPoint; + (BOOL) needsDisplayForKey:(NSString *)key { if ([key isEqualToString:@"sceneCenterPoint"]) { return YES; } else { return [super needsDisplayForKey:key]; } } - (void) drawInContext:(CGContextRef)ctx { self.renderer.centerPoint = self.sceneCenterPoint; [self.renderer render]; } ... (If you have access to the WWDC 2009 session videos, you can review this technique in session 303 ("Animated Drawing")). Now, when I create an explicit animation for the layer on the keyPath @"sceneCenterPoint", Core Animation should calculate the interpolated values for the custom properties and call -drawInContext: for each step of the animation: - (IBAction)animateButtonTapped:(id)sender { CABasicAnimation *animation = [CABasicAnimation animationWithKeyPath:@"sceneCenterPoint"]; animation.duration = 1.0; animation.fromValue = [NSValue valueWithCGPoint:CGPointZero]; animation.toValue = [NSValue valueWithCGPoint:CGPointMake(1.0f, 1.0f)]; [self.glView.layer addAnimation:animation forKey:nil]; } At least that is what would happen for a normal CALayer subclass. When I subclass CAEAGLLayer, I get this output on the console for each step of the animation: 2010-12-21 13:59:22.180 CoreAnimationOpenGL[7496:207] <GLLayer: 0x4e0be20>: calling -display has no effect. 2010-12-21 13:59:22.198 CoreAnimationOpenGL[7496:207] <GLLayer: 0x4e0be20>: calling -display has no effect. 2010-12-21 13:59:22.216 CoreAnimationOpenGL[7496:207] <GLLayer: 0x4e0be20>: calling -display has no effect. 2010-12-21 13:59:22.233 CoreAnimationOpenGL[7496:207] <GLLayer: 0x4e0be20>: calling -display has no effect. ... So it seems that, possibly for performance reasons, for OpenGL layers, -drawInContext: is not getting called because these layers do not use the standard -display method to draw themselves. Can anybody confirm that? Is there a way around it? Or can I not use the technique I laid out above? This would mean I would have to implement the animations manually in the OpenGL renderer (which is possible but not as elegant IMO).
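    A possible workaround, sketched under the assumption that GLLayer owns a CADisplayLink (the displayLink ivar below is hypothetical, added for this sketch): since CAEAGLLayer ignores -display and never calls -drawInContext:, drive the renderer from the display link and read the interpolated value of the animated property from the layer's presentation copy while the CABasicAnimation is attached.

        // Start driving frames when the animation is added (e.g. from animateButtonTapped:).
        - (void)startAnimationDriver {
            displayLink = [CADisplayLink displayLinkWithTarget:self
                                                      selector:@selector(drawFrame:)];
            [displayLink addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSDefaultRunLoopMode];
        }

        - (void)drawFrame:(CADisplayLink *)link {
            // The presentation layer should report the in-flight, interpolated
            // value of sceneCenterPoint while the animation is running.
            GLLayer *presentation = (GLLayer *)[self presentationLayer];
            self.renderer.centerPoint = presentation.sceneCenterPoint;
            [self.renderer render];
        }

    The display link would then be invalidated when the animation finishes (for example from animationDidStop:finished:).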

    Read the article

  • Scaling MKMapView Annotations relative to the zoom level

    - by Jonathan
    The Problem I'm trying to create a visual radius circle around an annotation that remains at a fixed size in real terms. E.g., if I set the radius to 100m, as you zoom out of the Map view the radius circle gets progressively smaller. I've been able to achieve the scaling, however the radius rect/circle seems to "Jitter" away from the Pin Placemark as the user manipulates the view. The Manifestation Here is a video of the behaviour. The Implementation The annotations are added to the Mapview in the usual fashion, and I've used the delegate method on my UIViewController subclass (MapViewController) to see when the region changes. -(void)mapView:(MKMapView *)pMapView regionDidChangeAnimated:(BOOL)animated{ //Get the map view MKCoordinateRegion region; CGRect rect; //Scale the annotations for( id<MKAnnotation> annotation in [[self mapView] annotations] ){ if( [annotation isKindOfClass: [Location class]] && [annotation conformsToProtocol:@protocol(MKAnnotation)] ){ //Approximately 200 m radius region.span.latitudeDelta = 0.002f; region.span.longitudeDelta = 0.002f; region.center = [annotation coordinate]; rect = [[self mapView] convertRegion:region toRectToView:self.mapView]; if( [[[self mapView] viewForAnnotation: annotation] respondsToSelector:@selector(setRadiusFrame:)] ){ [[[self mapView] viewForAnnotation: annotation] setRadiusFrame:rect]; } } } } The Annotation object (LocationAnnotationView) is a subclass of MKAnnotationView and its setRadiusFrame looks like this -(void) setRadiusFrame:(CGRect) rect{ CGPoint centerPoint; //Invert centerPoint.x = (rect.size.width/2) * -1; centerPoint.y = 0 + 55 + ((rect.size.height/2) * -1); rect.origin = centerPoint; [self.radiusView setFrame:rect]; } And finally the radiusView object is a subclass of UIView that overrides the drawRect method to draw the translucent circles. setFrame is also overridden in this UIView subclass, but it only serves to call [UIView setNeedsDisplay] in addition to [UIView setFrame:] to ensure that the view is redrawn after the frame has been updated. The radiusView object's (CircleView) drawRect method looks like this -(void) drawRect:(CGRect)rect{ //NSLog(@"[CircleView drawRect]"); [self setBackgroundColor:[UIColor clearColor]]; //Declarations CGContextRef context; CGMutablePathRef path; //Assignments context = UIGraphicsGetCurrentContext(); path = CGPathCreateMutable(); //Alter the rect so the circle isn't clipped //Calculate the biggest size circle if( rect.size.height > rect.size.width ){ rect.size.height = rect.size.width; } else if( rect.size.height < rect.size.width ){ rect.size.width = rect.size.height; } rect.size.height -= 4; rect.size.width -= 4; rect.origin.x += 2; rect.origin.y += 2; //Create paths CGPathAddEllipseInRect(path, NULL, rect ); //Create colors [[self areaColor] setFill]; CGContextAddPath( context, path); CGContextFillPath( context ); [[self borderColor] setStroke]; CGContextSetLineWidth( context, 2.0f ); CGContextSetLineCap(context, kCGLineCapSquare); CGContextAddPath(context, path ); CGContextStrokePath( context ); CGPathRelease( path ); //CGContextRestoreGState( context ); } Thanks for bearing with me, any help is appreciated. Jonathan
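    A hedged alternative to the offset arithmetic, using the names from the question plus MapKit's MKCoordinateRegionMakeWithDistance: convert a region roughly 200 m across directly into the annotation view's own coordinate space, so the resulting rect is already relative to the pin and no manual x/y correction is needed (that correction is a common source of the jitter).

        -(void)mapView:(MKMapView *)pMapView regionDidChangeAnimated:(BOOL)animated {
            for (id<MKAnnotation> annotation in [[self mapView] annotations]) {
                if (![annotation isKindOfClass:[Location class]]) continue;
                // A region roughly 200 m across, centred on the annotation.
                MKCoordinateRegion region =
                    MKCoordinateRegionMakeWithDistance([annotation coordinate], 200.0, 200.0);
                MKAnnotationView *view = [[self mapView] viewForAnnotation:annotation];
                // Convert straight into the annotation view's coordinate system.
                CGRect radiusRect = [[self mapView] convertRegion:region toRectToView:view];
                if ([view respondsToSelector:@selector(setRadiusFrame:)]) {
                    [(LocationAnnotationView *)view setRadiusFrame:radiusRect];
                }
            }
        }

    With the rect expressed in the view's own space, setRadiusFrame: can simply centre radiusView on the annotation view's bounds instead of applying the hand-tuned 55-point offset.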

    Read the article

  • Advance way of using UIView convertRect method to detect CGRectIntersectsRect multiple times

    - by Chris
    I recently asked a question regarding collision detection within subviews, with a perfect answer. I've come to the last point in implementing the collision on my application but I've come across a new issue. Using convertRect was fine getting the CGRect from the subView. I needed it to be a little more complex as it wasn't exactly rectangles that needed to be detected. on XCode I created an abstract class called TileViewController. Amongst other properties it has a IBOutlet UIView *detectionView; I now have multiple classes that inherit from TileViewController, and each class there are multiple views nested inside the detectionView which I have created using Interface Builder. The idea is an object could be a certain shape or size, I've programatically placed these 'tiled' detection points bottom center of each object. A user can select an item and interactive with it, in this circumstance move it around. Here is my touchesMoved method -(void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event{ UITouch *touch = [[event allTouches] anyObject]; CGPoint location = [touch locationInView:touch.view]; interactiveItem.center = location; // The ViewController the user has chosen to interact with interactiveView.view.center = location; // checks if the user has selected an item to interact with if (interactiveItem) { // First get check there is more then 1 item in the collection NSUInteger assetCount = [itemViewCollection count]; //NSMutableArray that holds the ViewControllers int detectionCount = 0; // To count how many times a CGRectIntersectsRect occured UIView *parentView = self.view; // if there is more then 1 item begin collision detection if (assetCount > 1) { for (TileViewController *viewController in itemViewCollection) { if (viewController.view.tag != interactiveView.view.tag) { if (viewController.detectionView.subviews) { for (UIView *detectView in viewController.detectionView.subviews) { CGRect viewRect; viewRect = [detectView convertRect:[detectView frame] toView:parentView]; // I could have checked to see if the below has subViews but didn't - In my current implementation it does anyway for (UIView *detectInteractView in interactiveView.detectionView.subviews) { CGRect interactRect; interactRect = [detectInteractView convertRect:[detectInteractView frame] toView:parentView]; if (CGRectIntersectsRect(viewRect, interactRect) == 1) { NSLog(@"collision detected"); [detectView setBackgroundColor:[UIColor blueColor]]; [detectInteractView setBackgroundColor:[UIColor blueColor]]; detectionCount++; } else { [detectView setBackgroundColor:[UIColor yellowColor]]; [detectInteractView setBackgroundColor:[UIColor yellowColor]]; } } } } } } // Logic if no items collided if (detectionCount == 0) { NSLog(@"Do something"); } } } } Now the method itself works to an extent but I don't think it's working with the nested values properly as the detection is off. A simplified version of this method works - Using CGRectIntersectsRect on the detectionView itself so I'm wondering if I'm looping through and checking the views correctly? I wasn't sure whether it was comparing in the same view but I suspect it is, I did modify the code slightly at one point, rather then comparing the values in self.view I took the viewController.detectView's UIViews into the interactiveView.detectView but the outcome was the same. It's rigged so the subviews change colour, but they change colour when they are not even touching, and when they do touch the wrong UIviews are changing colour Many thanks in advance
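    A likely cause of the offset hits, given how UIKit defines these properties: -convertRect:toView: treats the rect you pass as being in the receiver's own coordinate system, but [detectView frame] is already expressed in its superview's (the detectionView's) coordinates, so the subview offset ends up applied twice. Converting the frame from the superview, or the view's own bounds from the view itself, is the usual correction; a sketch using the names from the question:

        // Either convert the frame from the coordinate space it is actually defined in...
        CGRect viewRect = [viewController.detectionView convertRect:detectView.frame
                                                              toView:parentView];
        // ...or, equivalently, convert the view's own bounds from the view itself.
        CGRect interactRect = [detectInteractView convertRect:detectInteractView.bounds
                                                        toView:parentView];
        if (CGRectIntersectsRect(viewRect, interactRect)) {
            // genuine overlap in the shared parentView coordinate space
        }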

    Read the article

  • How do I center a UIImageView within a full-screen UIScrollView?

    - by Sebastian Celis
    In my application, I would like to present the user with a full-screen photo viewer much like the one used in the Photos app. This is just for a single photo and as such should be quite simple. I just want the user to be able to view this one photo with the ability to zoom and pan. I have most of it working. And, if I do not center my UIImageView, everything behaves perfectly. However, I really want the UIImageView to be centered on the screen when the image is sufficiently zoomed out. I do not want it stuck to the top-left corner of the scroll view. Once I attempt to center this view, my vertical scrollable area appears to be greater than it should be. As such, once I zoom in a little, I am able to scroll about 100 pixels past the top of the image. What am I doing wrong? @interface MyPhotoViewController : UIViewController <UIScrollViewDelegate> { UIImage* photo; UIImageView *imageView; } - (id)initWithPhoto:(UIImage *)aPhoto; @end @implementation MyPhotoViewController - (id)initWithPhoto:(UIImage *)aPhoto { if (self = [super init]) { photo = [aPhoto retain]; // Some 3.0 SDK code here to ensure this view has a full-screen // layout. } return self; } - (void)dealloc { [photo release]; [imageView release]; [super dealloc]; } - (void)loadView { // Set the main view of this UIViewController to be a UIScrollView. UIScrollView *scrollView = [[UIScrollView alloc] init]; [self setView:scrollView]; [scrollView release]; } - (void)viewDidLoad { [super viewDidLoad]; // Initialize the scroll view. CGSize photoSize = [photo size]; UIScrollView *scrollView = (UIScrollView *)[self view]; [scrollView setDelegate:self]; [scrollView setBackgroundColor:[UIColor blackColor]]; // Create the image view. We push the origin to (0, -44) to ensure // that this view displays behind the navigation bar. imageView = [[UIImageView alloc] initWithFrame:CGRectMake(0.0, -44.0, photoSize.width, photoSize.height)]; [imageView setImage:photo]; [scrollView addSubview:imageView]; // Configure zooming. CGSize screenSize = [[UIScreen mainScreen] bounds].size; CGFloat widthRatio = screenSize.width / photoSize.width; CGFloat heightRatio = screenSize.height / photoSize.height; CGFloat initialZoom = (widthRatio > heightRatio) ? heightRatio : widthRatio; [scrollView setMaximumZoomScale:3.0]; [scrollView setMinimumZoomScale:initialZoom]; [scrollView setZoomScale:initialZoom]; [scrollView setBouncesZoom:YES]; [scrollView setContentSize:CGSizeMake(photoSize.width * initialZoom, photoSize.height * initialZoom)]; // Center the photo. Again we push the center point up by 44 pixels // to account for the translucent navigation bar. CGPoint scrollCenter = [scrollView center]; [imageView setCenter:CGPointMake(scrollCenter.x, scrollCenter.y - 44.0)]; } - (void)viewWillAppear:(BOOL)animated { [super viewWillAppear:animated]; [[[self navigationController] navigationBar] setBarStyle:UIBarStyleBlackTranslucent]; [[UIApplication sharedApplication] setStatusBarStyle:UIStatusBarStyleBlackTranslucent animated:YES]; } - (void)viewWillDisappear:(BOOL)animated { [super viewWillDisappear:animated]; [[[self navigationController] navigationBar] setBarStyle:UIBarStyleDefault]; [[UIApplication sharedApplication] setStatusBarStyle:UIStatusBarStyleDefault animated:YES]; } - (UIView *)viewForZoomingInScrollView:(UIScrollView *)scrollView { return imageView; } @end
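    A sketch of one widely used fix (scrollViewDidZoom: is standard UIScrollViewDelegate API from iOS 3.2; on earlier SDKs the same code can run from a scroll-view subclass's layoutSubviews): keep the image view at the content origin, let contentSize describe only the zoomed image, and re-centre the view after every zoom change rather than shifting its frame by -44 points, which is the usual reason the scrollable area grows past the image.

        - (void)scrollViewDidZoom:(UIScrollView *)scrollView {
            CGSize boundsSize = scrollView.bounds.size;
            CGRect frameToCenter = imageView.frame;

            // Centre horizontally/vertically whenever the image is smaller than the scroll view.
            frameToCenter.origin.x = (frameToCenter.size.width < boundsSize.width)
                ? (boundsSize.width - frameToCenter.size.width) / 2.0 : 0.0;
            frameToCenter.origin.y = (frameToCenter.size.height < boundsSize.height)
                ? (boundsSize.height - frameToCenter.size.height) / 2.0 : 0.0;

            imageView.frame = frameToCenter;
        }

    The navigation-bar overlap can then be handled with the scroll view's frame or contentInset instead of baking -44 into the image geometry.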

    Read the article

  • UIImage rounded corners

    - by catlan
    I try to get rounded corners on a UIImage, what I read so far, the easiest way is to use a mask images. For this I used code from TheElements iPhone Example and some image resize code I found. My problem is that resizedImage is always nil and I don't find the error... - (UIImage *)imageByScalingProportionallyToSize:(CGSize)targetSize { CGSize imageSize = [self size]; float width = imageSize.width; float height = imageSize.height; // scaleFactor will be the fraction that we'll // use to adjust the size. For example, if we shrink // an image by half, scaleFactor will be 0.5. the // scaledWidth and scaledHeight will be the original, // multiplied by the scaleFactor. // // IMPORTANT: the "targetHeight" is the size of the space // we're drawing into. The "scaledHeight" is the height that // the image actually is drawn at, once we take into // account the ideal of maintaining proportions float scaleFactor = 0.0; float scaledWidth = targetSize.width; float scaledHeight = targetSize.height; CGPoint thumbnailPoint = CGPointMake(0,0); // since not all images are square, we want to scale // proportionately. To do this, we find the longest // edge and use that as a guide. if ( CGSizeEqualToSize(imageSize, targetSize) == NO ) { // use the longeset edge as a guide. if the // image is wider than tall, we'll figure out // the scale factor by dividing it by the // intended width. Otherwise, we'll use the // height. float widthFactor = targetSize.width / width; float heightFactor = targetSize.height / height; if ( widthFactor < heightFactor ) scaleFactor = widthFactor; else scaleFactor = heightFactor; // ex: 500 * 0.5 = 250 (newWidth) scaledWidth = width * scaleFactor; scaledHeight = height * scaleFactor; // center the thumbnail in the frame. if // wider than tall, we need to adjust the // vertical drawing point (y axis) if ( widthFactor < heightFactor ) thumbnailPoint.y = (targetSize.height - scaledHeight) * 0.5; else if ( widthFactor > heightFactor ) thumbnailPoint.x = (targetSize.width - scaledWidth) * 0.5; } CGContextRef mainViewContentContext; CGColorSpaceRef colorSpace; colorSpace = CGColorSpaceCreateDeviceRGB(); // create a bitmap graphics context the size of the image mainViewContentContext = CGBitmapContextCreate (NULL, targetSize.width, targetSize.height, 8, 0, colorSpace, kCGImageAlphaPremultipliedLast); // free the rgb colorspace CGColorSpaceRelease(colorSpace); if (mainViewContentContext==NULL) return NULL; //CGContextSetFillColorWithColor(mainViewContentContext, [[UIColor whiteColor] CGColor]); //CGContextFillRect(mainViewContentContext, CGRectMake(0, 0, targetSize.width, targetSize.height)); CGContextDrawImage(mainViewContentContext, CGRectMake(thumbnailPoint.x, thumbnailPoint.y, scaledWidth, scaledHeight), self.CGImage); // Create CGImageRef of the main view bitmap content, and then // release that bitmap context CGImageRef mainViewContentBitmapContext = CGBitmapContextCreateImage(mainViewContentContext); CGContextRelease(mainViewContentContext); CGImageRef maskImage = [[UIImage imageNamed:@"Mask.png"] CGImage]; CGImageRef resizedImage = CGImageCreateWithMask(mainViewContentBitmapContext, maskImage); CGImageRelease(mainViewContentBitmapContext); // convert the finished resized image to a UIImage UIImage *theImage = [UIImage imageWithCGImage:resizedImage]; // image is retained by the property setting above, so we can // release the original CGImageRelease(resizedImage); // return the image return theImage; }
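    One frequent reason CGImageCreateWithMask hands back NULL here: its mask argument is an ordinary image rather than a true image mask (a PNG loaded through UIImage usually comes in as RGBA with an alpha channel, which the function does not accept as an image mask). A hedged fix, assuming Mask.png is the grayscale mask from the Elements sample, is to wrap it with CGImageMaskCreate first:

        CGImageRef maskSource = [[UIImage imageNamed:@"Mask.png"] CGImage];
        CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskSource),
                                            CGImageGetHeight(maskSource),
                                            CGImageGetBitsPerComponent(maskSource),
                                            CGImageGetBitsPerPixel(maskSource),
                                            CGImageGetBytesPerRow(maskSource),
                                            CGImageGetDataProvider(maskSource),
                                            NULL,    // no decode array
                                            false);  // no interpolation
        CGImageRef resizedImage = CGImageCreateWithMask(mainViewContentBitmapContext, mask);
        CGImageRelease(mask);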

    Read the article

  • Error With CGBitmapContextCreate, CGContextDrawImage, CGBitmapContextCreateImage

    - by wsidell
    Error: CGBitmapContextCreate: invalid data bytes/row: should be at least 400 for 8 integer bits/component, 3 components, kCGImageAlphaNoneSkipFirst. Error: CGContextDrawImage: invalid context Error: CGBitmapContextCreateImage: invalid context Currently, I have in application that runs perfectly in OS 4.0, but I have been trying to get it to work properly in 3.1.3 and I keep getting the errors mentioned above. I am fairly new to iPhone development and am not exactly sure what the problem would be. I am using image resize code that I found in another post on stackoverflow. Here is the code: - (UIImage*)imageWithImage:(UIImage*)sourceImage scaledToSizeWithSameAspectRatio:(CGSize)targetSize{ CGSize imageSize = sourceImage.size; CGFloat width = imageSize.width; CGFloat height = imageSize.height; CGFloat targetWidth = targetSize.width; CGFloat targetHeight = targetSize.height; CGFloat scaleFactor = 0.0; CGFloat scaledWidth = targetWidth; CGFloat scaledHeight = targetHeight; CGPoint thumbnailPoint = CGPointMake(0.0,0.0); if (CGSizeEqualToSize(imageSize, targetSize) == NO) { CGFloat widthFactor = targetWidth / width; CGFloat heightFactor = targetHeight / height; if (widthFactor > heightFactor) { scaleFactor = widthFactor; // scale to fit height } else { scaleFactor = heightFactor; // scale to fit width } scaledWidth = width * scaleFactor; scaledHeight = height * scaleFactor; // center the image if (widthFactor > heightFactor) { thumbnailPoint.y = (targetHeight - scaledHeight) * 0.5; } else if (widthFactor < heightFactor) { thumbnailPoint.x = (targetWidth - scaledWidth) * 0.5; } } CGImageRef imageRef = [sourceImage CGImage]; CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imageRef); CGColorSpaceRef colorSpaceInfo = CGImageGetColorSpace(imageRef); if (bitmapInfo == kCGImageAlphaNone) { bitmapInfo = kCGImageAlphaNoneSkipLast; } CGContextRef bitmap; if (sourceImage.imageOrientation == UIImageOrientationUp || sourceImage.imageOrientation == UIImageOrientationDown) { bitmap = CGBitmapContextCreate(NULL, targetWidth, targetHeight, CGImageGetBitsPerComponent(imageRef), CGImageGetBytesPerRow(imageRef), colorSpaceInfo, bitmapInfo); } else { bitmap = CGBitmapContextCreate(NULL, targetHeight, targetWidth, CGImageGetBitsPerComponent(imageRef), CGImageGetBytesPerRow(imageRef), colorSpaceInfo, bitmapInfo); } // In the right or left cases, we need to switch scaledWidth and scaledHeight, // and also the thumbnail point if (sourceImage.imageOrientation == UIImageOrientationLeft) { thumbnailPoint = CGPointMake(thumbnailPoint.y, thumbnailPoint.x); CGFloat oldScaledWidth = scaledWidth; scaledWidth = scaledHeight; scaledHeight = oldScaledWidth; CGContextRotateCTM (bitmap, radians(90)); CGContextTranslateCTM (bitmap, 0, -targetHeight); } else if (sourceImage.imageOrientation == UIImageOrientationRight) { thumbnailPoint = CGPointMake(thumbnailPoint.y, thumbnailPoint.x); CGFloat oldScaledWidth = scaledWidth; scaledWidth = scaledHeight; scaledHeight = oldScaledWidth; CGContextRotateCTM (bitmap, radians(-90)); CGContextTranslateCTM (bitmap, -targetWidth, 0); } else if (sourceImage.imageOrientation == UIImageOrientationUp) { // NOTHING } else if (sourceImage.imageOrientation == UIImageOrientationDown) { CGContextTranslateCTM (bitmap, targetWidth, targetHeight); CGContextRotateCTM (bitmap, radians(-180.)); } CGContextDrawImage(bitmap, CGRectMake(thumbnailPoint.x, thumbnailPoint.y, scaledWidth, scaledHeight), imageRef); CGImageRef ref = CGBitmapContextCreateImage(bitmap); UIImage* newImage = [UIImage imageWithCGImage:ref]; 
CGContextRelease(bitmap); CGImageRelease(ref); return newImage; } Any help would be appreciated. If you need more info, I will gladly post it.
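    One likely cause of the first error: the contexts are created with CGImageGetBytesPerRow(imageRef), which is the row stride of the source image, not of the target size (the "at least 400" in the message corresponds to a 100-pixel-wide, 32-bit row). OS 4.0 appears to tolerate the mismatch while 3.1.3 rejects it, and the two follow-on errors are just the NULL context propagating. Passing 0 lets Quartz compute the stride itself, which is documented behaviour when the data pointer is NULL; a sketch of the two calls:

        if (sourceImage.imageOrientation == UIImageOrientationUp ||
            sourceImage.imageOrientation == UIImageOrientationDown) {
            bitmap = CGBitmapContextCreate(NULL, targetWidth, targetHeight,
                                           CGImageGetBitsPerComponent(imageRef),
                                           0,   // let Quartz pick bytes/row for this width
                                           colorSpaceInfo, bitmapInfo);
        } else {
            bitmap = CGBitmapContextCreate(NULL, targetHeight, targetWidth,
                                           CGImageGetBitsPerComponent(imageRef),
                                           0,
                                           colorSpaceInfo, bitmapInfo);
        }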

    Read the article

  • Rotate rectangle around center

    - by ESoft
    I am playing with Brad Larsen's adaption of the trackball app. I have two views at a 60 degree angle to each other and was wondering how I get the rotation to be in the center of this (non-closed) rectangle? In the images below I would have liked the rotation to take place all within the blue lines. Code (modified to only rotate around x axis): #import "MyView.h" //===================================================== // Defines //===================================================== #define DEGREES_TO_RADIANS(degrees) \ (degrees * (M_PI / 180.0f)) //===================================================== // Public Interface //===================================================== @implementation MyView - (void)awakeFromNib { transformed = [CALayer layer]; transformed.anchorPoint = CGPointMake(0.5f, 0.5f); transformed.frame = self.bounds; [self.layer addSublayer:transformed]; CALayer *imageLayer = [CALayer layer]; imageLayer.frame = CGRectMake(10.0f, 4.0f, self.bounds.size.width / 2.0f, self.bounds.size.height / 2.0f); imageLayer.transform = CATransform3DMakeRotation(DEGREES_TO_RADIANS(60.0f), 1.0f, 0.0f, 0.0f); imageLayer.contents = (id)[[UIImage imageNamed:@"IMG_0051.png"] CGImage]; imageLayer.borderColor = [UIColor yellowColor].CGColor; imageLayer.borderWidth = 2.0f; [transformed addSublayer:imageLayer]; imageLayer = [CALayer layer]; imageLayer.frame = CGRectMake(10.0f, 120.0f, self.bounds.size.width / 2.0f, self.bounds.size.height / 2.0f); imageLayer.transform = CATransform3DMakeRotation(DEGREES_TO_RADIANS(-60.0f), 1.0f, 0.0f, 0.0f); imageLayer.contents = (id)[[UIImage imageNamed:@"IMG_0089.png"] CGImage]; imageLayer.borderColor = [UIColor greenColor].CGColor; imageLayer.borderWidth = 2.0f; transformed.borderColor = [UIColor whiteColor].CGColor; transformed.borderWidth = 2.0f; [transformed addSublayer:imageLayer]; UIView *line = [[UIView alloc] initWithFrame:CGRectMake(0, self.bounds.size.height / 2.0f, self.bounds.size.width, 2)]; [line setBackgroundColor:[UIColor redColor]]; [self addSubview:line]; line = [[UIView alloc] initWithFrame:CGRectMake(0, self.bounds.size.height * (1.0f / 4.0f), self.bounds.size.width, 2)]; [line setBackgroundColor:[UIColor blueColor]]; [self addSubview:line]; line = [[UIView alloc] initWithFrame:CGRectMake(0, self.bounds.size.height * (3.0f / 4.0f), self.bounds.size.width, 2)]; [line setBackgroundColor:[UIColor blueColor]]; [self addSubview:line]; } - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event { previousLocation = [[touches anyObject] locationInView:self]; } - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event { CGPoint location = [[touches anyObject] locationInView:self]; //location = CGPointMake(previousLocation.x, location.y); CATransform3D currentTransform = transformed.sublayerTransform; //CGFloat displacementInX = location.x - previousLocation.x; CGFloat displacementInX = previousLocation.x - location.x; CGFloat displacementInY = previousLocation.y - location.y; CGFloat totalRotation = sqrt((displacementInX * displacementInX) + (displacementInY * displacementInY)); CGFloat angle = DEGREES_TO_RADIANS(totalRotation); CGFloat x = ((displacementInX / totalRotation) * currentTransform.m12 + (displacementInY/totalRotation) * currentTransform.m11); CATransform3D rotationalTransform = CATransform3DRotate(currentTransform, angle, x, 0, 0); previousLocation = location; transformed.sublayerTransform = rotationalTransform; } - (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event { } - (void)dealloc { [super dealloc]; } 
@end
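    If the goal is for the pair of image layers to pivot about the midline between the blue lines rather than about the layer's own anchor point, one approach (a sketch; pivotOffsetY is a hypothetical value you would compute as the vertical distance from the anchor point to that midline) is to wrap each incremental rotation in a translate/rotate/translate so the axis passes through the desired point:

        - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
            CGPoint location = [[touches anyObject] locationInView:self];
            CATransform3D currentTransform = transformed.sublayerTransform;
            CGFloat displacementInX = previousLocation.x - location.x;
            CGFloat displacementInY = previousLocation.y - location.y;
            CGFloat totalRotation = sqrt(displacementInX * displacementInX +
                                         displacementInY * displacementInY);
            CGFloat angle = DEGREES_TO_RADIANS(totalRotation);
            CGFloat x = ((displacementInX / totalRotation) * currentTransform.m12 +
                         (displacementInY / totalRotation) * currentTransform.m11);

            // Shift the pivot, rotate, shift back; this accumulates as a rotation about the pivot.
            CATransform3D t = currentTransform;
            t = CATransform3DTranslate(t, 0.0f, pivotOffsetY, 0.0f);
            t = CATransform3DRotate(t, angle, x, 0.0f, 0.0f);
            t = CATransform3DTranslate(t, 0.0f, -pivotOffsetY, 0.0f);

            transformed.sublayerTransform = t;
            previousLocation = location;
        }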

    Read the article

  • Crash when trying to get NSManagedObject from NSFetchedResultsController after 25 objects?

    - by Jeremy
    Hey everyone, I'm relatively new to Core Data on iOS, but I think I've been getting better with it. I've been experiencing a bizarre crash, however, in one of my applications and have not been able to figure it out. I have approximately 40 objects in Core Data, presented in a UITableView. When tapping on a cell, a UIActionSheet appears, presenting the user with a UIActionSheet with options related to the cell that was selected. So that I can reference the selected object, I declare an NSIndexPath in my header called "lastSelection" and do the following when the UIActionSheet is presented: // Each cell has a tag based on its row number (i.e. first row has tag 0) lastSelection = [NSIndexPath indexPathForRow:[sender tag] inSection:0]; NSManagedObject *managedObject = [self.fetchedResultsController objectAtIndexPath:lastSelection]; BOOL onDuty = [[managedObject valueForKey:@"onDuty"] boolValue]; UIActionSheet *actionSheet = [[UIActionSheet alloc] initWithTitle:@"Status" delegate:self cancelButtonTitle:nil destructiveButtonTitle:nil otherButtonTitles:nil]; if(onDuty) { [actionSheet addButtonWithTitle:@"Off Duty"]; } else { [actionSheet addButtonWithTitle:@"On Duty"]; } actionSheet.actionSheetStyle = UIActionSheetStyleBlackOpaque; // Override the typical UIActionSheet behavior by presenting it overlapping the sender's frame. This makes it more clear which cell is selected. CGRect senderFrame = [sender frame]; CGPoint point = CGPointMake(senderFrame.origin.x + (senderFrame.size.width / 2), senderFrame.origin.y + (senderFrame.size.height / 2)); CGRect popoverRect = CGRectMake(point.x, point.y, 1, 1); [actionSheet showFromRect:popoverRect inView:[sender superview] animated:NO]; [actionSheet release]; When the UIActionSheet is dismissed with a button, the following code is called: - (void)actionSheet:(UIActionSheet *)actionSheet willDismissWithButtonIndex:(NSInteger)buttonIndex { // Set status based on UIActionSheet button pressed if(buttonIndex == -1) { return; } NSManagedObject *managedObject = [self.fetchedResultsController objectAtIndexPath:lastSelection]; if([actionSheet.title isEqualToString:@"Status"]) { if([[actionSheet buttonTitleAtIndex:buttonIndex] isEqualToString:@"On Duty"]) { [managedObject setValue:[NSNumber numberWithBool:YES] forKey:@"onDuty"]; [managedObject setValue:@"onDuty" forKey:@"status"]; } else { [managedObject setValue:[NSNumber numberWithBool:NO] forKey:@"onDuty"]; [managedObject setValue:@"offDuty" forKey:@"status"]; } } NSError *error; [self.managedObjectContext save:&error]; [tableView reloadData]; } This might not be the most efficient code (sorry, I'm new!), but it does work. That is, for the first 25 items in the list. Selecting the 26th item or beyond, the UIActionSheet will appear, but if it is dismissed with a button, I get a variety of errors, including any one of the following: [__NSCFArray section]: unrecognized selector sent to instance 0x4c6bf90 Program received signal: “EXC_BAD_ACCESS” [_NSObjectID_48_0 section]: unrecognized selector sent to instance 0x4c54710 [__NSArrayM section]: unrecognized selector sent to instance 0x4c619a0 [NSComparisonPredicate section]: unrecognized selector sent to instance 0x6088790 [NSKeyPathExpression section]: unrecognized selector sent to instance 0x4c18950 If I comment out NSManagedObject *managedObject = [self.fetchedResultsController objectAtIndexPath:lastSelection]; it doesn't crash anymore, so I believe it has something do do with that. Can anyone offer any insight? 
Please let me know if I need to include any other information. Thanks! EDIT: Interestingly, my fetchedResultsController code returns a different object every time. Is this expected, or could this be a cause of my issue? The code looks like this: - (NSFetchedResultsController *)fetchedResultsController { /* Set up the fetched results controller. */ // Create the fetch request for the entity. NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init]; // Edit the entity name as appropriate. NSEntityDescription *entity = [NSEntityDescription entityForName:@"Employee" inManagedObjectContext:self.managedObjectContext]; [fetchRequest setEntity:entity]; // Set the batch size to a suitable number. [fetchRequest setFetchBatchSize:80]; // Edit the sort key as appropriate. NSString *sortKey; BOOL ascending; if(sortControl.selectedSegmentIndex == 0) { sortKey = @"startTime"; ascending = YES; } else if(sortControl.selectedSegmentIndex == 1) { sortKey = @"name"; ascending = YES; } else { sortKey = @"onDuty"; ascending = NO; } NSSortDescriptor *sortDescriptor = [[NSSortDescriptor alloc] initWithKey:sortKey ascending:ascending]; NSArray *sortDescriptors = [[NSArray alloc] initWithObjects:sortDescriptor, nil]; [fetchRequest setSortDescriptors:sortDescriptors]; // Edit the section name key path and cache name if appropriate. NSFetchedResultsController *aFetchedResultsController = [[NSFetchedResultsController alloc] initWithFetchRequest:fetchRequest managedObjectContext:self.managedObjectContext sectionNameKeyPath:nil cacheName:@"Root"]; aFetchedResultsController.delegate = self; self.fetchedResultsController = aFetchedResultsController; [aFetchedResultsController release]; [fetchRequest release]; [sortDescriptor release]; [sortDescriptors release]; NSError *error = nil; if (![fetchedResultsController_ performFetch:&error]) { /* Replace this implementation with code to handle the error appropriately. abort() causes the application to generate a crash log and terminate. You should not use this function in a shipping application, although it may be useful during development. If it is not possible to recover from the error, display an alert panel that instructs the user to quit the application by pressing the Home button. */ //NSLog(@"Unresolved error %@, %@", error, [error userInfo]); abort(); } return fetchedResultsController_; } This happens when I set a breakpoint: (gdb) po [self fetchedResultsController] <NSFetchedResultsController: 0x61567c0> (gdb) po [self fetchedResultsController] <NSFetchedResultsController: 0x4c83630>
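    A hedged guess that fits the symptoms (a different "unrecognized selector" receiver each time, EXC_BAD_ACCESS, crashes only after scrolling deeper into the table): lastSelection is assigned an autoreleased NSIndexPath without being retained, so by the time the action sheet is dismissed the object may already be deallocated and its memory reused by something else. Under manual reference counting the usual fix is to make it a retained property; the fetched results controller getter can also return its cached instance instead of rebuilding one per call, which is why a different address appears every time.

        // Header (assumption: lastSelection is currently a bare ivar):
        @property (nonatomic, retain) NSIndexPath *lastSelection;
        // (plus @synthesize lastSelection; in the implementation)

        // When presenting the action sheet:
        self.lastSelection = [NSIndexPath indexPathForRow:[sender tag] inSection:0];

        // At the top of -fetchedResultsController, short-circuit to the cached controller:
        if (fetchedResultsController_ != nil) {
            return fetchedResultsController_;
        }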

    Read the article

  • keyDown works but i get beeps

    - by Oscar
    I just got my keydown method to work. But i get system beep everytime i press key. i have no idea whats wrong. Googled for hours and all people say is that if you have your keyDown method you should also implement the acceptsFirstResponder. did that to and it still doesn't work. #import <Cocoa/Cocoa.h> #import "PaddleView.h" #import "BallView.h" @interface GameController : NSView { PaddleView *leftPaddle; PaddleView *rightPaddle; BallView * ball; CGPoint ballVelocity; int gameState; int player1Score; int player2Score; } @property (retain) IBOutlet PaddleView *leftPaddle; @property (retain) IBOutlet PaddleView *rightPaddle; @property (retain) IBOutlet BallView *ball; - (void)reset:(BOOL)newGame; @end #import "GameController.h" #define GameStateRunning 1 #define GameStatePause 2 #define BallSpeedX 0.2 #define BallSpeedY 0.3 #define CompMoveSpeed 15 #define ScoreToWin 5 @implementation GameController @synthesize leftPaddle, rightPaddle, ball; - (id)initWithCoder:(NSCoder *)aDecoder { self = [super initWithCoder:aDecoder]; if(self) { gameState = GameStatePause; ballVelocity = CGPointMake(BallSpeedX, BallSpeedY); [NSTimer scheduledTimerWithTimeInterval:0.001 target:self selector:@selector(gameLoop) userInfo:nil repeats:YES]; } return self; } - (void)gameLoop { if(gameState == GameStateRunning) { [ball setFrameOrigin:CGPointMake(ball.frame.origin.x + ballVelocity.x, ball.frame.origin.y + ballVelocity.y)]; if(ball.frame.origin.x + 15 > self.frame.size.width || ball.frame.origin.x < 0) { ballVelocity.x =- ballVelocity.x; } if(ball.frame.origin.y + 35 > self.frame.size.height || ball.frame.origin.y < 0) { ballVelocity.y =- ballVelocity.y; } } if(CGRectIntersectsRect(ball.frame, leftPaddle.frame)) { if(ball.frame.origin.x > leftPaddle.frame.origin.x) { ballVelocity.x =- ballVelocity.x; } } if(CGRectIntersectsRect(ball.frame, rightPaddle.frame)) { if(ball.frame.origin.x +15 > rightPaddle.frame.origin.x) { ballVelocity.x =- ballVelocity.x; } } if(ball.frame.origin.x <= self.frame.size.width / 2) { if(ball.frame.origin.y < leftPaddle.frame.origin.y + 75 && leftPaddle.frame.origin.y > 0) { [leftPaddle setFrameOrigin:CGPointMake(leftPaddle.frame.origin.x, leftPaddle.frame.origin.y - CompMoveSpeed)]; } if(ball.frame.origin.y > leftPaddle.frame.origin.y +75 && leftPaddle.frame.origin.y < 700 - leftPaddle.frame.size.height ) { [leftPaddle setFrameOrigin:CGPointMake(leftPaddle.frame.origin.x, leftPaddle.frame.origin.y + CompMoveSpeed)]; } } if(ball.frame.origin.x <= 0) { player2Score++; [self reset:(player2Score >= ScoreToWin)]; } if(ball.frame.origin.x + 15 > self.frame.size.width) { player1Score++; [self reset:(player1Score >= ScoreToWin)]; } } - (void)reset:(BOOL)newGame { gameState = GameStatePause; [ball setFrameOrigin:CGPointMake((self.frame.size.width + 7.5) / 2, (self.frame.size.height + 7.5)/2)]; if(newGame) { if(player1Score > player2Score) { NSLog(@"Player 1 Wins!"); } else { NSLog(@"Player 2 Wins!"); } player1Score = 0; player2Score = 0; } else { NSLog(@"Press key to serve"); } NSLog(@"Player 1: %d",player1Score); NSLog(@"Player 2: %d",player2Score); } - (void)moveRightPaddleUp { if(rightPaddle.frame.origin.y < 700 - rightPaddle.frame.size.height) { [rightPaddle setFrameOrigin:CGPointMake(rightPaddle.frame.origin.x, rightPaddle.frame.origin.y + 20)]; } } - (void)moveRightPaddleDown { if(rightPaddle.frame.origin.y > 0) { [rightPaddle setFrameOrigin:CGPointMake(rightPaddle.frame.origin.x, rightPaddle.frame.origin.y - 20)]; } } - (BOOL)acceptsFirstResponder { return YES; } - 
(void)keyDown:(NSEvent *)theEvent { if ([theEvent modifierFlags] & NSNumericPadKeyMask) { NSString *theArrow = [theEvent charactersIgnoringModifiers]; unichar keyChar = 0; if ( [theArrow length] == 0 ) { return; // reject dead keys } if ( [theArrow length] == 1 ) { keyChar = [theArrow characterAtIndex:0]; if ( keyChar == NSLeftArrowFunctionKey ) { gameState = GameStateRunning; } if ( keyChar == NSRightArrowFunctionKey ) { } if ( keyChar == NSUpArrowFunctionKey ) { [self moveRightPaddleUp]; } if ( keyChar == NSDownArrowFunctionKey ) { [self moveRightPaddleDown]; } [super keyDown:theEvent]; } } else { [super keyDown:theEvent]; } } - (void)drawRect:(NSRect)dirtyRect { } - (void)dealloc { [ball release]; [rightPaddle release]; [leftPaddle release]; [super dealloc]; } @end
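    The beep is AppKit's "key not handled" feedback: even when an arrow key is matched, the method still falls through to [super keyDown:theEvent] at the end of the block, so the event travels up the responder chain as unhandled. Forwarding to super only for keys the view does not handle silences it; a sketch of the same method restructured that way:

        - (void)keyDown:(NSEvent *)theEvent {
            NSString *characters = [theEvent charactersIgnoringModifiers];
            if ([characters length] != 1) {
                [super keyDown:theEvent];   // dead keys etc. go up the responder chain
                return;
            }
            switch ([characters characterAtIndex:0]) {
                case NSLeftArrowFunctionKey:  gameState = GameStateRunning;  break;
                case NSUpArrowFunctionKey:    [self moveRightPaddleUp];      break;
                case NSDownArrowFunctionKey:  [self moveRightPaddleDown];    break;
                default:
                    [super keyDown:theEvent]; // only unhandled keys reach super (and may beep)
                    break;
            }
        }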

    Read the article

  • UIImagePickerController, UIImage, Memory and More!

    - by Itay
    I've noticed that there are many questions about how to handle UIImage objects, especially in conjunction with UIImagePickerController and then displaying it in a view (usually a UIImageView). Here is a collection of common questions and their answers. Feel free to edit and add your own. I obviously learnt all this information from somewhere too. Various forum posts, StackOverflow answers and my own experimenting brought me to all these solutions. Credit goes to those who posted some sample code that I've since used and modified. I don't remember who you all are - but hats off to you! How Do I Select An Image From the User's Images or From the Camera? You use UIImagePickerController. The documentation for the class gives a decent overview of how one would use it, and can be found here. Basically, you create an instance of the class, which is a modal view controller, display it, and set yourself (or some class) to be the delegate. Then you'll get notified when a user selects some form of media (movie or image in 3.0 on the 3GS), and you can do whatever you want. My Delegate Was Called - How Do I Get The Media? The delegate method signature is the following: - (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info; You should put a breakpoint in the debugger to see what's in the dictionary, but you use that to extract the media. For example: UIImage* image = [info objectForKey:UIImagePickerControllerOriginalImage]; There are other keys that work as well, all in the documentation. OK, I Got The Image, But It Doesn't Have Any Geolocation Data. What gives? Unfortunately, Apple decided that we're not worthy of this information. When they load the data into the UIImage, they strip it of all the EXIF/Geolocation data. Can I Get To The Original File Representing This Image on the Disk? Nope. For security purposes, you only get the UIImage. How Can I Look At The Underlying Pixels of the UIImage? Since the UIImage is immutable, you can't look at the direct pixels. However, you can make a copy. The code to this looks something like this: UIImage* image = ...; // An image NSData* pixelData = (NSData*) CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage)); unsigned char* pixelBytes = (unsigned char *)[pixelData bytes]; // Take away the red pixel, assuming 32-bit RGBA for(int i = 0; i < [pixelData length]; i += 4) { pixelBytes[i] = 0; // red pixelBytes[i+1] = pixelBytes[i+1]; // green pixelBytes[i+2] = pixelBytes[i+2]; // blue pixelBytes[i+3] = pixelBytes[i+3]; // alpha } However, note that CGDataProviderCopyData provides you with an "immutable" reference to the data - meaning you can't change it (and you may get a BAD_ACCESS error if you do). Look at the next question if you want to see how you can modify the pixels. How Do I Modify The Pixels of the UIImage? The UIImage is immutable, meaning you can't change it. Apple posted a great article on how to get a copy of the pixels and modify them, and rather than copy and paste it here, you should just go read the article. Once you have the bitmap context as they mention in the article, you can do something similar to this to get a new UIImage with the modified pixels: CGImageRef ref = CGBitmapContextCreateImage(bitmap); UIImage* newImage = [UIImage imageWithCGImage:ref]; Do remember to release your references though, otherwise you're going to be leaking quite a bit of memory. After I Select 3 Images From The Camera, I Run Out Of Memory. Help! 
You have to remember that even though on disk these images take up only a few hundred kilobytes at most, that's because they're compressed as a PNG or JPG. When they are loaded into the UIImage, they become uncompressed. A quick over-the-envelope calculation would be: width x height x 4 = bytes in memory That's assuming 32-bit pixels. If you have 16-bit pixels (some JPGs are stored as RGBA-5551), then you'd replace the 4 with a 2. Now, images taken with the camera are 1600 x 1200 pixels, so let's do the math: 1600 x 1200 x 4 = 7,680,000 bytes = ~8 MB 8 MB is a lot, especially when you have a limit of around 24 MB for your application. That's why you run out of memory. OK, I Understand Why I Have No Memory. What Do I Do? There is never any reason to display images at their full resolution. The iPhone has a screen of 480 x 320 pixels, so you're just wasting space. If you find yourself in this situation, ask yourself the following question: Do I need the full resolution image? If the answer is yes, then you should save it to disk for later use. If the answer is no, then read the next part. Once you've decided what to do with the full-resolution image, then you need to create a smaller image to use for displaying. Many times you might even want several sizes for your image: a thumbnail, a full-size one for displaying, and the original full-resolution image. OK, I'm Hooked. How Do I Resize the Image? Unfortunately, there is no defined way how to resize an image. Also, it's important to note that when you resize it, you'll get a new image - you're not modifying the old one. There are a couple of methods to do the resizing. I'll present them both here, and explain the pros and cons of each. Method 1: Using UIKit + (UIImage*)imageWithImage:(UIImage*)image scaledToSize:(CGSize)newSize; { // Create a graphics image context UIGraphicsBeginImageContext(newSize); // Tell the old image to draw in this new context, with the desired // new size [image drawInRect:CGRectMake(0,0,newSize.width,newSize.height)]; // Get the new image from the context UIImage* newImage = UIGraphicsGetImageFromCurrentImageContext(); // End the context UIGraphicsEndImageContext(); // Return the new image. return newImage; } This method is very simple, and works great. It will also deal with the UIImageOrientation for you, meaning that you don't have to care whether the camera was sideways when the picture was taken. However, this method is not thread safe, and since thumbnailing is a relatively expensive operation (approximately ~2.5s on a 3G for a 1600 x 1200 pixel image), this is very much an operation you may want to do in the background, on a separate thread. 
Method 2: Using CoreGraphics + (UIImage*)imageWithImage:(UIImage*)sourceImage scaledToSize:(CGSize)targetSize; { CGFloat targetWidth = targetSize.width; CGFloat targetHeight = targetSize.height; CGImageRef imageRef = [sourceImage CGImage]; CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imageRef); CGColorSpaceRef colorSpaceInfo = CGImageGetColorSpace(imageRef); if (bitmapInfo == kCGImageAlphaNone) { bitmapInfo = kCGImageAlphaNoneSkipLast; } CGContextRef bitmap; if (sourceImage.imageOrientation == UIImageOrientationUp || sourceImage.imageOrientation == UIImageOrientationDown) { bitmap = CGBitmapContextCreate(NULL, targetWidth, targetHeight, CGImageGetBitsPerComponent(imageRef), CGImageGetBytesPerRow(imageRef), colorSpaceInfo, bitmapInfo); } else { bitmap = CGBitmapContextCreate(NULL, targetHeight, targetWidth, CGImageGetBitsPerComponent(imageRef), CGImageGetBytesPerRow(imageRef), colorSpaceInfo, bitmapInfo); } if (sourceImage.imageOrientation == UIImageOrientationLeft) { CGContextRotateCTM (bitmap, radians(90)); CGContextTranslateCTM (bitmap, 0, -targetHeight); } else if (sourceImage.imageOrientation == UIImageOrientationRight) { CGContextRotateCTM (bitmap, radians(-90)); CGContextTranslateCTM (bitmap, -targetWidth, 0); } else if (sourceImage.imageOrientation == UIImageOrientationUp) { // NOTHING } else if (sourceImage.imageOrientation == UIImageOrientationDown) { CGContextTranslateCTM (bitmap, targetWidth, targetHeight); CGContextRotateCTM (bitmap, radians(-180.)); } CGContextDrawImage(bitmap, CGRectMake(0, 0, targetWidth, targetHeight), imageRef); CGImageRef ref = CGBitmapContextCreateImage(bitmap); UIImage* newImage = [UIImage imageWithCGImage:ref]; CGContextRelease(bitmap); CGImageRelease(ref); return newImage; } The benefit of this method is that it is thread-safe, plus it takes care of all the small things (using correct color space and bitmap info, dealing with image orientation) that the UIKit version does. How Do I Resize and Maintain Aspect Ratio (like the AspectFill option)? 
It is very similar to the method above, and it looks like this: + (UIImage*)imageWithImage:(UIImage*)sourceImage scaledToSizeWithSameAspectRatio:(CGSize)targetSize; { CGSize imageSize = sourceImage.size; CGFloat width = imageSize.width; CGFloat height = imageSize.height; CGFloat targetWidth = targetSize.width; CGFloat targetHeight = targetSize.height; CGFloat scaleFactor = 0.0; CGFloat scaledWidth = targetWidth; CGFloat scaledHeight = targetHeight; CGPoint thumbnailPoint = CGPointMake(0.0,0.0); if (CGSizeEqualToSize(imageSize, targetSize) == NO) { CGFloat widthFactor = targetWidth / width; CGFloat heightFactor = targetHeight / height; if (widthFactor > heightFactor) { scaleFactor = widthFactor; // scale to fit height } else { scaleFactor = heightFactor; // scale to fit width } scaledWidth = width * scaleFactor; scaledHeight = height * scaleFactor; // center the image if (widthFactor > heightFactor) { thumbnailPoint.y = (targetHeight - scaledHeight) * 0.5; } else if (widthFactor < heightFactor) { thumbnailPoint.x = (targetWidth - scaledWidth) * 0.5; } } CGImageRef imageRef = [sourceImage CGImage]; CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imageRef); CGColorSpaceRef colorSpaceInfo = CGImageGetColorSpace(imageRef); if (bitmapInfo == kCGImageAlphaNone) { bitmapInfo = kCGImageAlphaNoneSkipLast; } CGContextRef bitmap; if (sourceImage.imageOrientation == UIImageOrientationUp || sourceImage.imageOrientation == UIImageOrientationDown) { bitmap = CGBitmapContextCreate(NULL, targetWidth, targetHeight, CGImageGetBitsPerComponent(imageRef), CGImageGetBytesPerRow(imageRef), colorSpaceInfo, bitmapInfo); } else { bitmap = CGBitmapContextCreate(NULL, targetHeight, targetWidth, CGImageGetBitsPerComponent(imageRef), CGImageGetBytesPerRow(imageRef), colorSpaceInfo, bitmapInfo); } // In the right or left cases, we need to switch scaledWidth and scaledHeight, // and also the thumbnail point if (sourceImage.imageOrientation == UIImageOrientationLeft) { thumbnailPoint = CGPointMake(thumbnailPoint.y, thumbnailPoint.x); CGFloat oldScaledWidth = scaledWidth; scaledWidth = scaledHeight; scaledHeight = oldScaledWidth; CGContextRotateCTM (bitmap, radians(90)); CGContextTranslateCTM (bitmap, 0, -targetHeight); } else if (sourceImage.imageOrientation == UIImageOrientationRight) { thumbnailPoint = CGPointMake(thumbnailPoint.y, thumbnailPoint.x); CGFloat oldScaledWidth = scaledWidth; scaledWidth = scaledHeight; scaledHeight = oldScaledWidth; CGContextRotateCTM (bitmap, radians(-90)); CGContextTranslateCTM (bitmap, -targetWidth, 0); } else if (sourceImage.imageOrientation == UIImageOrientationUp) { // NOTHING } else if (sourceImage.imageOrientation == UIImageOrientationDown) { CGContextTranslateCTM (bitmap, targetWidth, targetHeight); CGContextRotateCTM (bitmap, radians(-180.)); } CGContextDrawImage(bitmap, CGRectMake(thumbnailPoint.x, thumbnailPoint.y, scaledWidth, scaledHeight), imageRef); CGImageRef ref = CGBitmapContextCreateImage(bitmap); UIImage* newImage = [UIImage imageWithCGImage:ref]; CGContextRelease(bitmap); CGImageRelease(ref); return newImage; } The method we employ here is to create a bitmap with the desired size, but draw an image that is actually larger, thus maintaining the aspect ratio. So We've Got Our Scaled Images - How Do I Save Them To Disk? This is pretty simple. Remember that we want to save a compressed version to disk, and not the uncompressed pixels. 
Apple provides two functions that help us with this (documentation is here): NSData* UIImagePNGRepresentation(UIImage *image); NSData* UIImageJPEGRepresentation (UIImage *image, CGFloat compressionQuality); And if you want to use them, you'd do something like: UIImage* myThumbnail = ...; // Get some image NSData* imageData = UIImagePNGRepresentation(myThumbnail); Now we're ready to save it to disk, which is the final step (say into the documents directory): // Give a name to the file NSString* imageName = @"MyImage.png"; // Now, we have to find the documents directory so we can save it // Note that you might want to save it elsewhere, like the cache directory, // or something similar. NSArray* paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES); NSString* documentsDirectory = [paths objectAtIndex:0]; // Now we get the full path to the file NSString* fullPathToFile = [documentsDirectory stringByAppendingPathComponent:imageName]; // and then we write it out [imageData writeToFile:fullPathToFile atomically:NO]; You would repeat this for every version of the image you have. How Do I Load These Images Back Into Memory? Just look at the various UIImage initialization methods, such as +imageWithContentsOfFile: in the Apple documentation.
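    For that last step, the path built above can be handed straight back to UIImage; a minimal sketch using the fullPathToFile from the save example:

        // Rebuild the UIImage from the PNG written earlier.
        UIImage* restoredImage = [UIImage imageWithContentsOfFile:fullPathToFile];
        if (restoredImage == nil) {
            // The file is missing or not readable as an image.
        }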

    Read the article
