How to directly rotate a CVImageBuffer image in iOS 4 without converting to UIImage?
- by Ian Charnas
I am using OpenCV 2.2 on the iPhone to detect faces. I'm using iOS 4's AVCaptureSession to get access to the camera stream, as seen in the code that follows.
My challenge is that the video frames arrive as CVBufferRef (pointer to CVImageBuffer) objects, and they arrive in landscape orientation, 480px wide by 300px high. This is fine if you're holding the phone sideways, but when the phone is held upright I want to rotate these frames 90 degrees clockwise so that OpenCV can find the faces correctly.
I could convert the CVBufferRef to a CGImage, then to a UIImage, and then rotate, as this person is doing: Rotate CGImage taken from video frame
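That route looks roughly like the sketch below (illustration only, not code from my project: rotatedUIImageFromPixelBuffer is a made-up name, it assumes the 32BGRA pixel format I configure in the capture code further down, and the rotation direction/sign may need flipping):

-(UIImage *) rotatedUIImageFromPixelBuffer:(CVImageBufferRef)pixelBuffer {
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    size_t width  = CVPixelBufferGetWidth(pixelBuffer);
    size_t height = CVPixelBufferGetHeight(pixelBuffer);

    // Wrap the BGRA pixel data in a CGBitmapContext and pull out a CGImage
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef srcContext = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(pixelBuffer),
                                                    width, height, 8,
                                                    CVPixelBufferGetBytesPerRow(pixelBuffer), colorSpace,
                                                    kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef landscapeImage = CGBitmapContextCreateImage(srcContext);

    // Redraw the whole frame into a portrait-sized context rotated a quarter turn --
    // this full-frame CPU redraw is the expensive part
    CGContextRef dstContext = CGBitmapContextCreate(NULL, height, width, 8, 0, colorSpace,
                                                    kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGContextTranslateCTM(dstContext, height, 0);
    CGContextRotateCTM(dstContext, M_PI_2);
    CGContextDrawImage(dstContext, CGRectMake(0, 0, width, height), landscapeImage);
    CGImageRef portraitImage = CGBitmapContextCreateImage(dstContext);
    UIImage *result = [UIImage imageWithCGImage:portraitImage];

    CGImageRelease(portraitImage);
    CGImageRelease(landscapeImage);
    CGContextRelease(dstContext);
    CGContextRelease(srcContext);
    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    return result;
}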
However, that wastes a lot of CPU. I'm looking for a faster way to rotate the incoming images, ideally using the GPU for this processing if possible.
Any ideas?
Ian
Code Sample:
-(void) startCameraCapture {
    // Start up the face detector
    faceDetector = [[FaceDetector alloc] initWithCascade:@"haarcascade_frontalface_alt2" withFileExtension:@"xml"];

    // Create the AVCaptureSession
    session = [[AVCaptureSession alloc] init];

    // Create a preview layer to show the output from the camera
    AVCaptureVideoPreviewLayer *previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:session];
    previewLayer.frame = previewView.frame;
    previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    [previewView.layer addSublayer:previewLayer];

    // Get the default camera device
    AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

    // Create an AVCaptureInput with the camera device
    NSError *error = nil;
    AVCaptureInput *cameraInput = [[AVCaptureDeviceInput alloc] initWithDevice:camera error:&error];
    if (cameraInput == nil) {
        NSLog(@"Error creating camera capture: %@", error);
    }

    // Set up the output
    AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
    videoOutput.alwaysDiscardsLateVideoFrames = YES;

    // Create a queue besides the main thread queue to run the capture on
    dispatch_queue_t captureQueue = dispatch_queue_create("captureQueue", NULL);

    // Set up our delegate
    [videoOutput setSampleBufferDelegate:self queue:captureQueue];

    // Release our reference to the queue; setSampleBufferDelegate:queue: retains it,
    // so dropping our own reference here is safe
    dispatch_release(captureQueue);

    // Configure the pixel format
    videoOutput.videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                 [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA],
                                 (id)kCVPixelBufferPixelFormatTypeKey,
                                 nil];

    // ...and the size of the frames we want
    // (try AVCaptureSessionPresetLow if this is too slow)
    [session setSessionPreset:AVCaptureSessionPresetMedium];

    // If you wish to cap the frame rate to a known value, such as 10 fps, set minFrameDuration
    videoOutput.minFrameDuration = CMTimeMake(1, 10);

    // Add the input and output
    [session addInput:cameraInput];
    [session addOutput:videoOutput];

    // Start the session
    [session startRunning];
}
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    // Only run if we're not already processing an image
    if (!faceDetector.imageNeedsProcessing) {
        // Get CVImage from sample buffer
        CVImageBufferRef cvImage = CMSampleBufferGetImageBuffer(sampleBuffer);

        // Send the CVImage to the FaceDetector for later processing
        [faceDetector setImageFromCVPixelBufferRef:cvImage];

        // Trigger the image processing on the main thread
        [self performSelectorOnMainThread:@selector(processImage) withObject:nil waitUntilDone:NO];
    }
}
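
For reference, the most direct alternative I can come up with is to rotate the BGRA pixels by hand into a second CVPixelBuffer before handing the frame to the face detector. A rough sketch (rotatedPixelBuffer90CW is a made-up helper name; it needs CoreVideo.framework and is still entirely CPU-bound, which is exactly the cost I'm hoping to push onto the GPU):

#import <CoreVideo/CoreVideo.h>

// Copy a 32BGRA pixel buffer into a new buffer rotated 90 degrees clockwise.
// The caller owns the returned buffer and must CVPixelBufferRelease() it.
static CVPixelBufferRef rotatedPixelBuffer90CW(CVImageBufferRef src) {
    CVPixelBufferLockBaseAddress(src, 0);
    size_t width  = CVPixelBufferGetWidth(src);
    size_t height = CVPixelBufferGetHeight(src);
    size_t srcBytesPerRow = CVPixelBufferGetBytesPerRow(src);
    uint8_t *srcBase = (uint8_t *)CVPixelBufferGetBaseAddress(src);

    // The destination buffer has its dimensions swapped (portrait instead of landscape)
    CVPixelBufferRef dst = NULL;
    CVReturn err = CVPixelBufferCreate(kCFAllocatorDefault, height, width,
                                       kCVPixelFormatType_32BGRA, NULL, &dst);
    if (err != kCVReturnSuccess) {
        CVPixelBufferUnlockBaseAddress(src, 0);
        return NULL;
    }
    CVPixelBufferLockBaseAddress(dst, 0);
    size_t dstBytesPerRow = CVPixelBufferGetBytesPerRow(dst);
    uint8_t *dstBase = (uint8_t *)CVPixelBufferGetBaseAddress(dst);

    // Source pixel (x, y) lands at destination pixel (height - 1 - y, x) for a clockwise turn
    for (size_t y = 0; y < height; y++) {
        uint32_t *srcRow = (uint32_t *)(srcBase + y * srcBytesPerRow);
        for (size_t x = 0; x < width; x++) {
            uint32_t *dstPixel = (uint32_t *)(dstBase + x * dstBytesPerRow) + (height - 1 - y);
            *dstPixel = srcRow[x];
        }
    }

    CVPixelBufferUnlockBaseAddress(dst, 0);
    CVPixelBufferUnlockBaseAddress(src, 0);
    return dst;
}

The idea would be to call that from captureOutput:didOutputSampleBuffer:fromConnection: and pass the rotated buffer to setImageFromCVPixelBufferRef: instead of the original, releasing it once the face detector has copied what it needs. That avoids the UIImage detour, but it's still a per-pixel copy on the CPU for every frame, so I'm hoping there's a cheaper (ideally GPU-backed) way.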