Precisely cropping a captured image to match the AVCaptureVideoPreviewLayer

I have a photo app that uses AVFoundation. I set up a preview layer using AVCaptureVideoPreviewLayer that takes up the top half of the screen, so while the user is lining up a shot, they can only see what is in the top half of the screen.

This works great, but when the user actually takes the photo and I try to set that photo as the layer's contents, the image comes out distorted. I did some research and realized that I need to crop the image.

All I want to do is crop the full captured image so that all that remains is exactly what the user originally saw in the top half of the screen.

I have been able to mostly accomplish this, but only by entering CGRect values by hand, and it still does not look perfect. There has to be an easier way to do this.

For the past two days I have gone through practically every post there is about cropping images, and nothing has worked.

How do I programmatically crop the captured image so that the final image corresponds exactly to what was originally seen in the preview layer?

Here is my viewDidLoad implementation:

    - (void)viewDidLoad {
        [super viewDidLoad];

        AVCaptureSession *session = [[AVCaptureSession alloc] init];
        [session setSessionPreset:AVCaptureSessionPresetPhoto];

        AVCaptureDevice *inputDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
        NSError *error = nil;
        AVCaptureDeviceInput *deviceInput = [AVCaptureDeviceInput deviceInputWithDevice:inputDevice error:&error];
        if ([session canAddInput:deviceInput]) {
            [session addInput:deviceInput];
        }

        CALayer *rootLayer = [[self view] layer];
        [rootLayer setMasksToBounds:YES];

        _previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
        [_previewLayer setFrame:CGRectMake(0, 0, rootLayer.bounds.size.width, rootLayer.bounds.size.height / 2)];
        [_previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
        [rootLayer insertSublayer:_previewLayer atIndex:0];

        _stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
        [session addOutput:_stillImageOutput];

        [session startRunning];
    }

And here is the code that runs when the user presses the button to capture a photo:

    - (IBAction)stillImageCapture {
        AVCaptureConnection *videoConnection = nil;
        for (AVCaptureConnection *connection in _stillImageOutput.connections) {
            for (AVCaptureInputPort *port in [connection inputPorts]) {
                if ([[port mediaType] isEqual:AVMediaTypeVideo]) {
                    videoConnection = connection;
                    break;
                }
            }
            if (videoConnection) {
                break;
            }
        }

        NSLog(@"about to request a capture from: %@", _stillImageOutput);
        [_stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
            if (imageDataSampleBuffer) {
                NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
                UIImage *image = [[UIImage alloc] initWithData:imageData];

                CALayer *subLayer = [CALayer layer];
                image = [self rotate:image andOrientation:image.imageOrientation];

                // Below is the crop that sort of works for me, but as you can see
                // I am manually entering values and just guessing, and it still
                // does not look perfect.
                CGRect cropRect = CGRectMake(0, 650, 3000, 2000);
                CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], cropRect);

                subLayer.contents = (id)[UIImage imageWithCGImage:imageRef].CGImage;
                subLayer.frame = _previewLayer.frame;
                [_previewLayer addSublayer:subLayer];
            }
        }];
    }

Take a look at AVCaptureVideoPreviewLayer's

 -(CGRect)metadataOutputRectOfInterestForRect:(CGRect)layerRect 

This method lets you easily convert the visible CGRect of your layer to the actual camera output's coordinate space.

One caveat: the physical camera is not mounted "top side up"; it is rotated 90 degrees clockwise. (So if you hold your iPhone with the home button on the right, the camera is actually top side up.)

Keeping this in mind, you have to convert the CGRect the method above gives you, and then crop the image down to exactly what is on screen.

Example:

    CGRect visibleLayerFrame = ...; // THE ACTUAL VISIBLE AREA IN THE LAYER FRAME
    CGRect metaRect = [self.previewView.layer metadataOutputRectOfInterestForRect:visibleLayerFrame];

    CGSize originalSize = [originalImage size];

    if (UIInterfaceOrientationIsPortrait(_snapInterfaceOrientation)) {
        // For portrait images, swap the size of the image, because
        // here the output image is actually rotated relative to what
        // you see on screen.
        CGFloat temp = originalSize.width;
        originalSize.width = originalSize.height;
        originalSize.height = temp;
    }

    // metaRect is fractional, that's why we multiply here.
    CGRect cropRect;
    cropRect.origin.x = metaRect.origin.x * originalSize.width;
    cropRect.origin.y = metaRect.origin.y * originalSize.height;
    cropRect.size.width = metaRect.size.width * originalSize.width;
    cropRect.size.height = metaRect.size.height * originalSize.height;
    cropRect = CGRectIntegral(cropRect);
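The fraction-to-pixel multiplication is plain arithmetic, so you can sanity-check it off-device. A minimal Swift sketch (the helper name and the sample numbers are hypothetical; 3264x2448 is just a typical sensor size):

```swift
import Foundation

// Hypothetical helper: converts the fractional rect returned by
// metadataOutputRectOfInterestForRect into a pixel crop rect for an
// image of the given size.
func pixelCropRect(metaRect: CGRect, imageSize: CGSize) -> CGRect {
    return CGRect(
        x: metaRect.origin.x * imageSize.width,
        y: metaRect.origin.y * imageSize.height,
        width: metaRect.size.width * imageSize.width,
        height: metaRect.size.height * imageSize.height)
}

// Example: a preview showing the middle 50% of the sensor height of a
// 3264x2448 still maps to a full-width 1224px strip starting at y = 612.
let crop = pixelCropRect(
    metaRect: CGRect(x: 0.0, y: 0.25, width: 1.0, height: 0.5),
    imageSize: CGSize(width: 3264, height: 2448))
// crop is (x: 0, y: 612, width: 3264, height: 1224)
```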

This may be a little confusing, but the way I finally understood it is this:

Hold your device with the home button on the right: you will see that the x-axis actually runs along the "height" of your iPhone, while the y-axis runs along its "width". That is why you have to swap the size for portrait images ;)
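In code, that size swap amounts to nothing more than the following sketch (function and variable names are hypothetical):

```swift
import Foundation

// A still from the camera reports a landscape size even when the photo
// was taken in portrait, because the sensor is mounted rotated 90
// degrees. Swap the axes before multiplying by the fractional metaRect.
func sizeForCropMath(reportedSize: CGSize, isPortraitCapture: Bool) -> CGSize {
    guard isPortraitCapture else { return reportedSize }
    return CGSize(width: reportedSize.height, height: reportedSize.width)
}

let landscapeStill = CGSize(width: 3264, height: 2448)
let effective = sizeForCropMath(reportedSize: landscapeStill, isPortraitCapture: true)
// effective is 2448 wide by 3264 tall, matching what you see on screen.
```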

@Cabus has a working solution, and you should upvote his answer. However, here is my own version in Swift:

    // The image returned in initialImageData will be larger than what
    // is shown in the AVCaptureVideoPreviewLayer, so we need to crop it.
    let image: UIImage = UIImage(data: initialImageData)!

    let originalSize: CGSize
    let visibleLayerFrame = self.previewView!.bounds // THE ACTUAL VISIBLE AREA IN THE LAYER FRAME

    // Calculate the fractional size that is shown in the preview.
    let metaRect: CGRect = (self.videoPreviewLayer?.metadataOutputRectOfInterestForRect(visibleLayerFrame))!

    if image.imageOrientation == UIImageOrientation.Left || image.imageOrientation == UIImageOrientation.Right {
        // For these images (which are portrait), swap the size of the
        // image, because here the output image is actually rotated
        // relative to what you see on screen.
        originalSize = CGSize(width: image.size.height, height: image.size.width)
    } else {
        originalSize = image.size
    }

    // metaRect is fractional, that's why we multiply here.
    let cropRect: CGRect = CGRectIntegral(
        CGRect(x: metaRect.origin.x * originalSize.width,
               y: metaRect.origin.y * originalSize.height,
               width: metaRect.size.width * originalSize.width,
               height: metaRect.size.height * originalSize.height))

    let finalImage: UIImage = UIImage(
        CGImage: CGImageCreateWithImageInRect(image.CGImage, cropRect)!,
        scale: 1,
        orientation: image.imageOrientation)

Here is @Erik Allen's answer above, updated for Swift 3:

    let originalSize: CGSize
    let visibleLayerFrame = self?.photoView.bounds

    // Calculate the fractional size that is shown in the preview.
    let metaRect = (self?.videoPreviewLayer?.metadataOutputRectOfInterest(for: visibleLayerFrame ?? CGRect.zero)) ?? CGRect.zero

    if image.imageOrientation == UIImageOrientation.left || image.imageOrientation == UIImageOrientation.right {
        // For these images (which are portrait), swap the size of the
        // image, because here the output image is actually rotated
        // relative to what you see on screen.
        originalSize = CGSize(width: image.size.height, height: image.size.width)
    } else {
        originalSize = image.size
    }

    let cropRect: CGRect = CGRect(x: metaRect.origin.x * originalSize.width,
                                  y: metaRect.origin.y * originalSize.height,
                                  width: metaRect.size.width * originalSize.width,
                                  height: metaRect.size.height * originalSize.height).integral

    if let finalCgImage = image.cgImage?.cropping(to: cropRect) {
        let finalImage = UIImage(cgImage: finalCgImage, scale: 1.0, orientation: image.imageOrientation)
        // Use your image...
    }