Cropping an AVAsset video with AVFoundation

I am using AVCaptureMovieFileOutput to record some video. I have the preview layer displayed with AVLayerVideoGravityResizeAspectFill, which zooms in slightly. The problem I have is that the final video is larger, containing extra image content that didn't fit on the screen during the preview.

This is the preview and the resulting video:

[screenshot: preview] [screenshot: resulting video]

Is there a way to specify a CGRect that I want to cut from the video using AVAssetExportSession?
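(My code below has a commented-out setCropRectangle:atTime: call on the layer instruction; a minimal sketch of that API, with cropRect as a placeholder rectangle and layerInstruction/videoComposition/exporter assumed to be already set up, would be:)

    // Sketch: crop via the composition's layer instruction rather than the exporter itself
    CGRect cropRect = CGRectMake(0, 0, 1080, 1440);              // placeholder crop area
    [layerInstruction setCropRectangle:cropRect atTime:kCMTimeZero];
    exporter.videoComposition = videoComposition;                // exporter applies the crop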

Edit ----

When I apply a CGAffineTransformScale to the AVAssetTrack it zooms into the video, and with the AVMutableVideoComposition renderSize set to view.bounds it crops off the ends. Great, there's just one problem left: the width of the video does not stretch to the correct width; it just gets filled with black.

Edit 2 ---- The suggested question/answer is incomplete..

Some of my code:

In my - (void)captureOutput:(AVCaptureFileOutput *)captureOutput didFinishRecordingToOutputFileAtURL:(NSURL *)outputFileURL fromConnections:(NSArray *)connections error:(NSError *)error method I have this to crop and resize the video.

    - (void)flipAndSave:(NSURL *)videoURL withCompletionBlock:(void(^)(NSURL *returnURL))completionBlock
    {
        AVURLAsset *firstAsset = [AVURLAsset assetWithURL:videoURL];

        // 1 - Create AVMutableComposition object. This object will hold your AVMutableCompositionTrack instances.
        AVMutableComposition *mixComposition = [[AVMutableComposition alloc] init];
        // 2 - Video track
        AVMutableCompositionTrack *firstTrack = [mixComposition addMutableTrackWithMediaType:AVMediaTypeVideo
                                                                          preferredTrackID:kCMPersistentTrackID_Invalid];
        [firstTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, firstAsset.duration)
                            ofTrack:[[firstAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0]
                             atTime:kCMTimeZero error:nil];

        // 2.1 - Create AVMutableVideoCompositionInstruction
        AVMutableVideoCompositionInstruction *mainInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
        mainInstruction.timeRange = CMTimeRangeMake(CMTimeMakeWithSeconds(0, 600), firstAsset.duration);

        // 2.2 - Create an AVMutableVideoCompositionLayerInstruction for the first track
        AVMutableVideoCompositionLayerInstruction *firstlayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:firstTrack];
        AVAssetTrack *firstAssetTrack = [[firstAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
        UIImageOrientation firstAssetOrientation_ = UIImageOrientationUp;
        BOOL isFirstAssetPortrait_ = NO;
        CGAffineTransform firstTransform = firstAssetTrack.preferredTransform;
        if (firstTransform.a == 0 && firstTransform.b == 1.0 && firstTransform.c == -1.0 && firstTransform.d == 0) {
            firstAssetOrientation_ = UIImageOrientationRight;
            isFirstAssetPortrait_ = YES;
        }
        if (firstTransform.a == 0 && firstTransform.b == -1.0 && firstTransform.c == 1.0 && firstTransform.d == 0) {
            firstAssetOrientation_ = UIImageOrientationLeft;
            isFirstAssetPortrait_ = YES;
        }
        if (firstTransform.a == 1.0 && firstTransform.b == 0 && firstTransform.c == 0 && firstTransform.d == 1.0) {
            firstAssetOrientation_ = UIImageOrientationUp;
        }
        if (firstTransform.a == -1.0 && firstTransform.b == 0 && firstTransform.c == 0 && firstTransform.d == -1.0) {
            firstAssetOrientation_ = UIImageOrientationDown;
        }
        // [firstlayerInstruction setTransform:firstAssetTrack.preferredTransform atTime:kCMTimeZero];
        // [firstlayerInstruction setCropRectangle:self.view.bounds atTime:kCMTimeZero];
        CGFloat scale = [self getScaleFromAsset:firstAssetTrack];
        firstTransform = CGAffineTransformScale(firstTransform, scale, scale);
        [firstlayerInstruction setTransform:firstTransform atTime:kCMTimeZero];

        // 2.4 - Add instructions
        mainInstruction.layerInstructions = [NSArray arrayWithObjects:firstlayerInstruction, nil];
        AVMutableVideoComposition *mainCompositionInst = [AVMutableVideoComposition videoComposition];
        mainCompositionInst.instructions = [NSArray arrayWithObject:mainInstruction];
        mainCompositionInst.frameDuration = CMTimeMake(1, 30);
        // CGSize videoSize = firstAssetTrack.naturalSize;
        CGSize videoSize = self.view.bounds.size;
        BOOL isPortrait_ = [self isVideoPortrait:firstAsset];
        if (isPortrait_) {
            videoSize = CGSizeMake(videoSize.height, videoSize.width);
        }
        NSLog(@"%@", NSStringFromCGSize(videoSize));
        mainCompositionInst.renderSize = videoSize;

        // 3 - Audio track
        AVMutableCompositionTrack *AudioTrack = [mixComposition addMutableTrackWithMediaType:AVMediaTypeAudio
                                                                          preferredTrackID:kCMPersistentTrackID_Invalid];
        [AudioTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, firstAsset.duration)
                            ofTrack:[[firstAsset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0]
                             atTime:kCMTimeZero error:nil];

        // 4 - Get path
        NSString *outputPath = [[NSString alloc] initWithFormat:@"%@%@", NSTemporaryDirectory(), @"cutoutput.mov"];
        NSURL *outputURL = [[NSURL alloc] initFileURLWithPath:outputPath];
        NSFileManager *manager = [[NSFileManager alloc] init];
        if ([manager fileExistsAtPath:outputPath]) {
            [manager removeItemAtPath:outputPath error:nil];
        }

        // 5 - Create exporter
        AVAssetExportSession *exporter = [[AVAssetExportSession alloc] initWithAsset:mixComposition
                                                                          presetName:AVAssetExportPresetHighestQuality];
        exporter.outputURL = outputURL;
        exporter.outputFileType = AVFileTypeQuickTimeMovie;
        exporter.shouldOptimizeForNetworkUse = YES;
        exporter.videoComposition = mainCompositionInst;
        [exporter exportAsynchronouslyWithCompletionHandler:^{
            switch ([exporter status]) {
                case AVAssetExportSessionStatusFailed:
                    NSLog(@"Export failed: %@ : %@", [[exporter error] localizedDescription], [exporter error]);
                    completionBlock(nil);
                    break;
                case AVAssetExportSessionStatusCancelled:
                    NSLog(@"Export canceled");
                    completionBlock(nil);
                    break;
                default: {
                    NSURL *outputURL = exporter.outputURL;
                    dispatch_async(dispatch_get_main_queue(), ^{
                        completionBlock(outputURL);
                    });
                    break;
                }
            }
        }];
    }

Here's my interpretation of your question: you are capturing video on a device with a 4:3 screen ratio, so your AVCaptureVideoPreviewLayer is 4:3, but the video input device captures video at 16:9, so the resulting video is "larger" than what you see in the preview.

If you are just looking to crop the extra pixels not caught by the preview, then check out http://www.netwalk.be/article/record-square-video-ios . This article shows how to crop the video into a square, and it takes only a few modifications to crop to 4:3 instead. I've gone and tested this; below are the changes I made.

Once you have the AVAssetTrack for the video, you will need to calculate a new height.

    // we convert the captured height, i.e. 1080, to a 4:3 screen ratio and get the new height
    CGFloat newHeight = clipVideoTrack.naturalSize.height / 3 * 4;

Then modify these two lines using newHeight (for example, a 1080-pixel capture height becomes 1080 / 3 * 4 = 1440):

    videoComposition.renderSize = CGSizeMake(clipVideoTrack.naturalSize.height, newHeight);
    CGAffineTransform t1 = CGAffineTransformMakeTranslation(clipVideoTrack.naturalSize.height, -(clipVideoTrack.naturalSize.width - newHeight) / 2);

So what we've done here is set the renderSize to a 4:3 ratio; the exact dimensions are based on the input device. We then use a CGAffineTransform to translate the video's position so that what we saw in the AVCaptureVideoPreviewLayer is what gets rendered to our file.
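To make the arithmetic concrete, here is a sketch with hypothetical numbers (assuming a 1920x1080 capture; your input device may differ):

    // Hypothetical 16:9 capture: clipVideoTrack.naturalSize = { 1920, 1080 }
    CGFloat newHeight = 1080.0 / 3.0 * 4.0;          // 1440 -> renderSize is 1080 x 1440 (4:3)
    // The x component (1080) accompanies the 90-degree rotation from the
    // original article; the y component recenters the crop:
    // -(1920 - 1440) / 2 = -240, trimming 240 pixels from each end of the width.
    CGAffineTransform t1 = CGAffineTransformMakeTranslation(1080.0, -240.0);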

Edit: If you want to put it all together and crop a video to the device's screen ratio (3:2, 4:3, 16:9) while taking the video orientation into account, we need to add a few things.

First, here is the modified sample code with these critical alterations:

    // output file
    NSString* docFolder = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) lastObject];
    NSString* outputPath = [docFolder stringByAppendingPathComponent:@"output2.mov"];
    if ([[NSFileManager defaultManager] fileExistsAtPath:outputPath])
        [[NSFileManager defaultManager] removeItemAtPath:outputPath error:nil];

    // input file
    AVAsset* asset = [AVAsset assetWithURL:outputFileURL];
    AVMutableComposition *composition = [AVMutableComposition composition];
    [composition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];

    // input clip
    AVAssetTrack *videoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];

    // crop clip to screen ratio
    UIInterfaceOrientation orientation = [self orientationForTrack:asset];
    BOOL isPortrait = (orientation == UIInterfaceOrientationPortrait || orientation == UIInterfaceOrientationPortraitUpsideDown) ? YES : NO;
    CGFloat complimentSize = [self getComplimentSize:videoTrack.naturalSize.height];
    CGSize videoSize;
    if (isPortrait) {
        videoSize = CGSizeMake(videoTrack.naturalSize.height, complimentSize);
    } else {
        videoSize = CGSizeMake(complimentSize, videoTrack.naturalSize.height);
    }

    AVMutableVideoComposition* videoComposition = [AVMutableVideoComposition videoComposition];
    videoComposition.renderSize = videoSize;
    videoComposition.frameDuration = CMTimeMake(1, 30);

    AVMutableVideoCompositionInstruction *instruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
    instruction.timeRange = CMTimeRangeMake(kCMTimeZero, CMTimeMakeWithSeconds(60, 30));

    // rotate and position video
    AVMutableVideoCompositionLayerInstruction* transformer = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoTrack];
    CGFloat tx = (videoTrack.naturalSize.width - complimentSize) / 2;
    if (orientation == UIInterfaceOrientationPortrait || orientation == UIInterfaceOrientationLandscapeRight) {
        // invert translation
        tx *= -1;
    }
    // t1: rotate and position video since it may have been cropped to screen ratio
    CGAffineTransform t1 = CGAffineTransformTranslate(videoTrack.preferredTransform, tx, 0);
    // t2/t3: mirror video horizontally
    CGAffineTransform t2 = CGAffineTransformTranslate(t1, isPortrait ? 0 : videoTrack.naturalSize.width, isPortrait ? videoTrack.naturalSize.height : 0);
    CGAffineTransform t3 = CGAffineTransformScale(t2, isPortrait ? 1 : -1, isPortrait ? -1 : 1);
    [transformer setTransform:t3 atTime:kCMTimeZero];

    instruction.layerInstructions = [NSArray arrayWithObject:transformer];
    videoComposition.instructions = [NSArray arrayWithObject:instruction];

    // export
    exporter = [[AVAssetExportSession alloc] initWithAsset:asset presetName:AVAssetExportPresetHighestQuality];
    exporter.videoComposition = videoComposition;
    exporter.outputURL = [NSURL fileURLWithPath:outputPath];
    exporter.outputFileType = AVFileTypeQuickTimeMovie;

    [exporter exportAsynchronouslyWithCompletionHandler:^(void){
        NSLog(@"Exporting done!");

        // added export to library for testing
        ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
        if ([library videoAtPathIsCompatibleWithSavedPhotosAlbum:[NSURL fileURLWithPath:outputPath]]) {
            [library writeVideoAtPathToSavedPhotosAlbum:[NSURL fileURLWithPath:outputPath]
                                        completionBlock:^(NSURL *assetURL, NSError *error) {
                NSLog(@"Saved to album");
                if (error) {
                }
            }];
        }
    }];

What we added here is a call to get the video's new render size based on cropping its dimensions to the screen ratio. Once we crop the size down, we need to translate the position to recenter the video, so we grab its orientation and move it in the proper direction. This fixes the off-center issue we saw with UIInterfaceOrientationLandscapeLeft. Finally, CGAffineTransform t2, t3 mirrors the video horizontally.
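To make the mirroring step concrete, here is the landscape branch with hypothetical numbers (a 1920x1080 track; this is my own illustration, not part of the sample above):

    // For a landscape track with naturalSize = { 1920, 1080 }:
    // t2 shifts the frame right by its full width, then t3 scales x by -1,
    // flipping the frame around its vertical centre line so it lands back
    // inside the visible render area.
    CGAffineTransform t2 = CGAffineTransformTranslate(t1, 1920.0, 0);  // move right by width
    CGAffineTransform t3 = CGAffineTransformScale(t2, -1.0, 1.0);      // mirror horizontally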

And here are the two new methods that make this happen:

    - (CGFloat)getComplimentSize:(CGFloat)size {
        CGRect screenRect = [[UIScreen mainScreen] bounds];
        CGFloat ratio = screenRect.size.height / screenRect.size.width;

        // we have to adjust the ratio for 16:9 screens
        if (ratio == 1.775) ratio = 1.77777777777778;

        return size * ratio;
    }

    - (UIInterfaceOrientation)orientationForTrack:(AVAsset *)asset {
        UIInterfaceOrientation orientation = UIInterfaceOrientationPortrait;
        NSArray *tracks = [asset tracksWithMediaType:AVMediaTypeVideo];
        if ([tracks count] > 0) {
            AVAssetTrack *videoTrack = [tracks objectAtIndex:0];
            CGAffineTransform t = videoTrack.preferredTransform;

            // Portrait
            if (t.a == 0 && t.b == 1.0 && t.c == -1.0 && t.d == 0) {
                orientation = UIInterfaceOrientationPortrait;
            }
            // PortraitUpsideDown
            if (t.a == 0 && t.b == -1.0 && t.c == 1.0 && t.d == 0) {
                orientation = UIInterfaceOrientationPortraitUpsideDown;
            }
            // LandscapeRight
            if (t.a == 1.0 && t.b == 0 && t.c == 0 && t.d == 1.0) {
                orientation = UIInterfaceOrientationLandscapeRight;
            }
            // LandscapeLeft
            if (t.a == -1.0 && t.b == 0 && t.c == 0 && t.d == -1.0) {
                orientation = UIInterfaceOrientationLandscapeLeft;
            }
        }
        return orientation;
    }

These are pretty straightforward. The only thing to note is that in the getComplimentSize: method we have to manually adjust the ratio for 16:9 screens, since the iPhone 5+ resolution is mathematically shy of true 16:9 (its screen bounds give 568 / 320 = 1.775, whereas 16 / 9 ≈ 1.7778).

AVCaptureVideoDataOutput is a concrete subclass of AVCaptureOutput that you use to process uncompressed frames from the captured video, or to access compressed frames.

An instance of AVCaptureVideoDataOutput produces video frames that you can process using other media APIs. You access the frames through the captureOutput:didOutputSampleBuffer:fromConnection: delegate method.
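A minimal sketch of that setup (the queue label and pixel format are my own choices, and session is assumed to be an existing AVCaptureSession):

    // Minimal sketch: attach a video data output to an existing AVCaptureSession.
    - (void)addDataOutputToSession:(AVCaptureSession *)session {
        AVCaptureVideoDataOutput *dataOutput = [[AVCaptureVideoDataOutput alloc] init];
        // Request uncompressed BGRA frames.
        dataOutput.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
        // Frames are delivered on a serial queue; the label is arbitrary.
        dispatch_queue_t frameQueue = dispatch_queue_create("com.example.frames", DISPATCH_QUEUE_SERIAL);
        [dataOutput setSampleBufferDelegate:self queue:frameQueue];
        if ([session canAddOutput:dataOutput]) {
            [session addOutput:dataOutput];
        }
    }

    // Delegate callback (AVCaptureVideoDataOutputSampleBufferDelegate):
    - (void)captureOutput:(AVCaptureOutput *)captureOutput
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
           fromConnection:(AVCaptureConnection *)connection {
        CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        // ... inspect or process each uncompressed frame here ...
    }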

Configuring a session: you use a preset on the session to specify the image quality and resolution you want. A preset is a constant that identifies one of a number of possible configurations; in some cases the actual configuration is device-specific:

https://developer.apple.com/library/mac/documentation/AudioVideo/Conceptual/AVFoundationPG/Articles/04_MediaCapture.html

For the actual values these presets represent on various devices, see "Saving to a Movie File" and "Capturing Still Images".

If you want to set a size-specific configuration, you should check whether it is supported before setting it:

    if ([session canSetSessionPreset:AVCaptureSessionPreset1280x720]) {
        session.sessionPreset = AVCaptureSessionPreset1280x720;
    } else {
        // Handle the failure.
    }

I believe you're looking at this problem backwards. Instead of zooming the video stream to fit your view and then having to crop the saved video, all you really need to do is make sure that what you are looking at is the same as what you are saving.
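As a sketch of that idea (assuming letterboxing is acceptable, with session as your existing capture session): let the preview layer show the entire captured frame instead of zooming with AVLayerVideoGravityResizeAspectFill, so the preview and the saved file match and no post-crop is needed.

    AVCaptureVideoPreviewLayer *previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:session];
    previewLayer.frame = self.view.bounds;
    // Show the entire captured frame (letterboxed if the aspect ratios differ)
    // instead of zooming in to fill the screen:
    previewLayer.videoGravity = AVLayerVideoGravityResizeAspect;
    [self.view.layer addSublayer:previewLayer];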