OpenGL ES 2.0 to video on iPad/iPhone

Despite the good information already on Stack Overflow, I still ended up stuck here.

I'm trying to write an OpenGL render buffer to a video on an iPad 2 (running iOS 4.3). This is exactly what I'm attempting:

A) Set up an AVAssetWriterInputPixelBufferAdaptor

  1. Create an AVAssetWriter that points to the video file

  2. Set up an AVAssetWriterInput with the appropriate settings

  3. Set up an AVAssetWriterInputPixelBufferAdaptor to add data to the video file

B) Write data to the video file using the AVAssetWriterInputPixelBufferAdaptor

  1. Render the OpenGL scene to the screen

  2. Grab the OpenGL buffer via glReadPixels

  3. Create a CVPixelBufferRef from the OpenGL data

  4. Append that pixel buffer to the AVAssetWriterInputPixelBufferAdaptor using its appendPixelBuffer method

However, I'm having problems. My current strategy is to set up the AVAssetWriterInputPixelBufferAdaptor when a button is pressed. Once the adaptor is valid, I set a flag telling the EAGLView to create a pixel buffer and append it to the video file via appendPixelBuffer for a given number of frames.

Right now my code crashes as it tries to append the second pixel buffer, giving me the following error:

-[__NSCFDictionary appendPixelBuffer:withPresentationTime:]: unrecognized selector sent to instance 0x131db0 

Here is my AVAssetWriter setup code (much of it is based on Rudy Aramayo's code, which works for normal images but isn't set up for textures):

    - (void) testVideoWriter {

        //initialize global info
        MOVIE_NAME = @"Documents/Movie.mov";
        CGSize size = CGSizeMake(480, 320);
        frameLength = CMTimeMake(1, 5);
        currentTime = kCMTimeZero;
        currentFrame = 0;

        NSString *MOVIE_PATH = [NSHomeDirectory() stringByAppendingPathComponent:MOVIE_NAME];
        NSError *error = nil;

        unlink([betaCompressionDirectory UTF8String]);

        videoWriter = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:betaCompressionDirectory]
                                                fileType:AVFileTypeQuickTimeMovie
                                                   error:&error];

        NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                       AVVideoCodecH264, AVVideoCodecKey,
                                       [NSNumber numberWithInt:size.width], AVVideoWidthKey,
                                       [NSNumber numberWithInt:size.height], AVVideoHeightKey,
                                       nil];

        writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                         outputSettings:videoSettings];
        //writerInput.expectsMediaDataInRealTime = NO;

        NSDictionary *sourcePixelBufferAttributesDictionary =
            [NSDictionary dictionaryWithObjectsAndKeys:
             [NSNumber numberWithInt:kCVPixelFormatType_32BGRA], kCVPixelBufferPixelFormatTypeKey,
             nil];

        adaptor = [AVAssetWriterInputPixelBufferAdaptor
                   assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
                   sourcePixelBufferAttributes:sourcePixelBufferAttributesDictionary];
        [adaptor retain];

        [videoWriter addInput:writerInput];

        [videoWriter startWriting];
        [videoWriter startSessionAtSourceTime:kCMTimeZero];

        VIDEO_WRITER_IS_READY = true;
    }

OK, now that my videoWriter and adaptor are set up, I tell my OpenGL renderer to create a pixel buffer for every frame:

    - (void) captureScreenVideo {

        if (!writerInput.readyForMoreMediaData) {
            return;
        }

        CGSize esize = CGSizeMake(eagl.backingWidth, eagl.backingHeight);
        NSInteger myDataLength = esize.width * esize.height * 4;
        GLuint *buffer = (GLuint *) malloc(myDataLength);
        glReadPixels(0, 0, esize.width, esize.height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

        CVPixelBufferRef pixel_buffer = NULL;
        CVPixelBufferCreateWithBytes(NULL, esize.width, esize.height, kCVPixelFormatType_32BGRA,
                                     buffer, 4 * esize.width, NULL, 0, NULL, &pixel_buffer);

        /* DON'T FREE THIS BEFORE USING pixel_buffer! */ //free(buffer);

        if (![adaptor appendPixelBuffer:pixel_buffer withPresentationTime:currentTime]) {
            NSLog(@"FAIL");
        } else {
            NSLog(@"Success:%d", currentFrame);
            currentTime = CMTimeAdd(currentTime, frameLength);
        }

        free(buffer);
        CVPixelBufferRelease(pixel_buffer);

        currentFrame++;

        if (currentFrame > MAX_FRAMES) {
            VIDEO_WRITER_IS_READY = false;
            [writerInput markAsFinished];
            [videoWriter finishWriting];
            [videoWriter release];
            [self moveVideoToSavedPhotos];
        }
    }

Finally, I move the video to the camera roll:

    - (void) moveVideoToSavedPhotos {
        ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
        NSString *localVid = [NSHomeDirectory() stringByAppendingPathComponent:MOVIE_NAME];
        NSURL *fileURL = [NSURL fileURLWithPath:localVid];

        [library writeVideoAtPathToSavedPhotosAlbum:fileURL
                                    completionBlock:^(NSURL *assetURL, NSError *error) {
            if (error) {
                NSLog(@"%@: Error saving context: %@", [self class], [error localizedDescription]);
            }
        }];
        [library release];
    }

However, as I said, I'm crashing in the call to appendPixelBuffer.

Sorry for posting so much code, but I really don't know what I'm doing wrong. It seems like it should be trivial to adapt a project that writes images to a video, yet I can't take the pixel buffer I create via glReadPixels and append it. It's driving me crazy! If anyone has any advice or a working code sample of OpenGL to video, that would be amazing... Thanks!

Building on the code above, I just got something like this working in my open source GPUImage framework, so I thought I'd share my working solution. In my case, I was able to use a pixel buffer pool, as suggested by Srikumar, instead of manually creating a pixel buffer for each frame.

I first configure the movie to be recorded:

    NSError *error = nil;

    assetWriter = [[AVAssetWriter alloc] initWithURL:movieURL fileType:AVFileTypeAppleM4V error:&error];
    if (error != nil)
    {
        NSLog(@"Error: %@", error);
    }

    NSMutableDictionary *outputSettings = [[NSMutableDictionary alloc] init];
    [outputSettings setObject:AVVideoCodecH264 forKey:AVVideoCodecKey];
    [outputSettings setObject:[NSNumber numberWithInt:videoSize.width] forKey:AVVideoWidthKey];
    [outputSettings setObject:[NSNumber numberWithInt:videoSize.height] forKey:AVVideoHeightKey];

    assetWriterVideoInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:outputSettings];
    assetWriterVideoInput.expectsMediaDataInRealTime = YES;

    // You need to use BGRA for the video in order to get realtime encoding. I use a color-swizzling shader
    // to line up glReadPixels' normal RGBA output with the movie input's BGRA.
    NSDictionary *sourcePixelBufferAttributesDictionary = [NSDictionary dictionaryWithObjectsAndKeys:
                                                           [NSNumber numberWithInt:kCVPixelFormatType_32BGRA], kCVPixelBufferPixelFormatTypeKey,
                                                           [NSNumber numberWithInt:videoSize.width], kCVPixelBufferWidthKey,
                                                           [NSNumber numberWithInt:videoSize.height], kCVPixelBufferHeightKey,
                                                           nil];

    assetWriterPixelBufferInput = [AVAssetWriterInputPixelBufferAdaptor
                                   assetWriterInputPixelBufferAdaptorWithAssetWriterInput:assetWriterVideoInput
                                   sourcePixelBufferAttributes:sourcePixelBufferAttributesDictionary];

    [assetWriter addInput:assetWriterVideoInput];

I then use this code to grab each rendered frame using glReadPixels():

    CVPixelBufferRef pixel_buffer = NULL;

    CVReturn status = CVPixelBufferPoolCreatePixelBuffer(NULL, [assetWriterPixelBufferInput pixelBufferPool], &pixel_buffer);
    if ((pixel_buffer == NULL) || (status != kCVReturnSuccess))
    {
        return;
    }
    else
    {
        CVPixelBufferLockBaseAddress(pixel_buffer, 0);
        GLubyte *pixelBufferData = (GLubyte *)CVPixelBufferGetBaseAddress(pixel_buffer);
        glReadPixels(0, 0, videoSize.width, videoSize.height, GL_RGBA, GL_UNSIGNED_BYTE, pixelBufferData);
    }

    // May need to add a check here, because if two consecutive times with the same value are added to the movie,
    // it aborts recording
    CMTime currentTime = CMTimeMakeWithSeconds([[NSDate date] timeIntervalSinceDate:startTime], 120);

    if (![assetWriterPixelBufferInput appendPixelBuffer:pixel_buffer withPresentationTime:currentTime])
    {
        NSLog(@"Problem appending pixel buffer at time: %lld", currentTime.value);
    }
    else
    {
        // NSLog(@"Recorded pixel buffer at time: %lld", currentTime.value);
    }
    CVPixelBufferUnlockBaseAddress(pixel_buffer, 0);

    CVPixelBufferRelease(pixel_buffer);

One thing I noticed is that if I tried to append two pixel buffers with the same integer time value (in the timescale provided), the entire recording would fail and the input would never take another pixel buffer. Similarly, if I tried to append a pixel buffer after retrieval from the pool failed, it would abort the recording. Hence the early bail-out in the code above.
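A minimal guard against that duplicate-timestamp failure might look like this (a sketch, not the code above; lastFrameTime is assumed to be an instance variable of type CMTime initialized to kCMTimeInvalid, and the 120 timescale matches the snippet above):

    // Skip any frame whose presentation time would repeat the previous one,
    // since appending two buffers with identical times aborts the recording.
    CMTime frameTime = CMTimeMakeWithSeconds([[NSDate date] timeIntervalSinceDate:startTime], 120);
    if (CMTIME_IS_VALID(lastFrameTime) && CMTimeCompare(frameTime, lastFrameTime) == 0)
    {
        return; // same integer time value as the last appended frame; drop it
    }
    lastFrameTime = frameTime;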

In addition to the code above, I use a color-swizzling shader to convert the RGBA rendering of my OpenGL ES scene to BGRA so that the AVAssetWriter can encode it quickly. With this, I'm able to record 640x480 video at 30 FPS on an iPhone 4.
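For reference, a swizzling shader of that kind can be as simple as the following sketch (the uniform and varying names are illustrative, not the exact ones used in GPUImage):

    // Fragment shader (GLSL ES) kept as a C string for glShaderSource():
    // sample the RGBA scene texture and emit its components in BGRA order.
    static const char *kColorSwizzlingFragmentShader =
        "varying highp vec2 textureCoordinate;\n"
        "uniform sampler2D inputImageTexture;\n"
        "void main()\n"
        "{\n"
        "    gl_FragColor = texture2D(inputImageTexture, textureCoordinate).bgra;\n"
        "}\n";

The scene is rendered through this pass into the framebuffer that gets read back, so glReadPixels already returns bytes in the BGRA order the pixel buffer expects.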

Again, all of this code can be found in the GPUImage repository, under the GPUImageMovieWriter class.

Looks like a few things to do here –

  1. According to the documentation, it looks like the recommended way to create a pixel buffer is to use CVPixelBufferPoolCreatePixelBuffer on the adaptor.pixelBufferPool.
  2. The buffer can then be filled by locking it with CVPixelBufferLockBaseAddress, getting its address with CVPixelBufferGetBaseAddress, and unlocking the memory with CVPixelBufferUnlockBaseAddress before passing it to the adaptor.
  3. The pixel buffer can be passed to the input when writerInput.readyForMoreMediaData is YES. This means "wait until it is ready". A usleep until it becomes YES works, but you can also use key-value observing (see the sketch after this list).
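As a rough illustration of point 3 (a sketch only, assuming the writerInput, adaptor, pixel_buffer and currentTime variables from the question's code):

    // Poll until the writer input can accept another buffer. Key-value observing
    // readyForMoreMediaData, or requestMediaDataWhenReadyOnQueue:usingBlock:, is the
    // cleaner alternative to this busy wait.
    while (!writerInput.readyForMoreMediaData) {
        usleep(10000); // sleep 10 ms, then check again
    }
    [adaptor appendPixelBuffer:pixel_buffer withPresentationTime:currentTime];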

The rest of the stuff is fine. With just this much, the original code produced a playable video file.

"In case anyone stumbles across this, I finally figured it out and now understand it a bit better than before. I had an error in the code above where I was freeing the data buffer filled from glReadPixels before calling appendPixelBuffer. That is, I thought it was safe to free it since I had already created the CVPixelBufferRef. I've edited the code above so the pixel buffer actually has data in it! – Angus Forbes Jun 28 '11 at 5:58"

This is the real cause of your crash; I ran into this problem too. Do not free the buffer even though you have already created the CVPixelBufferRef from it.
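If you would rather not track when it is safe to free the bytes yourself, one option (a sketch, not the code from the question) is to give CVPixelBufferCreateWithBytes a release callback and let Core Video free them once the pixel buffer is done with them:

    // Called by Core Video once the pixel buffer no longer needs the bytes.
    static void FreeGLPixelBytes(void *releaseRefCon, const void *baseAddress)
    {
        free((void *)baseAddress);
    }

    // ...in the capture method, after glReadPixels() has filled 'buffer':
    CVPixelBufferRef pixel_buffer = NULL;
    CVPixelBufferCreateWithBytes(kCFAllocatorDefault, esize.width, esize.height,
                                 kCVPixelFormatType_32BGRA, buffer, 4 * esize.width,
                                 FreeGLPixelBytes, NULL, NULL, &pixel_buffer);
    // ...append pixel_buffer, then CVPixelBufferRelease(pixel_buffer);
    // note there is no manual free(buffer) in this variant.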

This looks like incorrect memory management. The fact that the error says the message was sent to __NSCFDictionary rather than an AVAssetWriterInputPixelBufferAdaptor is highly suspicious.

Why do you need to manually retain the adaptor? That looks hacky, given that Cocoa Touch is fully ARC these days.

That's a starting point for pinning down the memory problem.

From your error message -[__NSCFDictionary appendPixelBuffer:withPresentationTime:]: unrecognized selector sent to instance 0x131db0, it looks like your pixelBufferAdaptor was released and its memory is now occupied by a dictionary.
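Under manual reference counting (which the code in the question uses), one way to make sure the adaptor survives for the whole recording is to store it in a retained property instead of a bare ivar plus a stray retain. A sketch with an illustrative class name:

    // Hypothetical recorder object owning the writer objects for the lifetime
    // of the recording (MRC shown; under ARC, strong properties do the same).
    @interface MyScreenRecorder : NSObject
    @property (nonatomic, retain) AVAssetWriter *videoWriter;
    @property (nonatomic, retain) AVAssetWriterInput *writerInput;
    @property (nonatomic, retain) AVAssetWriterInputPixelBufferAdaptor *adaptor;
    @end

    // ...during setup:
    self.adaptor = [AVAssetWriterInputPixelBufferAdaptor
        assetWriterInputPixelBufferAdaptorWithAssetWriterInput:self.writerInput
        sourcePixelBufferAttributes:sourcePixelBufferAttributesDictionary];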

The only code I've gotten to work for this is at:

https://demonicactivity.blogspot.com/2016/11/tech-serious-ios-developers-use-every.html

    // [_context presentRenderbuffer:GL_RENDERBUFFER];

    dispatch_async(dispatch_get_main_queue(), ^{
        @autoreleasepool {
            // To capture the output to an OpenGL render buffer...
            NSInteger myDataLength = _backingWidth * _backingHeight * 4;
            GLubyte *buffer = (GLubyte *) malloc(myDataLength);
            glPixelStorei(GL_UNPACK_ALIGNMENT, 8);
            glReadPixels(0, 0, _backingWidth, _backingHeight, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

            // To swap the pixel buffer to a CoreGraphics context (as a CGImage)
            CGDataProviderRef provider = NULL;
            CGColorSpaceRef colorSpaceRef = NULL;
            CGImageRef imageRef = NULL;
            CVPixelBufferRef pixelBuffer = NULL;
            @try {
                provider = CGDataProviderCreateWithData(NULL, buffer, myDataLength, &releaseDataCallback);
                int bitsPerComponent = 8;
                int bitsPerPixel = 32;
                int bytesPerRow = 4 * _backingWidth;
                colorSpaceRef = CGColorSpaceCreateDeviceRGB();
                CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
                CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
                imageRef = CGImageCreate(_backingWidth, _backingHeight, bitsPerComponent, bitsPerPixel, bytesPerRow,
                                         colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
            } @catch (NSException *exception) {
                NSLog(@"Exception: %@", [exception reason]);
            } @finally {
                if (imageRef) {
                    // To convert the CGImage to a pixel buffer (for writing to a file using AVAssetWriter)
                    pixelBuffer = [CVCGImageUtil pixelBufferFromCGImage:imageRef];
                    // To verify the integrity of the pixel buffer (by converting it back to a CGImage,
                    // and then displaying it in a layer)
                    imageLayer.contents = (__bridge id)[CVCGImageUtil cgImageFromPixelBuffer:pixelBuffer context:_ciContext];
                }
                CGDataProviderRelease(provider);
                CGColorSpaceRelease(colorSpaceRef);
                CGImageRelease(imageRef);
            }
        }
    });

。 。 。

The callback that frees the data in the CGDataProvider instance:

    static void releaseDataCallback (void *info, const void *data, size_t size) {
        free((void *)data);
    }

The CVCGImageUtil class interface and implementation files, respectively, are:

    @import Foundation;
    @import CoreMedia;
    @import CoreGraphics;
    @import QuartzCore;
    @import CoreImage;
    @import UIKit;

    @interface CVCGImageUtil : NSObject

    + (CGImageRef)cgImageFromPixelBuffer:(CVPixelBufferRef)pixelBuffer context:(CIContext *)context;

    + (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image;

    + (CMSampleBufferRef)sampleBufferFromCGImage:(CGImageRef)image;

    @end

    #import "CVCGImageUtil.h"

    @implementation CVCGImageUtil

    + (CGImageRef)cgImageFromPixelBuffer:(CVPixelBufferRef)pixelBuffer context:(CIContext *)context
    {
        // CVPixelBuffer to CoreImage
        CIImage *image = [CIImage imageWithCVPixelBuffer:pixelBuffer];
        image = [image imageByApplyingTransform:CGAffineTransformMakeRotation(M_PI)];
        CGPoint origin = [image extent].origin;
        image = [image imageByApplyingTransform:CGAffineTransformMakeTranslation(-origin.x, -origin.y)];

        // CoreImage to CGImage via CoreImage context
        CGImageRef cgImage = [context createCGImage:image fromRect:[image extent]];

        // CGImage to UIImage (OPTIONAL)
        //UIImage *uiImage = [UIImage imageWithCGImage:cgImage];
        //return (CGImageRef)uiImage.CGImage;

        return cgImage;
    }

    + (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image
    {
        CGSize frameSize = CGSizeMake(CGImageGetWidth(image), CGImageGetHeight(image));
        NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                                 [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                                 [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                                 nil];
        CVPixelBufferRef pxbuffer = NULL;

        CVReturn status = CVPixelBufferCreate(
            kCFAllocatorDefault, frameSize.width, frameSize.height,
            kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef)options, &pxbuffer);
        NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);

        CVPixelBufferLockBaseAddress(pxbuffer, 0);
        void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);

        CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef context = CGBitmapContextCreate(
            pxdata, frameSize.width, frameSize.height,
            8, CVPixelBufferGetBytesPerRow(pxbuffer), rgbColorSpace,
            (CGBitmapInfo)kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);

        CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image), CGImageGetHeight(image)), image);
        CGColorSpaceRelease(rgbColorSpace);
        CGContextRelease(context);

        CVPixelBufferUnlockBaseAddress(pxbuffer, 0);

        return pxbuffer;
    }

    + (CMSampleBufferRef)sampleBufferFromCGImage:(CGImageRef)image
    {
        CVPixelBufferRef pixelBuffer = [CVCGImageUtil pixelBufferFromCGImage:image];
        CMSampleBufferRef newSampleBuffer = NULL;
        CMSampleTimingInfo timimgInfo = kCMTimingInfoInvalid;
        CMVideoFormatDescriptionRef videoInfo = NULL;
        CMVideoFormatDescriptionCreateForImageBuffer(NULL, pixelBuffer, &videoInfo);
        CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault,
                                           pixelBuffer,
                                           true,
                                           NULL,
                                           NULL,
                                           videoInfo,
                                           &timimgInfo,
                                           &newSampleBuffer);
        return newSampleBuffer;
    }

    @end

That answers part B of your question. Part A follows in a separate answer...

I've never used this code for anything other than reading and writing video files on the iPhone. In your implementation, you just need to replace the calls in the processFrame method, found at the end of the implementation, with calls to whatever method you pass the pixel buffer to as a parameter, or otherwise modify that method to return a pixel buffer generated like the sample code above. It's basic, so you should be fine:

 // // ExportVideo.h // ChromaFilterTest // // Created by James Alan Bush on 10/30/16. // Copyright © 2016 James Alan Bush. All rights reserved. // #import <Foundation/Foundation.h> #import <AVFoundation/AVFoundation.h> #import <CoreMedia/CoreMedia.h> #import "GLKitView.h" @interface ExportVideo : NSObject { AVURLAsset *_asset; AVAssetReader *_reader; AVAssetWriter *_writer; NSString *_outputURL; NSURL *_outURL; AVAssetReaderTrackOutput *_readerAudioOutput; AVAssetWriterInput *_writerAudioInput; AVAssetReaderTrackOutput *_readerVideoOutput; AVAssetWriterInput *_writerVideoInput; CVPixelBufferRef _currentBuffer; dispatch_queue_t _mainSerializationQueue; dispatch_queue_t _rwAudioSerializationQueue; dispatch_queue_t _rwVideoSerializationQueue; dispatch_group_t _dispatchGroup; BOOL _cancelled; BOOL _audioFinished; BOOL _videoFinished; AVAssetWriterInputPixelBufferAdaptor *_pixelBufferAdaptor; } @property (readwrite, retain) NSURL *url; @property (readwrite, retain) GLKitView *renderer; - (id)initWithURL:(NSURL *)url usingRenderer:(GLKitView *)renderer; - (void)startProcessing; @end // // ExportVideo.m // ChromaFilterTest // // Created by James Alan Bush on 10/30/16. // Copyright © 2016 James Alan Bush. All rights reserved. // #import "ExportVideo.h" #import "GLKitView.h" @implementation ExportVideo @synthesize url = _url; - (id)initWithURL:(NSURL *)url usingRenderer:(GLKitView *)renderer { NSLog(@"ExportVideo"); if (!(self = [super init])) { return nil; } self.url = url; self.renderer = renderer; NSString *serializationQueueDescription = [NSString stringWithFormat:@"%@ serialization queue", self]; _mainSerializationQueue = dispatch_queue_create([serializationQueueDescription UTF8String], NULL); NSString *rwAudioSerializationQueueDescription = [NSString stringWithFormat:@"%@ rw audio serialization queue", self]; _rwAudioSerializationQueue = dispatch_queue_create([rwAudioSerializationQueueDescription UTF8String], NULL); NSString *rwVideoSerializationQueueDescription = [NSString stringWithFormat:@"%@ rw video serialization queue", self]; _rwVideoSerializationQueue = dispatch_queue_create([rwVideoSerializationQueueDescription UTF8String], NULL); return self; } - (void)startProcessing { NSDictionary *inputOptions = [NSDictionary dictionaryWithObject:[NSNumber numberWithBool:YES] forKey:AVURLAssetPreferPreciseDurationAndTimingKey]; _asset = [[AVURLAsset alloc] initWithURL:self.url options:inputOptions]; NSLog(@"URL: %@", self.url); _cancelled = NO; [_asset loadValuesAsynchronouslyForKeys:[NSArray arrayWithObject:@"tracks"] completionHandler: ^{ dispatch_async(_mainSerializationQueue, ^{ if (_cancelled) return; BOOL success = YES; NSError *localError = nil; success = ([_asset statusOfValueForKey:@"tracks" error:&localError] == AVKeyValueStatusLoaded); if (success) { NSFileManager *fm = [NSFileManager defaultManager]; NSString *localOutputPath = [self.url path]; if ([fm fileExistsAtPath:localOutputPath]) //success = [fm removeItemAtPath:localOutputPath error:&localError]; success = TRUE; } if (success) success = [self setupAssetReaderAndAssetWriter:&localError]; if (success) success = [self startAssetReaderAndWriter:&localError]; if (!success) [self readingAndWritingDidFinishSuccessfully:success withError:localError]; }); }]; } - (BOOL)setupAssetReaderAndAssetWriter:(NSError **)outError { // Create and initialize the asset reader. 
_reader = [[AVAssetReader alloc] initWithAsset:_asset error:outError]; BOOL success = (_reader != nil); if (success) { // If the asset reader was successfully initialized, do the same for the asset writer. NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES); _outputURL = paths[0]; NSFileManager *manager = [NSFileManager defaultManager]; [manager createDirectoryAtPath:_outputURL withIntermediateDirectories:YES attributes:nil error:nil]; _outputURL = [_outputURL stringByAppendingPathComponent:@"output.mov"]; [manager removeItemAtPath:_outputURL error:nil]; _outURL = [NSURL fileURLWithPath:_outputURL]; _writer = [[AVAssetWriter alloc] initWithURL:_outURL fileType:AVFileTypeQuickTimeMovie error:outError]; success = (_writer != nil); } if (success) { // If the reader and writer were successfully initialized, grab the audio and video asset tracks that will be used. AVAssetTrack *assetAudioTrack = nil, *assetVideoTrack = nil; NSArray *audioTracks = [_asset tracksWithMediaType:AVMediaTypeAudio]; if ([audioTracks count] > 0) assetAudioTrack = [audioTracks objectAtIndex:0]; NSArray *videoTracks = [_asset tracksWithMediaType:AVMediaTypeVideo]; if ([videoTracks count] > 0) assetVideoTrack = [videoTracks objectAtIndex:0]; if (assetAudioTrack) { // If there is an audio track to read, set the decompression settings to Linear PCM and create the asset reader output. NSDictionary *decompressionAudioSettings = @{ AVFormatIDKey : [NSNumber numberWithUnsignedInt:kAudioFormatLinearPCM] }; _readerAudioOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:assetAudioTrack outputSettings:decompressionAudioSettings]; [_reader addOutput:_readerAudioOutput]; // Then, set the compression settings to 128kbps AAC and create the asset writer input. AudioChannelLayout stereoChannelLayout = { .mChannelLayoutTag = kAudioChannelLayoutTag_Stereo, .mChannelBitmap = 0, .mNumberChannelDescriptions = 0 }; NSData *channelLayoutAsData = [NSData dataWithBytes:&stereoChannelLayout length:offsetof(AudioChannelLayout, mChannelDescriptions)]; NSDictionary *compressionAudioSettings = @{ AVFormatIDKey : [NSNumber numberWithUnsignedInt:kAudioFormatMPEG4AAC], AVEncoderBitRateKey : [NSNumber numberWithInteger:128000], AVSampleRateKey : [NSNumber numberWithInteger:44100], AVChannelLayoutKey : channelLayoutAsData, AVNumberOfChannelsKey : [NSNumber numberWithUnsignedInteger:2] }; _writerAudioInput = [AVAssetWriterInput assetWriterInputWithMediaType:[assetAudioTrack mediaType] outputSettings:compressionAudioSettings]; [_writer addInput:_writerAudioInput]; } if (assetVideoTrack) { // If there is a video track to read, set the decompression settings for YUV and create the asset reader output. NSDictionary *decompressionVideoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : [NSNumber numberWithUnsignedInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange], (id)kCVPixelBufferIOSurfacePropertiesKey : [NSDictionary dictionary] }; _readerVideoOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:assetVideoTrack outputSettings:decompressionVideoSettings]; [_reader addOutput:_readerVideoOutput]; CMFormatDescriptionRef formatDescription = NULL; // Grab the video format descriptions from the video track and grab the first one if it exists. 
NSArray *formatDescriptions = [assetVideoTrack formatDescriptions]; if ([formatDescriptions count] > 0) formatDescription = (__bridge CMFormatDescriptionRef)[formatDescriptions objectAtIndex:0]; CGSize trackDimensions = { .width = 0.0, .height = 0.0, }; // If the video track had a format description, grab the track dimensions from there. Otherwise, grab them direcly from the track itself. if (formatDescription) trackDimensions = CMVideoFormatDescriptionGetPresentationDimensions(formatDescription, false, false); else trackDimensions = [assetVideoTrack naturalSize]; NSDictionary *compressionSettings = nil; // If the video track had a format description, attempt to grab the clean aperture settings and pixel aspect ratio used by the video. if (formatDescription) { NSDictionary *cleanAperture = nil; NSDictionary *pixelAspectRatio = nil; CFDictionaryRef cleanApertureFromCMFormatDescription = CMFormatDescriptionGetExtension(formatDescription, kCMFormatDescriptionExtension_CleanAperture); if (cleanApertureFromCMFormatDescription) { cleanAperture = @{ AVVideoCleanApertureWidthKey : (id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureWidth), AVVideoCleanApertureHeightKey : (id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureHeight), AVVideoCleanApertureHorizontalOffsetKey : (id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureHorizontalOffset), AVVideoCleanApertureVerticalOffsetKey : (id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureVerticalOffset) }; } CFDictionaryRef pixelAspectRatioFromCMFormatDescription = CMFormatDescriptionGetExtension(formatDescription, kCMFormatDescriptionExtension_PixelAspectRatio); if (pixelAspectRatioFromCMFormatDescription) { pixelAspectRatio = @{ AVVideoPixelAspectRatioHorizontalSpacingKey : (id)CFDictionaryGetValue(pixelAspectRatioFromCMFormatDescription, kCMFormatDescriptionKey_PixelAspectRatioHorizontalSpacing), AVVideoPixelAspectRatioVerticalSpacingKey : (id)CFDictionaryGetValue(pixelAspectRatioFromCMFormatDescription, kCMFormatDescriptionKey_PixelAspectRatioVerticalSpacing) }; } // Add whichever settings we could grab from the format description to the compression settings dictionary. if (cleanAperture || pixelAspectRatio) { NSMutableDictionary *mutableCompressionSettings = [NSMutableDictionary dictionary]; if (cleanAperture) [mutableCompressionSettings setObject:cleanAperture forKey:AVVideoCleanApertureKey]; if (pixelAspectRatio) [mutableCompressionSettings setObject:pixelAspectRatio forKey:AVVideoPixelAspectRatioKey]; compressionSettings = mutableCompressionSettings; } } // Create the video settings dictionary for H.264. NSMutableDictionary *videoSettings = (NSMutableDictionary *) @{ AVVideoCodecKey : AVVideoCodecH264, AVVideoWidthKey : [NSNumber numberWithDouble:trackDimensions.width], AVVideoHeightKey : [NSNumber numberWithDouble:trackDimensions.height] }; // Put the compression settings into the video settings dictionary if we were able to grab them. if (compressionSettings) [videoSettings setObject:compressionSettings forKey:AVVideoCompressionPropertiesKey]; // Create the asset writer input and add it to the asset writer. 
_writerVideoInput = [AVAssetWriterInput assetWriterInputWithMediaType:[assetVideoTrack mediaType] outputSettings:videoSettings]; NSDictionary *pixelBufferAdaptorSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange), (id)kCVPixelBufferIOSurfacePropertiesKey : [NSDictionary dictionary], (id)kCVPixelBufferWidthKey : [NSNumber numberWithDouble:trackDimensions.width], (id)kCVPixelBufferHeightKey : [NSNumber numberWithDouble:trackDimensions.height] }; _pixelBufferAdaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:_writerVideoInput sourcePixelBufferAttributes:pixelBufferAdaptorSettings]; [_writer addInput:_writerVideoInput]; } } return success; } - (BOOL)startAssetReaderAndWriter:(NSError **)outError { BOOL success = YES; // Attempt to start the asset reader. success = [_reader startReading]; if (!success) { *outError = [_reader error]; NSLog(@"Reader error"); } if (success) { // If the reader started successfully, attempt to start the asset writer. success = [_writer startWriting]; if (!success) { *outError = [_writer error]; NSLog(@"Writer error"); } } if (success) { // If the asset reader and writer both started successfully, create the dispatch group where the reencoding will take place and start a sample-writing session. _dispatchGroup = dispatch_group_create(); [_writer startSessionAtSourceTime:kCMTimeZero]; _audioFinished = NO; _videoFinished = NO; if (_writerAudioInput) { // If there is audio to reencode, enter the dispatch group before beginning the work. dispatch_group_enter(_dispatchGroup); // Specify the block to execute when the asset writer is ready for audio media data, and specify the queue to call it on. [_writerAudioInput requestMediaDataWhenReadyOnQueue:_rwAudioSerializationQueue usingBlock:^{ // Because the block is called asynchronously, check to see whether its task is complete. if (_audioFinished) return; BOOL completedOrFailed = NO; // If the task isn't complete yet, make sure that the input is actually ready for more media data. while ([_writerAudioInput isReadyForMoreMediaData] && !completedOrFailed) { // Get the next audio sample buffer, and append it to the output file. CMSampleBufferRef sampleBuffer = [_readerAudioOutput copyNextSampleBuffer]; if (sampleBuffer != NULL) { BOOL success = [_writerAudioInput appendSampleBuffer:sampleBuffer]; CFRelease(sampleBuffer); sampleBuffer = NULL; completedOrFailed = !success; } else { completedOrFailed = YES; } } if (completedOrFailed) { // Mark the input as finished, but only if we haven't already done so, and then leave the dispatch group (since the audio work has finished). BOOL oldFinished = _audioFinished; _audioFinished = YES; if (oldFinished == NO) { [_writerAudioInput markAsFinished]; } dispatch_group_leave(_dispatchGroup); } }]; } if (_writerVideoInput) { // If we had video to reencode, enter the dispatch group before beginning the work. dispatch_group_enter(_dispatchGroup); // Specify the block to execute when the asset writer is ready for video media data, and specify the queue to call it on. [_writerVideoInput requestMediaDataWhenReadyOnQueue:_rwVideoSerializationQueue usingBlock:^{ // Because the block is called asynchronously, check to see whether its task is complete. if (_videoFinished) return; BOOL completedOrFailed = NO; // If the task isn't complete yet, make sure that the input is actually ready for more media data. 
while ([_writerVideoInput isReadyForMoreMediaData] && !completedOrFailed) { // Get the next video sample buffer, and append it to the output file. CMSampleBufferRef sampleBuffer = [_readerVideoOutput copyNextSampleBuffer]; CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer); _currentBuffer = pixelBuffer; [self performSelectorOnMainThread:@selector(processFrame) withObject:nil waitUntilDone:YES]; if (_currentBuffer != NULL) { //BOOL success = [_writerVideoInput appendSampleBuffer:sampleBuffer]; BOOL success = [_pixelBufferAdaptor appendPixelBuffer:_currentBuffer withPresentationTime:CMSampleBufferGetPresentationTimeStamp(sampleBuffer)]; CFRelease(sampleBuffer); sampleBuffer = NULL; completedOrFailed = !success; } else { completedOrFailed = YES; } } if (completedOrFailed) { // Mark the input as finished, but only if we haven't already done so, and then leave the dispatch group (since the video work has finished). BOOL oldFinished = _videoFinished; _videoFinished = YES; if (oldFinished == NO) { [_writerVideoInput markAsFinished]; } dispatch_group_leave(_dispatchGroup); } }]; } // Set up the notification that the dispatch group will send when the audio and video work have both finished. dispatch_group_notify(_dispatchGroup, _mainSerializationQueue, ^{ BOOL finalSuccess = YES; NSError *finalError = nil; // Check to see if the work has finished due to cancellation. if (_cancelled) { // If so, cancel the reader and writer. [_reader cancelReading]; [_writer cancelWriting]; } else { // If cancellation didn't occur, first make sure that the asset reader didn't fail. if ([_reader status] == AVAssetReaderStatusFailed) { finalSuccess = NO; finalError = [_reader error]; NSLog(@"_reader finalError: %@", finalError); } // If the asset reader didn't fail, attempt to stop the asset writer and check for any errors. [_writer finishWritingWithCompletionHandler:^{ [self readingAndWritingDidFinishSuccessfully:finalSuccess withError:[_writer error]]; }]; } // Call the method to handle completion, and pass in the appropriate parameters to indicate whether reencoding was successful. }); } // Return success here to indicate whether the asset reader and writer were started successfully. return success; } - (void)readingAndWritingDidFinishSuccessfully:(BOOL)success withError:(NSError *)error { if (!success) { // If the reencoding process failed, we need to cancel the asset reader and writer. [_reader cancelReading]; [_writer cancelWriting]; dispatch_async(dispatch_get_main_queue(), ^{ // Handle any UI tasks here related to failure. }); } else { // Reencoding was successful, reset booleans. _cancelled = NO; _videoFinished = NO; _audioFinished = NO; dispatch_async(dispatch_get_main_queue(), ^{ UISaveVideoAtPathToSavedPhotosAlbum(_outputURL, nil, nil, nil); }); } NSLog(@"readingAndWritingDidFinishSuccessfully success = %@ : Error = %@", (success == 0) ? @"NO" : @"YES", error); } - (void)processFrame { if (_currentBuffer) { if (kCVReturnSuccess == CVPixelBufferLockBaseAddress(_currentBuffer, kCVPixelBufferLock_ReadOnly)) { [self.renderer processPixelBuffer:_currentBuffer]; CVPixelBufferUnlockBaseAddress(_currentBuffer, kCVPixelBufferLock_ReadOnly); } else { NSLog(@"processFrame END"); return; } } } @end