AVFoundation – Reversing an AVAsset and exporting a video file
I have seen this question asked a few times, but none of the answers actually work.

I need to reverse a video and export it as a new video file (not just play it back in reverse), keeping the same compression, format, and frame rate as the source video.

Ideally, the solution would do this in memory or in a buffer, and avoid rendering the frames out as image files (for example with AVAssetImageGenerator) and then recompiling them — that approach is resource-intensive, gives unreliable timing results, and degrades frame/image quality relative to the original.
–
My contribution: this still does not work, but it is the best I have tried so far:

- Read the sample frames into an array of CMSampleBufferRef using AVAssetReader.
- Write them back out in reverse order using AVAssetWriter.
- Problem: the timing information for each frame is stored inside the CMSampleBufferRef itself, so simply appending them in reverse order does not work.
- Next, I tried swapping each frame's timing information with that of its reversed/mirrored frame.
- Problem: this causes AVAssetWriter to fail with an unknown error.
- Next step: I am going to look into AVAssetWriterInputPixelBufferAdaptor.
```objc
- (AVAsset *)assetByReversingAsset:(AVAsset *)asset {
    NSURL *tmpFileURL = [NSURL fileURLWithPath:@"/tmp/test.mp4"];
    NSError *error;

    // Initialize the AVAssetReader that will read the input asset track.
    AVAssetReader *reader = [[AVAssetReader alloc] initWithAsset:asset error:&error];
    AVAssetTrack *videoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] lastObject];
    AVAssetReaderTrackOutput *readerOutput =
        [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:videoTrack
                                                   outputSettings:nil];
    [reader addOutput:readerOutput];
    [reader startReading];

    // Read the samples into an array.
    NSMutableArray *samples = [[NSMutableArray alloc] init];
    while (1) {
        CMSampleBufferRef sample = [readerOutput copyNextSampleBuffer];
        if (sample == NULL) {
            break;
        }
        [samples addObject:(__bridge id)sample];
        CFRelease(sample);
    }

    // Initialize the writer that will save to our temporary file.
    CMFormatDescriptionRef formatDescription =
        CFBridgingRetain([videoTrack.formatDescriptions lastObject]);
    AVAssetWriterInput *writerInput =
        [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeVideo
                                       outputSettings:nil
                                     sourceFormatHint:formatDescription];
    CFRelease(formatDescription);
    AVAssetWriter *writer = [[AVAssetWriter alloc] initWithURL:tmpFileURL
                                                      fileType:AVFileTypeMPEG4
                                                         error:&error];
    [writerInput setExpectsMediaDataInRealTime:NO];
    [writer addInput:writerInput];
    // startWriting must be called before starting the session.
    [writer startWriting];
    [writer startSessionAtSourceTime:
        CMSampleBufferGetPresentationTimeStamp((__bridge CMSampleBufferRef)samples[0])];

    // Traverse the sample frames in reverse order.
    for (NSInteger i = samples.count - 1; i >= 0; i--) {
        CMSampleBufferRef sample = (__bridge CMSampleBufferRef)samples[i];

        // Since the timing information is built into the CMSampleBufferRef,
        // we need to make a copy of it with new timing info, copied from the
        // mirror frame at samples[samples.count - i - 1].
        CMSampleBufferRef mirror =
            (__bridge CMSampleBufferRef)samples[samples.count - i - 1];
        CMItemCount numSampleTimingEntries;
        CMSampleBufferGetSampleTimingInfoArray(mirror, 0, nil, &numSampleTimingEntries);
        CMSampleTimingInfo *timingInfo =
            malloc(sizeof(CMSampleTimingInfo) * numSampleTimingEntries);
        CMSampleBufferGetSampleTimingInfoArray(mirror, numSampleTimingEntries,
                                               timingInfo, &numSampleTimingEntries);

        CMSampleBufferRef sampleWithCorrectTiming;
        CMSampleBufferCreateCopyWithNewTiming(kCFAllocatorDefault,
                                              sample,
                                              numSampleTimingEntries,
                                              timingInfo,
                                              &sampleWithCorrectTiming);
        if (writerInput.readyForMoreMediaData) {
            [writerInput appendSampleBuffer:sampleWithCorrectTiming];
        }
        CFRelease(sampleWithCorrectTiming);
        free(timingInfo);
    }

    [writer finishWriting];
    return [AVAsset assetWithURL:tmpFileURL];
}
```
I worked on this over the past few days and was able to get it working.

Source code is here: http://www.andyhin.com/post/5/reverse-video-avfoundation

It uses AVAssetReader to read out the samples/frames, extracts the image/pixel buffer from each one, and then appends it with the presentation time of its mirror frame.
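A minimal sketch of that pixel-buffer approach, assuming an H.264 MP4 output and a video-only asset (method and variable names here are illustrative, not taken from the linked source):

```objc
#import <AVFoundation/AVFoundation.h>

- (void)reverseAsset:(AVAsset *)asset toURL:(NSURL *)outputURL {
    NSError *error = nil;
    AVAssetTrack *videoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] lastObject];

    // 1. Decode every sample into a pixel buffer we can re-encode later.
    AVAssetReader *reader = [[AVAssetReader alloc] initWithAsset:asset error:&error];
    NSDictionary *readerSettings = @{(id)kCVPixelBufferPixelFormatTypeKey:
                                         @(kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange)};
    AVAssetReaderTrackOutput *readerOutput =
        [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:videoTrack
                                                   outputSettings:readerSettings];
    [reader addOutput:readerOutput];
    [reader startReading];

    NSMutableArray *samples = [NSMutableArray array];
    CMSampleBufferRef sample;
    while ((sample = [readerOutput copyNextSampleBuffer])) {
        [samples addObject:(__bridge id)sample];
        CFRelease(sample);
    }

    // 2. Re-encode the pixel buffers in reverse order, but keep the
    //    original forward presentation timestamps.
    AVAssetWriter *writer = [[AVAssetWriter alloc] initWithURL:outputURL
                                                      fileType:AVFileTypeMPEG4
                                                         error:&error];
    NSDictionary *writerSettings = @{AVVideoCodecKey: AVVideoCodecH264,
                                     AVVideoWidthKey: @(videoTrack.naturalSize.width),
                                     AVVideoHeightKey: @(videoTrack.naturalSize.height)};
    AVAssetWriterInput *writerInput =
        [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeVideo
                                       outputSettings:writerSettings];
    writerInput.expectsMediaDataInRealTime = NO;
    AVAssetWriterInputPixelBufferAdaptor *adaptor =
        [AVAssetWriterInputPixelBufferAdaptor
            assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
                                       sourcePixelBufferAttributes:nil];
    [writer addInput:writerInput];
    [writer startWriting];
    [writer startSessionAtSourceTime:
        CMSampleBufferGetPresentationTimeStamp((__bridge CMSampleBufferRef)samples[0])];

    for (NSInteger i = 0; i < samples.count; i++) {
        // Pixel buffer from the mirror frame, timestamp from the forward frame.
        CMSampleBufferRef mirror =
            (__bridge CMSampleBufferRef)samples[samples.count - i - 1];
        CMSampleBufferRef forward = (__bridge CMSampleBufferRef)samples[i];
        CVPixelBufferRef imageBuffer = CMSampleBufferGetImageBuffer(mirror);
        CMTime presentationTime = CMSampleBufferGetPresentationTimeStamp(forward);
        while (!writerInput.readyForMoreMediaData) {
            [NSThread sleepForTimeInterval:0.05];
        }
        [adaptor appendPixelBuffer:imageBuffer withPresentationTime:presentationTime];
    }

    [writerInput markAsFinished];
    [writer finishWritingWithCompletionHandler:^{ /* output is ready */ }];
}
```

Re-encoding through the adaptor sidesteps the failed attempt above: because each appended pixel buffer carries no baked-in timing, the writer accepts the original forward timestamps without the unknown error that CMSampleBufferCreateCopyWithNewTiming triggered. Note that buffering every frame in memory limits this to short clips.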