Blending images and video using AVFoundation

I am trying to stitch images into a pre-existing video to create a new video file, using AVFoundation on the Mac.

So far I have read through the Apple documentation example, as well as

ASSETWriterInput for making Video from UIImages on Iphone Issues

Mix video with static image in CALayer using AVVideoCompositionCoreAnimationTool

AVFoundation Tutorial: Adding Overlays and Animations to Videos, and a few other SO links.

Now these have proved to be very useful, but my problem is that I am not creating a static watermark or overlay; what I want is to put images in between parts of the video. So far I have managed to take the video, create the blank sections for these images to be inserted into, and export it.

My problem is getting the images to insert themselves into those blank sections. The only way I can see to do it is to create a series of layers that are animated to change their opacity at the correct times, but I can't seem to get the animation to work.

The code below is what I use to create the video segments and the layer animations.

    //https://developer.apple.com/library/ios/documentation/AudioVideo/Conceptual/AVFoundationPG/Articles/03_Editing.html#//apple_ref/doc/uid/TP40010188-CH8-SW7

    // let's start by making our video composition
    AVMutableComposition* mutableComposition = [AVMutableComposition composition];
    AVMutableCompositionTrack* mutableCompositionTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];

    AVMutableVideoComposition* mutableVideoComposition = [AVMutableVideoComposition videoCompositionWithPropertiesOfAsset:gVideoAsset];

    // if the first point's frame doesn't start on 0
    if (gFrames[0].startTime.value != 0)
    {
        DebugLog("Inserting vid at 0");

        // then add the video track to the composition track with a time range from 0 to the first point's startTime
        [mutableCompositionTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, gFrames[0].startTime)
                                         ofTrack:gVideoTrack
                                          atTime:kCMTimeZero error:&gError];
    }

    if(gError)
    {
        DebugLog("Error inserting original video segment");
        GetError();
    }

    // create our parent layer and video layer
    CALayer* parentLayer = [CALayer layer];
    CALayer* videoLayer = [CALayer layer];

    parentLayer.frame = CGRectMake(0, 0, 1280, 720);
    videoLayer.frame = CGRectMake(0, 0, 1280, 720);

    [parentLayer addSublayer:videoLayer];

    // create an offset value that should be added to each point where a new video segment should go
    CMTime timeOffset = CMTimeMake(0, 600);

    // loop through each additional frame
    for(int i = 0; i < gFrames.size(); i++)
    {
        // create an animation layer and assign it's content to the CGImage of the frame
        CALayer* Frame = [CALayer layer];
        Frame.contents = (__bridge id)gFrames[i].frameImage;
        Frame.frame = CGRectMake(0, 720, 1280, -720);

        DebugLog("inserting empty time range");

        // add frame point to the composition track starting at the point's start time
        // insert an empty time range for the duration of the frame animation
        [mutableCompositionTrack insertEmptyTimeRange:CMTimeRangeMake(CMTimeAdd(gFrames[i].startTime, timeOffset), gFrames[i].duration)];

        // update the time offset by the duration
        timeOffset = CMTimeAdd(timeOffset, gFrames[i].duration);

        // make the layer completely transparent
        Frame.opacity = 0.0f;

        // create an animation for setting opacity to 0 on start
        CABasicAnimation* frameAnim = [CABasicAnimation animationWithKeyPath:@"opacity"];
        frameAnim.duration = 1.0f;
        frameAnim.repeatCount = 0;
        frameAnim.autoreverses = NO;

        frameAnim.fromValue = [NSNumber numberWithFloat:0.0];
        frameAnim.toValue = [NSNumber numberWithFloat:0.0];

        frameAnim.beginTime = AVCoreAnimationBeginTimeAtZero;
        frameAnim.speed = 1.0f;

        [Frame addAnimation:frameAnim forKey:@"animateOpacity"];

        // create an animation for setting opacity to 1
        frameAnim = [CABasicAnimation animationWithKeyPath:@"opacity"];
        frameAnim.duration = 1.0f;
        frameAnim.repeatCount = 0;
        frameAnim.autoreverses = NO;

        frameAnim.fromValue = [NSNumber numberWithFloat:1.0];
        frameAnim.toValue = [NSNumber numberWithFloat:1.0];

        frameAnim.beginTime = AVCoreAnimationBeginTimeAtZero + CMTimeGetSeconds(gFrames[i].startTime);
        frameAnim.speed = 1.0f;

        [Frame addAnimation:frameAnim forKey:@"animateOpacity"];

        // create an animation for setting opacity to 0
        frameAnim = [CABasicAnimation animationWithKeyPath:@"opacity"];
        frameAnim.duration = 1.0f;
        frameAnim.repeatCount = 0;
        frameAnim.autoreverses = NO;

        frameAnim.fromValue = [NSNumber numberWithFloat:0.0];
        frameAnim.toValue = [NSNumber numberWithFloat:0.0];

        frameAnim.beginTime = AVCoreAnimationBeginTimeAtZero + CMTimeGetSeconds(gFrames[i].endTime);
        frameAnim.speed = 1.0f;

        [Frame addAnimation:frameAnim forKey:@"animateOpacity"];

        // add the frame layer to our parent layer
        [parentLayer addSublayer:Frame];

        gError = nil;

        // if there's another point after this one
        if( i < gFrames.size()-1)
        {
            // add our video file to the composition with a range of this point's end and the next point's start
            [mutableCompositionTrack insertTimeRange:CMTimeRangeMake(gFrames[i].startTime,
                                                                     CMTimeMake(gFrames[i+1].startTime.value - gFrames[i].startTime.value, 600))
                                             ofTrack:gVideoTrack
                                              atTime:CMTimeAdd(gFrames[i].startTime, timeOffset) error:&gError];
        }
        // else just add our video file with a range of this points end point and the videos duration
        else
        {
            [mutableCompositionTrack insertTimeRange:CMTimeRangeMake(gFrames[i].startTime, CMTimeSubtract(gVideoAsset.duration, gFrames[i].startTime))
                                             ofTrack:gVideoTrack
                                              atTime:CMTimeAdd(gFrames[i].startTime, timeOffset) error:&gError];
        }

        if(gError)
        {
            char errorMsg[256];
            sprintf(errorMsg, "Error inserting original video segment at: %d", i);
            DebugLog(errorMsg);
            GetError();
        }
    }

Now in that section the frames' opacity is set to 0.0f; however, when I set it to 1.0f, all it does is place the last of these frames on top of the video for the entire duration.

After that, the video is exported using an AVAssetExportSession, as shown below.

    mutableVideoComposition.animationTool = [AVVideoCompositionCoreAnimationTool videoCompositionCoreAnimationToolWithPostProcessingAsVideoLayer:videoLayer inLayer:parentLayer];

    // create a layer instruction for our newly created animation tool
    AVMutableVideoCompositionLayerInstruction *layerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:gVideoTrack];

    AVMutableVideoCompositionInstruction *instruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
    [instruction setTimeRange:CMTimeRangeMake(kCMTimeZero, [mutableComposition duration])];
    [layerInstruction setOpacity:1.0f atTime:kCMTimeZero];
    [layerInstruction setOpacity:0.0f atTime:mutableComposition.duration];
    instruction.layerInstructions = [NSArray arrayWithObject:layerInstruction];

    // set the instructions on our videoComposition
    mutableVideoComposition.instructions = [NSArray arrayWithObject:instruction];

    // export final composition to a video file
    // convert the videopath into a url for our AVAssetWriter to create a file at
    NSString* vidPath = CreateNSString(outputVideoPath);
    NSURL* vidURL = [NSURL fileURLWithPath:vidPath];

    AVAssetExportSession *exporter = [[AVAssetExportSession alloc] initWithAsset:mutableComposition presetName:AVAssetExportPreset1280x720];

    exporter.outputFileType = AVFileTypeMPEG4;
    exporter.outputURL = vidURL;
    exporter.videoComposition = mutableVideoComposition;
    exporter.timeRange = CMTimeRangeMake(kCMTimeZero, mutableComposition.duration);

    // Asynchronously export the composition to a video file and save this file to the camera roll once export completes.
    [exporter exportAsynchronouslyWithCompletionHandler:^{
        dispatch_async(dispatch_get_main_queue(), ^{
            if (exporter.status == AVAssetExportSessionStatusCompleted)
            {
                DebugLog("!!!file created!!!");
                _Close();
            }
            else if(exporter.status == AVAssetExportSessionStatusFailed)
            {
                DebugLog("failed damn");
                DebugLog(cStringCopy([[[exporter error] localizedDescription] UTF8String]));
                DebugLog(cStringCopy([[[exporter error] description] UTF8String]));
                _Close();
            }
            else
            {
                DebugLog("NoIdea");
                _Close();
            }
        });
    }];
    }
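
The export snippet above also relies on CreateNSString and cStringCopy, which are not defined in the post. They appear to be the usual C-string bridging helpers; a minimal sketch of what they might look like (the bodies here are assumptions, not the author's actual code):

    // Hedged sketch: likely shape of the two string helpers used in the export code.
    // CreateNSString presumably wraps a C string in an NSString...
    static NSString* CreateNSString(const char* string)
    {
        return [NSString stringWithUTF8String:(string ? string : "")];
    }

    // ...and cStringCopy presumably returns a heap-allocated copy of a C string,
    // so the log message outlives the autoreleased NSString it came from.
    static char* cStringCopy(const char* string)
    {
        if (string == NULL)
            return NULL;

        char* copy = (char*)malloc(strlen(string) + 1);
        strcpy(copy, string);
        return copy;
    }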

My feeling is that the animations are not being started, but I don't know. Am I going about this the right way to stitch image data into a video like this?

Any assistance would be greatly appreciated.

Well, I solved my problem another way. The animation route just would not work, so my solution was to compile all of the insertable images into a temporary video file and then use that video to insert the images into my final output video.

Starting from the first link I originally posted, ASSETWriterInput for making Video from UIImages on Iphone Issues, I created the following function to create my temporary video:

    void CreateFrameImageVideo(NSString* path)
    {
        NSLog(@"Creating writer at path %@", path);
        NSError *error = nil;
        AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:
                                      [NSURL fileURLWithPath:path] fileType:AVFileTypeMPEG4
                                                                  error:&error];

        NSLog(@"Creating video codec settings");
        NSDictionary *codecSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                       [NSNumber numberWithInt:gVideoTrack.estimatedDataRate/*128000*/], AVVideoAverageBitRateKey,
                                       [NSNumber numberWithInt:gVideoTrack.nominalFrameRate],AVVideoMaxKeyFrameIntervalKey,
                                       AVVideoProfileLevelH264MainAutoLevel, AVVideoProfileLevelKey,
                                       nil];

        NSLog(@"Creating video settings");
        NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                       AVVideoCodecH264, AVVideoCodecKey,
                                       codecSettings,AVVideoCompressionPropertiesKey,
                                       [NSNumber numberWithInt:1280], AVVideoWidthKey,
                                       [NSNumber numberWithInt:720], AVVideoHeightKey,
                                       nil];

        NSLog(@"Creating writter input");
        AVAssetWriterInput* writerInput = [[AVAssetWriterInput
                                            assetWriterInputWithMediaType:AVMediaTypeVideo
                                            outputSettings:videoSettings] retain];

        NSLog(@"Creating adaptor");
        AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor
                                                         assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
                                                         sourcePixelBufferAttributes:nil];

        [videoWriter addInput:writerInput];

        NSLog(@"Starting session");
        //Start a session:
        [videoWriter startWriting];
        [videoWriter startSessionAtSourceTime:kCMTimeZero];

        CMTime timeOffset = kCMTimeZero;//CMTimeMake(0, 600);

        NSLog(@"Video Width %d, Height: %d, writing frame video to file", gWidth, gHeight);

        CVPixelBufferRef buffer;

        for(int i = 0; i< gAnalysisFrames.size(); i++)
        {
            while (adaptor.assetWriterInput.readyForMoreMediaData == FALSE) {
                NSLog(@"Waiting inside a loop");
                NSDate *maxDate = [NSDate dateWithTimeIntervalSinceNow:0.1];
                [[NSRunLoop currentRunLoop] runUntilDate:maxDate];
            }

            //Write samples:
            buffer = pixelBufferFromCGImage(gAnalysisFrames[i].frameImage, gWidth, gHeight);

            [adaptor appendPixelBuffer:buffer withPresentationTime:timeOffset];

            timeOffset = CMTimeAdd(timeOffset, gAnalysisFrames[i].duration);
        }

        while (adaptor.assetWriterInput.readyForMoreMediaData == FALSE) {
            NSLog(@"Waiting outside a loop");
            NSDate *maxDate = [NSDate dateWithTimeIntervalSinceNow:0.1];
            [[NSRunLoop currentRunLoop] runUntilDate:maxDate];
        }

        buffer = pixelBufferFromCGImage(gAnalysisFrames[gAnalysisFrames.size()-1].frameImage, gWidth, gHeight);
        [adaptor appendPixelBuffer:buffer withPresentationTime:timeOffset];

        NSLog(@"Finishing session");
        //Finish the session:
        [writerInput markAsFinished];
        [videoWriter endSessionAtSourceTime:timeOffset];

        BOOL successfulWrite = [videoWriter finishWriting];

        // if we failed to write the video
        if(!successfulWrite)
        {
            NSLog(@"Session failed with error: %@", [[videoWriter error] description]);

            // delete the temporary file created
            NSFileManager *fileManager = [NSFileManager defaultManager];
            if ([fileManager fileExistsAtPath:path]) {
                NSError *error;
                if ([fileManager removeItemAtPath:path error:&error] == NO) {
                    NSLog(@"removeItemAtPath %@ error:%@", path, error);
                }
            }
        }
        else
        {
            NSLog(@"Session complete");
        }

        [writerInput release];
    }
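
CreateFrameImageVideo calls pixelBufferFromCGImage, which isn't shown in the post. For reference, a minimal sketch of that helper, based on the linked ASSETWriterInput question (the exact signature and pixel format here are assumptions):

    // Hedged sketch: converts a CGImage into a CVPixelBuffer that the
    // AVAssetWriterInputPixelBufferAdaptor can append. The caller owns the
    // returned buffer and should CVPixelBufferRelease() it when done.
    static CVPixelBufferRef pixelBufferFromCGImage(CGImageRef image, size_t width, size_t height)
    {
        NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                                 [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                                 [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                                 nil];

        CVPixelBufferRef pxbuffer = NULL;
        CVPixelBufferCreate(kCFAllocatorDefault, width, height, kCVPixelFormatType_32ARGB,
                            (__bridge CFDictionaryRef)options, &pxbuffer);

        CVPixelBufferLockBaseAddress(pxbuffer, 0);
        void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);

        // draw the CGImage into the buffer's memory via a bitmap context
        CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef context = CGBitmapContextCreate(pxdata, width, height, 8,
                                                     CVPixelBufferGetBytesPerRow(pxbuffer),
                                                     rgbColorSpace, kCGImageAlphaNoneSkipFirst);
        CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);

        CGContextRelease(context);
        CGColorSpaceRelease(rgbColorSpace);
        CVPixelBufferUnlockBaseAddress(pxbuffer, 0);

        return pxbuffer;
    }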

Once the video has been created, it is loaded as an AVAsset and its track is extracted; the video is then inserted by replacing the following line (from the first code block in the original post)

    [mutableCompositionTrack insertEmptyTimeRange:CMTimeRangeMake(CMTimeAdd(gFrames[i].startTime, timeOffset), gFrames[i].duration)];

with:

    [mutableCompositionTrack insertTimeRange:CMTimeRangeMake(timeOffset, gAnalysisFrames[i].duration)
                                     ofTrack:gFramesTrack
                                      atTime:CMTimeAdd(gAnalysisFrames[i].startTime, timeOffset) error:&gError];

where gFramesTrack is the AVAssetTrack created from the temporary frame video.
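
The loading and track-extraction step isn't shown in the post either; a minimal sketch of what it might look like (the framesVideoPath variable stands in for whatever path was passed to CreateFrameImageVideo, and error handling is omitted):

    // Hedged sketch: load the temporary frame video written by CreateFrameImageVideo
    // and pull out its single video track, which is what gFramesTrack refers to above.
    AVURLAsset* framesAsset = [AVURLAsset URLAssetWithURL:[NSURL fileURLWithPath:framesVideoPath] options:nil];
    AVAssetTrack* gFramesTrack = [[framesAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];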

All of the code relating to the CALayer and CABasicAnimation objects has been removed, since it just wasn't working.

Not the most elegant solution, and I don't think it's the only one, but at least it works. I hope someone finds this useful.

This code also works on iOS devices (tested on an iPad 3).

Note: the DebugLog function in the first post is just a callback to a function that prints out log messages; if needed, those calls can be replaced with NSLog() calls.
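
If that callback isn't wired up, a trivial stand-in along these lines would do (a sketch; the real DebugLog in the post forwards to an external logger):

    // Hedged sketch: minimal replacement for the DebugLog callback described above,
    // simply forwarding the C-string message to NSLog.
    static void DebugLog(const char* message)
    {
        NSLog(@"%s", message);
    }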