Skipping frames while processing video on iOS

I'm trying to process a local video file, just doing some analysis on the pixel data; nothing is rendered as output. My current code iterates over every frame of the video, but I'd actually like to skip ~15 frames at a time to speed things up. Is there a way to skip frames without decoding them?

In FFmpeg, I could simply call av_read_frame without calling avcodec_decode_video2.
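A rough sketch of that FFmpeg pattern (a sketch only: formatContext, codecContext, videoStreamIndex, and frame are placeholder names assumed to be set up elsewhere, not code from the question):

    AVPacket packet;
    int gotFrame = 0;
    int packetIndex = 0;

    // Demux every packet with av_read_frame, but only pay the decode
    // cost (avcodec_decode_video2) for one video packet in fifteen.
    while (av_read_frame(formatContext, &packet) >= 0) {
        if (packet.stream_index == videoStreamIndex && packetIndex++ % 15 == 0) {
            // Caveat: skipping reference frames breaks inter-frame
            // prediction, so decoded output may show artifacts unless
            // the skips land on keyframes.
            avcodec_decode_video2(codecContext, frame, &gotFrame, &packet);
            if (gotFrame) {
                // analyze frame->data here
            }
        }
        av_free_packet(&packet);
    }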

Thanks in advance! Here is my current code:

    - (void)readMovie:(NSURL *)url
    {
        [self performSelectorOnMainThread:@selector(updateInfo:) withObject:@"scanning" waitUntilDone:YES];

        startTime = [NSDate date];

        AVURLAsset *asset = [AVURLAsset URLAssetWithURL:url options:nil];

        [asset loadValuesAsynchronouslyForKeys:[NSArray arrayWithObject:@"tracks"] completionHandler:^{
            dispatch_async(dispatch_get_main_queue(), ^{
                AVAssetTrack *videoTrack = nil;
                NSArray *tracks = [asset tracksWithMediaType:AVMediaTypeVideo];
                if ([tracks count] == 1) {
                    videoTrack = [tracks objectAtIndex:0];
                    videoDuration = CMTimeGetSeconds([videoTrack timeRange].duration);

                    NSError *error = nil;
                    // _movieReader is a member variable
                    _movieReader = [[AVAssetReader alloc] initWithAsset:asset error:&error];
                    if (error)
                        NSLog(@"%@", error.localizedDescription);

                    NSString *key = (NSString *)kCVPixelBufferPixelFormatTypeKey;
                    NSNumber *value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_420YpCbCr8Planar];
                    NSDictionary *videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];

                    AVAssetReaderTrackOutput *output = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:videoTrack outputSettings:videoSettings];
                    output.alwaysCopiesSampleData = NO;
                    [_movieReader addOutput:output];

                    if ([_movieReader startReading]) {
                        NSLog(@"reading started");
                        [self readNextMovieFrame];
                    } else {
                        NSLog(@"reading can't be started");
                    }
                }
            });
        }];
    }

    - (void)readNextMovieFrame
    {
        //NSLog(@"readNextMovieFrame called");
        if (_movieReader.status == AVAssetReaderStatusReading) {
            //NSLog(@"status is reading");
            AVAssetReaderTrackOutput *output = [_movieReader.outputs objectAtIndex:0];
            CMSampleBufferRef sampleBuffer = [output copyNextSampleBuffer];
            if (sampleBuffer) {
                // I'm guessing this is the expensive part that we can skip if we want to skip frames
                CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

                // Lock the image buffer
                CVPixelBufferLockBaseAddress(imageBuffer, 0);

                // Get information about the image
                uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
                size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
                size_t width = CVPixelBufferGetWidth(imageBuffer);
                size_t height = CVPixelBufferGetHeight(imageBuffer);

                // do my pixel analysis

                // Unlock the image buffer
                CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
                CFRelease(sampleBuffer);

                [self readNextMovieFrame];
            } else {
                NSLog(@"could not copy next sample buffer. status is %d", _movieReader.status);

                NSTimeInterval scanDuration = -[startTime timeIntervalSinceNow];
                float scanMultiplier = videoDuration / scanDuration;
                NSString *info = [NSString stringWithFormat:@"Done\n\nvideo duration: %f seconds\nscan duration: %f seconds\nmultiplier: %f", videoDuration, scanDuration, scanMultiplier];
                [self performSelectorOnMainThread:@selector(updateInfo:) withObject:info waitUntilDone:YES];
            }
        } else {
            NSLog(@"status is now %d", _movieReader.status);
        }
    }

    - (void)updateInfo:(id)message
    {
        NSString *info = [NSString stringWithFormat:@"%@", message];
        [infoTextView setText:info];
    }
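The most direct variation of the loop above would be to keep a counter and only run the pixel analysis on every 15th sample buffer. A minimal sketch (kFrameStride and the _frameIndex ivar are hypothetical names, not from the question; note that AVAssetReader still decodes every frame, so this skips only the analysis cost, not the decode cost):

    // Hypothetical drop-in variant of readNextMovieFrame above.
    // _frameIndex is an assumed NSUInteger member variable.
    static const NSUInteger kFrameStride = 15;

    - (void)readNextMovieFrameWithStride
    {
        if (_movieReader.status != AVAssetReaderStatusReading)
            return;

        AVAssetReaderTrackOutput *output = [_movieReader.outputs objectAtIndex:0];
        CMSampleBufferRef sampleBuffer = [output copyNextSampleBuffer];
        if (!sampleBuffer)
            return;

        // Only analyze one frame in every kFrameStride; the reader has
        // already decoded the buffer either way.
        if (_frameIndex++ % kFrameStride == 0) {
            CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
            CVPixelBufferLockBaseAddress(imageBuffer, 0);
            // ... pixel analysis as in readNextMovieFrame ...
            CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
        }

        CFRelease(sampleBuffer);
        [self readNextMovieFrameWithStride];
    }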

If you don't need frame-accurate processing (i.e., you don't need every single frame), you should use AVAssetImageGenerator.

This class returns a frame for whatever time you ask for.

Specifically, build an array filled with times spaced 0.5 seconds apart (if you want every 15th frame of a roughly 30 fps movie; iPhone movies run at about 29.3 fps) and let the image generator return your frames.

For each frame you can see both the time you requested and the frame's actual time. By default there is a tolerance of about 0.5 s around the time you ask for, but you can change that by setting the properties:

requestedTimeToleranceBefore and requestedTimeToleranceAfter
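A minimal sketch of this approach (an assumption based on the description above, not code from the answer; `asset` is taken to be an already-loaded AVAsset and the 0.5 s stride matches the every-15th-frame example):

    AVAssetImageGenerator *generator = [AVAssetImageGenerator assetImageGeneratorWithAsset:asset];
    generator.requestedTimeToleranceBefore = kCMTimeZero; // force exact-time frames
    generator.requestedTimeToleranceAfter = kCMTimeZero;

    // Build the array of requested times, 0.5 seconds apart.
    NSMutableArray *times = [NSMutableArray array];
    Float64 duration = CMTimeGetSeconds(asset.duration);
    for (Float64 t = 0; t < duration; t += 0.5) {
        [times addObject:[NSValue valueWithCMTime:CMTimeMakeWithSeconds(t, 600)]];
    }

    [generator generateCGImagesAsynchronouslyForTimes:times
                                    completionHandler:^(CMTime requestedTime,
                                                        CGImageRef image,
                                                        CMTime actualTime,
                                                        AVAssetImageGeneratorResult result,
                                                        NSError *error) {
        if (result == AVAssetImageGeneratorSucceeded) {
            // Compare requestedTime with actualTime, then analyze the
            // pixels of `image`.
        }
    }];

Tightening both tolerances to kCMTimeZero gives exact frames but is slower, since the generator can no longer snap to a nearby keyframe; leaving them loose is usually faster.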

I hope that answers your question. Good luck!