Convert UIImage to CMSampleBufferRef

I'm recording video with AVFoundation and need to crop it to 320×280. I get a CMSampleBufferRef and convert it to a UIImage using the code below.

    CGImageRef _cgImage = [self imageFromSampleBuffer:sampleBuffer];
    UIImage *_uiImage = [UIImage imageWithCGImage:_cgImage];
    CGImageRelease(_cgImage);

    _uiImage = [_uiImage resizedImageWithSize:CGSizeMake(320, 280)];

    CMSampleBufferRef croppedBuffer = /* NEED HELP WITH THIS */

    [_videoInput appendSampleBuffer:sampleBuffer]; // _videoInput is an AVAssetWriterInput

The imageFromSampleBuffer: method looks like this:

    - (CGImageRef)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
    {
        // Create a CGImageRef from sample buffer data
        CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        CVPixelBufferLockBaseAddress(imageBuffer, 0); // Lock the image buffer

        uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);

        // Get information about the image
        size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
        size_t width = CVPixelBufferGetWidth(imageBuffer);
        size_t height = CVPixelBufferGetHeight(imageBuffer);

        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace,
                                                        kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
        CGImageRef newImage = CGBitmapContextCreateImage(newContext);
        CGContextRelease(newContext);

        CGColorSpaceRelease(colorSpace);
        CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
        /* CVBufferRelease(imageBuffer); */ // do not call this!

        return newImage;
    }

Now I need to convert the resized image back into a CMSampleBufferRef so I can write it to the AVAssetWriterInput.

How do I convert a UIImage to a CMSampleBufferRef?

Thanks, everyone!

While you could create your own Core Media sample buffers from scratch, it's probably easier to use an AVAssetWriterInputPixelBufferAdaptor.

You describe the source pixel buffer format in an inputSettings dictionary and pass that to the adaptor's initializer:

    NSMutableDictionary *inputSettingsDict = [NSMutableDictionary dictionary];
    [inputSettingsDict setObject:[NSNumber numberWithInt:pixelFormat]
                          forKey:(NSString *)kCVPixelBufferPixelFormatTypeKey];
    [inputSettingsDict setObject:[NSNumber numberWithUnsignedInteger:(NSUInteger)(image.uncompressedSize / image.rect.size.height)]
                          forKey:(NSString *)kCVPixelBufferBytesPerRowAlignmentKey];
    [inputSettingsDict setObject:[NSNumber numberWithDouble:image.rect.size.width]
                          forKey:(NSString *)kCVPixelBufferWidthKey];
    [inputSettingsDict setObject:[NSNumber numberWithDouble:image.rect.size.height]
                          forKey:(NSString *)kCVPixelBufferHeightKey];
    [inputSettingsDict setObject:[NSNumber numberWithBool:YES]
                          forKey:(NSString *)kCVPixelBufferCGImageCompatibilityKey];
    [inputSettingsDict setObject:[NSNumber numberWithBool:YES]
                          forKey:(NSString *)kCVPixelBufferCGBitmapContextCompatibilityKey];

    AVAssetWriterInputPixelBufferAdaptor *pixelBufferAdapter =
        [[AVAssetWriterInputPixelBufferAdaptor alloc] initWithAssetWriterInput:assetWriterInput
                                                   sourcePixelBufferAttributes:inputSettingsDict];

Then you can append CVPixelBuffers to your adaptor:

    [pixelBufferAdapter appendPixelBuffer:completePixelBuffer withPresentationTime:pixelBufferTime];
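
Once the asset writer has started writing, the adaptor also vends a pixel buffer pool; drawing frames into buffers from that pool avoids allocating a fresh CVPixelBuffer per frame. A minimal sketch, reusing the question's _videoInput and the pixelBufferTime placeholder from above:

    // Reuse a buffer from the adaptor's pool rather than allocating one per frame.
    // pixelBufferPool is NULL until -startWriting has been called on the asset writer.
    CVPixelBufferRef pooledBuffer = NULL;
    CVReturn result = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault,
                                                         [pixelBufferAdapter pixelBufferPool],
                                                         &pooledBuffer);
    if (result == kCVReturnSuccess && pooledBuffer != NULL)
    {
        // ... draw the resized frame into pooledBuffer here ...
        if ([_videoInput isReadyForMoreMediaData])
        {
            [pixelBufferAdapter appendPixelBuffer:pooledBuffer withPresentationTime:pixelBufferTime];
        }
        CVPixelBufferRelease(pooledBuffer);
    }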

The pixel buffer adaptor accepts CVPixelBuffers, so you have to convert your UIImage into a pixel buffer first, as described here: https://stackoverflow.com/a/3742212/100848
Pass the CGImage property of your UIImage to newPixelBufferFromCGImage.

This is a function I use in my GPUImage framework to resize an incoming CMSampleBufferRef and place the scaled result in a CVPixelBufferRef that you provide:

    void GPUImageCreateResizedSampleBuffer(CVPixelBufferRef cameraFrame, CGSize finalSize, CMSampleBufferRef *sampleBuffer)
    {
        // CVPixelBufferCreateWithPlanarBytes for YUV input

        CGSize originalSize = CGSizeMake(CVPixelBufferGetWidth(cameraFrame), CVPixelBufferGetHeight(cameraFrame));

        CVPixelBufferLockBaseAddress(cameraFrame, 0);
        GLubyte *sourceImageBytes = CVPixelBufferGetBaseAddress(cameraFrame);
        CGDataProviderRef dataProvider = CGDataProviderCreateWithData(NULL, sourceImageBytes,
                                                                      CVPixelBufferGetBytesPerRow(cameraFrame) * originalSize.height, NULL);
        CGColorSpaceRef genericRGBColorspace = CGColorSpaceCreateDeviceRGB();
        CGImageRef cgImageFromBytes = CGImageCreate((int)originalSize.width, (int)originalSize.height, 8, 32,
                                                    CVPixelBufferGetBytesPerRow(cameraFrame), genericRGBColorspace,
                                                    kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst,
                                                    dataProvider, NULL, NO, kCGRenderingIntentDefault);

        GLubyte *imageData = (GLubyte *)calloc(1, (int)finalSize.width * (int)finalSize.height * 4);

        CGContextRef imageContext = CGBitmapContextCreate(imageData, (int)finalSize.width, (int)finalSize.height, 8,
                                                          (int)finalSize.width * 4, genericRGBColorspace,
                                                          kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
        CGContextDrawImage(imageContext, CGRectMake(0.0, 0.0, finalSize.width, finalSize.height), cgImageFromBytes);
        CGImageRelease(cgImageFromBytes);
        CGContextRelease(imageContext);
        CGColorSpaceRelease(genericRGBColorspace);
        CGDataProviderRelease(dataProvider);

        CVPixelBufferRef pixel_buffer = NULL;
        CVPixelBufferCreateWithBytes(kCFAllocatorDefault, finalSize.width, finalSize.height, kCVPixelFormatType_32BGRA,
                                     imageData, finalSize.width * 4, stillImageDataReleaseCallback, NULL, NULL, &pixel_buffer);
        CMVideoFormatDescriptionRef videoInfo = NULL;
        CMVideoFormatDescriptionCreateForImageBuffer(NULL, pixel_buffer, &videoInfo);

        CMTime frameTime = CMTimeMake(1, 30);
        CMSampleTimingInfo timing = {frameTime, frameTime, kCMTimeInvalid};

        CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault, pixel_buffer, YES, NULL, NULL, videoInfo, &timing, sampleBuffer);
        CFRelease(videoInfo);
        CVPixelBufferRelease(pixel_buffer);
    }
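
The code above passes stillImageDataReleaseCallback to CVPixelBufferCreateWithBytes without showing it. It's the release callback that frees the calloc'd bitmap once the pixel buffer is done with it; in GPUImage it is essentially just:

    void stillImageDataReleaseCallback(void *releaseRefCon, const void *baseAddress)
    {
        // Free the bitmap backing store once the CVPixelBuffer no longer needs it.
        free((void *)baseAddress);
    }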

It doesn't take you all the way to creating the CMSampleBufferRef, but as weichsel points out, a CVPixelBufferRef is all you need for encoding video.

However, if what you really want to do here is crop video and record it, going to and from a UIImage is going to be a very slow way to do it. Instead, may I recommend using something like GPUImage: capture the video with a GPUImageVideoCamera input (or a GPUImageMovie, if cropping a previously recorded movie), feed that into a GPUImageCropFilter, and take the result to a GPUImageMovieWriter, as sketched below. That way, the video never touches Core Graphics and hardware acceleration is used as much as possible. It will be a lot faster than what you describe above.
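
A minimal sketch of that pipeline, assuming GPUImage is already in the project (the session preset, crop region, and output path here are placeholders; GPUImageCropFilter takes a normalized 0.0-1.0 crop region, so the exact values depend on your source dimensions):

    // Capture -> crop -> record, staying on the GPU the whole way.
    GPUImageVideoCamera *videoCamera =
        [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480
                                            cameraPosition:AVCaptureDevicePositionBack];
    videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;

    // Normalized crop region; choose values that map to your 320x280 target.
    GPUImageCropFilter *cropFilter =
        [[GPUImageCropFilter alloc] initWithCropRegion:CGRectMake(0.0, 0.0, 1.0, 0.875)];

    NSURL *movieURL = [NSURL fileURLWithPath:
        [NSTemporaryDirectory() stringByAppendingPathComponent:@"cropped.m4v"]];
    GPUImageMovieWriter *movieWriter =
        [[GPUImageMovieWriter alloc] initWithMovieURL:movieURL size:CGSizeMake(320.0, 280.0)];

    [videoCamera addTarget:cropFilter];
    [cropFilter addTarget:movieWriter];

    [videoCamera startCameraCapture];
    [movieWriter startRecording];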

If you do still need to build the CVPixelBufferRef from a UIImage yourself, a method along these lines works:

    - (CVPixelBufferRef)CVPixelBufferRefFromUiImage:(UIImage *)img
    {
        CGSize size = img.size;
        CGImageRef image = [img CGImage];

        NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                                 [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                                 [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey, nil];
        CVPixelBufferRef pxbuffer = NULL;
        CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, size.width, size.height, kCVPixelFormatType_32ARGB,
                                              (__bridge CFDictionaryRef)options, &pxbuffer);
        NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);

        CVPixelBufferLockBaseAddress(pxbuffer, 0);
        void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
        NSParameterAssert(pxdata != NULL);

        CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef context = CGBitmapContextCreate(pxdata, size.width, size.height, 8, 4 * size.width,
                                                     rgbColorSpace, kCGImageAlphaPremultipliedFirst);
        NSParameterAssert(context);
        CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image), CGImageGetHeight(image)), image);

        CGColorSpaceRelease(rgbColorSpace);
        CGContextRelease(context);
        CVPixelBufferUnlockBaseAddress(pxbuffer, 0);

        return pxbuffer;
    }
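
Tying this back to the question, a rough sketch of the write path, assuming the adaptor from above and that sampleBuffer (the original camera frame) is still in scope so its timestamp can be reused:

    // Convert the resized UIImage and append it through the adaptor
    // instead of appending the original CMSampleBufferRef.
    CVPixelBufferRef pixelBuffer = [self CVPixelBufferRefFromUiImage:_uiImage];
    CMTime frameTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
    if ([_videoInput isReadyForMoreMediaData])
    {
        [pixelBufferAdapter appendPixelBuffer:pixelBuffer withPresentationTime:frameTime];
    }
    CVPixelBufferRelease(pixelBuffer);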