How to convert a kCVPixelFormatType_420YpCbCr8BiPlanarFullRange buffer to a UIImage in iOS

I tried to answer this in the original thread, but SO wouldn't let me. Hopefully someone with more authority can merge this question into the original one.

OK, here is a more complete answer. First, set up the capture:

    // Create capture session
    self.captureSession = [[AVCaptureSession alloc] init];
    [self.captureSession setSessionPreset:AVCaptureSessionPresetPhoto];

    // Setup capture input
    self.inputDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    AVCaptureDeviceInput *captureInput = [AVCaptureDeviceInput deviceInputWithDevice:self.inputDevice
                                                                               error:nil];
    [self.captureSession addInput:captureInput];

    // Setup video processing (capture output)
    AVCaptureVideoDataOutput *captureOutput = [[AVCaptureVideoDataOutput alloc] init];
    // Don't add frames to the queue if frames are already processing
    captureOutput.alwaysDiscardsLateVideoFrames = YES;

    // Create a serial queue to handle processing of frames
    _videoQueue = dispatch_queue_create("cameraQueue", NULL);
    [captureOutput setSampleBufferDelegate:self queue:_videoQueue];

    // Set the video output to store frames in YUV
    NSString *key = (NSString *)kCVPixelBufferPixelFormatTypeKey;
    NSNumber *value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_420YpCbCr8BiPlanarFullRange];
    NSDictionary *videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
    [captureOutput setVideoSettings:videoSettings];
    [self.captureSession addOutput:captureOutput];
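For completeness, here is a minimal sketch of the class interface this setup assumes. The names `_videoQueue`, `captureSession`, `inputDevice` and `_prefImageView` come from the snippets in this answer; the class name, the base class and the property attributes are my assumptions, and the custom `_myMainView` used later is some view of your own with a `yUVImage` image view:

    // Sketch (my assumption) of the interface the capture-setup code relies on.
    // The delegate protocol conformance is required for setSampleBufferDelegate:queue:.
    #import <UIKit/UIKit.h>
    #import <AVFoundation/AVFoundation.h>

    @interface CameraViewController : UIViewController <AVCaptureVideoDataOutputSampleBufferDelegate>
    {
        dispatch_queue_t _videoQueue;   // serial queue the frames are delivered on
    }

    @property (nonatomic, strong) AVCaptureSession *captureSession;
    @property (nonatomic, strong) AVCaptureDevice *inputDevice;
    @property (nonatomic, strong) UIImageView *prefImageView;   // holds the converted frame

    @end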

Now set up the delegate/callback implementation:

    - (void)captureOutput:(AVCaptureOutput *)captureOutput
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
           fromConnection:(AVCaptureConnection *)connection
    {
        // Create autorelease pool because we are not on the main queue
        @autoreleasepool {
            CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

            // Lock the image buffer
            CVPixelBufferLockBaseAddress(imageBuffer, 0);

            // Get information about the image
            uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
            size_t width = CVPixelBufferGetWidth(imageBuffer);
            size_t height = CVPixelBufferGetHeight(imageBuffer);
            size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);

            CVPlanarPixelBufferInfo_YCbCrBiPlanar *bufferInfo = (CVPlanarPixelBufferInfo_YCbCrBiPlanar *)baseAddress;

            // This just moves the pointer past the header to the start of the Y plane
            baseAddress = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);

            // Convert the image
            _prefImageView.image = [self makeUIImage:baseAddress
                                          bufferInfo:bufferInfo
                                               width:width
                                              height:height
                                         bytesPerRow:bytesPerRow];

            // Unlock the image buffer now that we are done reading from it
            CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

            // Update the display with the captured image for DEBUG purposes
            dispatch_async(dispatch_get_main_queue(), ^{
                [_myMainView.yUVImage setImage:_prefImageView.image];
            });
        }
    }

Finally, here is the method that converts from YUV to a UIImage:

    - (UIImage *)makeUIImage:(uint8_t *)inBaseAddress
                  bufferInfo:(CVPlanarPixelBufferInfo_YCbCrBiPlanar *)inBufferInfo
                       width:(size_t)inWidth
                      height:(size_t)inHeight
                 bytesPerRow:(size_t)inBytesPerRow
    {
        NSUInteger yPitch = EndianU32_BtoN(inBufferInfo->componentInfoY.rowBytes);

        uint8_t *rgbBuffer = (uint8_t *)malloc(inWidth * inHeight * 4);
        uint8_t *yBuffer = (uint8_t *)inBaseAddress;
        uint8_t val;
        int bytesPerPixel = 4;

        // For each byte in the input buffer, fill in the output buffer with four bytes:
        // the first byte is the alpha channel, then the next three contain the same
        // value as the input buffer.
        for (int y = 0; y < inHeight * inWidth; y++)
        {
            val = yBuffer[y];

            // Alpha channel
            rgbBuffer[(y * bytesPerPixel)] = 0xff;

            // Next three bytes same as input
            rgbBuffer[(y * bytesPerPixel) + 1] = rgbBuffer[(y * bytesPerPixel) + 2] = rgbBuffer[(y * bytesPerPixel) + 3] = val;
        }

        // Create a device-dependent RGB color space
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

        CGContextRef context = CGBitmapContextCreate(rgbBuffer, yPitch, inHeight, 8,
                                                     yPitch * bytesPerPixel, colorSpace,
                                                     kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedLast);

        CGImageRef quartzImage = CGBitmapContextCreateImage(context);
        CGContextRelease(context);
        CGColorSpaceRelease(colorSpace);

        UIImage *image = [UIImage imageWithCGImage:quartzImage];

        CGImageRelease(quartzImage);
        free(rgbBuffer);

        return image;
    }

You will also need to #import "Endian.h".

Note that the call to CGBitmapContextCreate was much more involved than I expected. I'm not very experienced with video processing, and this call had me confused for a while. Then, when it finally worked, it was like magic.

Background: @Michaelg's version only accesses the Y buffer, so you get only luminance and no color. It also has a buffer-overrun bug if the row pitch in the buffer and the number of pixels per row don't match (padding bytes at the end of each row, for whatever reason). What is going on here is that this is a planar image format, which allocates one byte per pixel for luminance and two bytes per four pixels for color information. Rather than being stored contiguously in memory, the data is stored as "planes": the Y (luminance) plane has its own block of memory, and the CbCr (color) plane has its own block of memory. The CbCr plane holds one quarter the number of samples of the Y plane (half the height and half the width), and each pixel in the CbCr plane corresponds to a 2x2 block in the Y plane. Hopefully this background helps.
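To make that layout concrete, here is a small sketch of my own (not part of the original answer) showing how the pixel at (x, y) maps into the two planes; `yPlane`, `cbCrPlane`, `yPitch` and `cbCrPitch` are assumed to have been obtained from the pixel buffer as shown elsewhere in this answer:

    #include <stddef.h>
    #include <stdint.h>

    // Read the Y and CbCr samples that describe the pixel at (x, y)
    // in a 420YpCbCr8BiPlanar buffer.
    static void SamplePixel(const uint8_t *yPlane, size_t yPitch,
                            const uint8_t *cbCrPlane, size_t cbCrPitch,
                            size_t x, size_t y,
                            uint8_t *outY, uint8_t *outCb, uint8_t *outCr)
    {
        // One Y (luminance) byte per pixel; rows are yPitch bytes apart.
        *outY = yPlane[y * yPitch + x];

        // The CbCr plane is half resolution in both dimensions, with Cb and Cr
        // interleaved, so one Cb/Cr pair covers a 2x2 block of Y samples.
        size_t cbCrRow = y / 2;
        size_t cbCrCol = (x / 2) * 2;   // index of the Cb byte of the pair
        *outCb = cbCrPlane[cbCrRow * cbCrPitch + cbCrCol];
        *outCr = cbCrPlane[cbCrRow * cbCrPitch + cbCrCol + 1];
    }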

Edit: Both his version and my old version have the potential to overrun the buffer, and will not work if the rows in the image buffer have padding bytes at the end of each row. Furthermore, my CbCr plane buffer was not created with the correct offset. To do this correctly you should always use the Core Video functions such as CVPixelBufferGetWidthOfPlane and CVPixelBufferGetBaseAddressOfPlane. This ensures that you are interpreting the buffer correctly, and it will work regardless of whether the buffer has a header and whether or not you do the pointer math yourself. You should use the row sizes and buffer base addresses that Apple's functions return. These are documented at: https://developer.apple.com/library/prerelease/ios/documentation/QuartzCore/Reference/CVPixelBufferRef/index.html Note that while the version here makes some use of Apple's functions and some use of the header, it is best to use only Apple's functions. I may update this in the future to not use the header at all.
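As a concrete illustration of that advice, here is a sketch (mine, not the code this answer actually uses) that queries everything through the Core Video plane accessors instead of the CVPlanarPixelBufferInfo_YCbCrBiPlanar header; it assumes `imageBuffer` came from CMSampleBufferGetImageBuffer():

    // Query plane geometry with Core Video accessors only.
    CVPixelBufferLockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);

    uint8_t *yPlane     = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
    size_t   yWidth     = CVPixelBufferGetWidthOfPlane(imageBuffer, 0);
    size_t   yHeight    = CVPixelBufferGetHeightOfPlane(imageBuffer, 0);
    size_t   yPitch     = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 0);

    uint8_t *cbCrPlane  = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 1);
    size_t   cbCrWidth  = CVPixelBufferGetWidthOfPlane(imageBuffer, 1);   // half of yWidth for 4:2:0
    size_t   cbCrHeight = CVPixelBufferGetHeightOfPlane(imageBuffer, 1);  // half of yHeight for 4:2:0
    size_t   cbCrPitch  = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 1);

    // ... read the planes here, always stepping rows by the pitch values,
    //     never by the pixel width ...

    CVPixelBufferUnlockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);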

This converts a kCVPixelFormatType_420YpCbCr8BiPlanarFullRange buffer into a UIImage that you can use.

First, set up the capture:

    // Create capture session
    self.captureSession = [[AVCaptureSession alloc] init];
    [self.captureSession setSessionPreset:AVCaptureSessionPresetPhoto];

    // Setup capture input
    self.inputDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    AVCaptureDeviceInput *captureInput = [AVCaptureDeviceInput deviceInputWithDevice:self.inputDevice
                                                                               error:nil];
    [self.captureSession addInput:captureInput];

    // Setup video processing (capture output)
    AVCaptureVideoDataOutput *captureOutput = [[AVCaptureVideoDataOutput alloc] init];
    // Don't add frames to the queue if frames are already processing
    captureOutput.alwaysDiscardsLateVideoFrames = YES;

    // Create a serial queue to handle processing of frames
    _videoQueue = dispatch_queue_create("cameraQueue", NULL);
    [captureOutput setSampleBufferDelegate:self queue:_videoQueue];

    // Set the video output to store frames in YUV
    NSString *key = (NSString *)kCVPixelBufferPixelFormatTypeKey;
    NSNumber *value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_420YpCbCr8BiPlanarFullRange];
    NSDictionary *videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
    [captureOutput setVideoSettings:videoSettings];
    [self.captureSession addOutput:captureOutput];
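One thing the setup snippet does not show is actually starting the session once the input and output are attached; assuming the setup above, something along these lines is still needed (and on newer iOS versions the app also needs camera permission via the NSCameraUsageDescription Info.plist key):

    // Start delivering frames to captureOutput:didOutputSampleBuffer:fromConnection:
    [self.captureSession startRunning];

    // ...and stop when the view goes away or the frames are no longer needed:
    // [self.captureSession stopRunning];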

Now set up the delegate/callback implementation:

    - (void)captureOutput:(AVCaptureOutput *)captureOutput
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
           fromConnection:(AVCaptureConnection *)connection
    {
        // Create autorelease pool because we are not on the main queue
        @autoreleasepool {
            CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

            // Lock the image buffer
            CVPixelBufferLockBaseAddress(imageBuffer, 0);

            // Get information about the image
            uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
            size_t width = CVPixelBufferGetWidth(imageBuffer);
            size_t height = CVPixelBufferGetHeight(imageBuffer);
            size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);

            CVPlanarPixelBufferInfo_YCbCrBiPlanar *bufferInfo = (CVPlanarPixelBufferInfo_YCbCrBiPlanar *)baseAddress;

            // Get the CbCr plane base address
            uint8_t *cbrBuff = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 1);

            // This just moves the pointer past the header to the start of the Y plane
            baseAddress = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);

            // Convert the image
            _prefImageView.image = [self makeUIImage:baseAddress
                                          cBCrBuffer:cbrBuff
                                          bufferInfo:bufferInfo
                                               width:width
                                              height:height
                                         bytesPerRow:bytesPerRow];

            // Unlock the image buffer now that we are done reading from it
            CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

            // Update the display with the captured image for DEBUG purposes
            dispatch_async(dispatch_get_main_queue(), ^{
                [_myMainView.yUVImage setImage:_prefImageView.image];
            });
        }
    }

Finally, here is the method that converts from YUV to a UIImage:

    - (UIImage *)makeUIImage:(uint8_t *)inBaseAddress
                  cBCrBuffer:(uint8_t *)cbCrBuffer
                  bufferInfo:(CVPlanarPixelBufferInfo_YCbCrBiPlanar *)inBufferInfo
                       width:(size_t)inWidth
                      height:(size_t)inHeight
                 bytesPerRow:(size_t)inBytesPerRow
    {
        NSUInteger yPitch = EndianU32_BtoN(inBufferInfo->componentInfoY.rowBytes);
        NSUInteger cbCrOffset = EndianU32_BtoN(inBufferInfo->componentInfoCbCr.offset);
        NSUInteger cbCrPitch = EndianU32_BtoN(inBufferInfo->componentInfoCbCr.rowBytes);

        uint8_t *rgbBuffer = (uint8_t *)malloc(inWidth * inHeight * 4);
        uint8_t *yBuffer = (uint8_t *)inBaseAddress;
        //uint8_t *cbCrBuffer = inBaseAddress + cbCrOffset;
        int bytesPerPixel = 4;

        for (int y = 0; y < inHeight; y++)
        {
            uint8_t *rgbBufferLine = &rgbBuffer[y * inWidth * bytesPerPixel];
            uint8_t *yBufferLine = &yBuffer[y * yPitch];
            // Each CbCr row covers two Y rows (4:2:0 subsampling)
            uint8_t *cbCrBufferLine = &cbCrBuffer[(y >> 1) * cbCrPitch];

            for (int x = 0; x < inWidth; x++)
            {
                int16_t yVal = yBufferLine[x];
                // Cb and Cr are interleaved; each pair covers two Y columns
                int16_t cb = cbCrBufferLine[x & ~1] - 128;
                int16_t cr = cbCrBufferLine[x | 1] - 128;

                uint8_t *rgbOutput = &rgbBufferLine[x * bytesPerPixel];

                int16_t r = (int16_t)roundf( yVal + cr * 1.4 );
                int16_t g = (int16_t)roundf( yVal + cb * -0.343 + cr * -0.711 );
                int16_t b = (int16_t)roundf( yVal + cb * 1.765 );

                // ABGR
                rgbOutput[0] = 0xff;
                rgbOutput[1] = clamp(b);
                rgbOutput[2] = clamp(g);
                rgbOutput[3] = clamp(r);
            }
        }

        // Create a device-dependent RGB color space
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

        NSLog(@"ypitch:%lu inHeight:%zu bytesPerPixel:%d", (unsigned long)yPitch, inHeight, bytesPerPixel);
        NSLog(@"cbcrPitch:%lu", (unsigned long)cbCrPitch);

        CGContextRef context = CGBitmapContextCreate(rgbBuffer, inWidth, inHeight, 8,
                                                     inWidth * bytesPerPixel, colorSpace,
                                                     kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedLast);

        CGImageRef quartzImage = CGBitmapContextCreateImage(context);
        CGContextRelease(context);
        CGColorSpaceRelease(colorSpace);

        UIImage *image = [UIImage imageWithCGImage:quartzImage];

        CGImageRelease(quartzImage);
        free(rgbBuffer);

        return image;
    }
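For reference (my addition, not part of the original answer), the constants 1.4, -0.343, -0.711 and 1.765 in the loop are an approximation of the usual full-range BT.601 (JPEG/JFIF) YCbCr-to-RGB conversion. Written out as a standalone helper it would look roughly like this:

    #include <stdint.h>

    // Reference full-range BT.601 conversion, which the constants above approximate.
    static void YCbCrToRGB(uint8_t yVal, uint8_t cbVal, uint8_t crVal,
                           uint8_t *r, uint8_t *g, uint8_t *b)
    {
        float cb = cbVal - 128.0f;
        float cr = crVal - 128.0f;

        float rf = yVal + 1.402f * cr;
        float gf = yVal - 0.344f * cb - 0.714f * cr;
        float bf = yVal + 1.772f * cb;

        // Saturate the results to the 0..255 byte range
        *r = (uint8_t)(rf < 0 ? 0 : (rf > 255 ? 255 : rf));
        *g = (uint8_t)(gf < 0 ? 0 : (gf > 255 ? 255 : gf));
        *b = (uint8_t)(bf < 0 ? 0 : (bf > 255 ? 255 : bf));
    }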

You will also need to #import "Endian.h" and define the clamp macro: #define clamp(a) ((a) > 255 ? 255 : ((a) < 0 ? 0 : (a))) (note: no trailing semicolon, and the argument is parenthesized).
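If you prefer to avoid macro pitfalls altogether, an equivalent inline function (my variant, not the original author's) works just as well:

    #include <stdint.h>

    // Inline alternative to the clamp() macro: saturate an intermediate value to 0..255.
    static inline uint8_t clamp8(int16_t v)
    {
        return (uint8_t)(v > 255 ? 255 : (v < 0 ? 0 : v));
    }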

Note that the call to CGBitmapContextCreate was much more involved than I expected. I'm not very experienced with video processing, and this call had me confused for a while. Then, when it finally worked, it was like magic.
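Since that call is the confusing part, here is the same CGBitmapContextCreate call from the method above with each parameter annotated; the annotations are my reading of the Quartz documentation, and the variables (rgbBuffer, inWidth, inHeight, bytesPerPixel, colorSpace) are the ones from the conversion method:

    // With kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedLast the in-memory
    // byte order works out to A, B, G, R, which matches what the loop above writes.
    CGContextRef context = CGBitmapContextCreate(
        rgbBuffer,                 // the raw pixel data the loop just filled in
        inWidth,                   // width in pixels
        inHeight,                  // height in pixels
        8,                         // bits per component (one byte per channel)
        inWidth * bytesPerPixel,   // bytes per row of rgbBuffer (no row padding here)
        colorSpace,                // the device RGB color space created above
        kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedLast);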