Converting kCVPixelFormatType_420YpCbCr8BiPlanarFullRange frames to UIImage

I have an app that captures live video in the kCVPixelFormatType_420YpCbCr8BiPlanarFullRange format in order to process the Y channel. According to Apple's documentation:

kCVPixelFormatType_420YpCbCr8BiPlanarFullRange Bi-Planar Component Y'CbCr 8-bit 4:2:0, full-range (luma = [0,255], chroma = [1,255]). baseAddr points to a big-endian CVPlanarPixelBufferInfo_YCbCrBiPlanar struct.
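For orientation: "bi-planar" means the buffer carries two planes, a full-resolution Y plane followed by a half-resolution interleaved CbCr plane. A minimal sketch of reaching them through the planar CVPixelBuffer accessors (assuming a locked CVPixelBufferRef named `pixelBuffer`):

    // Sketch: accessing the two planes of a bi-planar buffer
    // (`pixelBuffer` is an assumed, already-captured CVPixelBufferRef)
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);

    size_t planeCount = CVPixelBufferGetPlaneCount(pixelBuffer);                 // 2 for bi-planar

    uint8_t *yPlane     = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);    // luma, full resolution
    size_t yBytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);

    uint8_t *cbCrPlane     = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1); // interleaved Cb/Cr, 2x2 subsampled
    size_t cbCrBytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1);

    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);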

I'd like to display some of these frames in a UIViewController. Is there any API to convert them to the kCVPixelFormatType_32BGRA format? Can you give some hints on how to adapt this method provided by Apple?

    // Create a UIImage from sample buffer data
    - (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
    {
        // Get a CMSampleBuffer's Core Video image buffer for the media data
        CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

        // Lock the base address of the pixel buffer
        CVPixelBufferLockBaseAddress(imageBuffer, 0);

        // Get the base address and the number of bytes per row of the pixel buffer
        void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
        size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);

        // Get the pixel buffer width and height
        size_t width = CVPixelBufferGetWidth(imageBuffer);
        size_t height = CVPixelBufferGetHeight(imageBuffer);

        // Create a device-dependent RGB color space
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

        // Create a bitmap graphics context with the sample buffer data
        CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
            bytesPerRow, colorSpace,
            kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);

        // Create a Quartz image from the pixel data in the bitmap graphics context
        CGImageRef quartzImage = CGBitmapContextCreateImage(context);

        // Unlock the pixel buffer
        CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

        // Free up the context and color space
        CGContextRelease(context);
        CGColorSpaceRelease(colorSpace);

        // Create an image object from the Quartz image
        UIImage *image = [UIImage imageWithCGImage:quartzImage];

        // Release the Quartz image
        CGImageRelease(quartzImage);

        return image;
    }
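(As the documentation quote above notes, for the bi-planar format CVPixelBufferGetBaseAddress returns a pointer to the CVPlanarPixelBufferInfo_YCbCrBiPlanar struct rather than to pixel data, so this method can't be pointed at such a buffer as-is. For completeness: I know capture can be configured to deliver BGRA in the first place, roughly as sketched below, but that would give up the separate Y plane I need for processing.)

    // Sketch: asking the capture output for BGRA directly
    // (`videoOutput` is an assumed AVCaptureVideoDataOutput)
    videoOutput.videoSettings = @{
        (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA)
    };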

Thanks!

I'm not aware of any accessible built-in way to convert a bi-planar Y/CbCr image to RGB on iOS. However, you should be able to perform the conversion yourself in software, e.g.

    uint8_t clamp(int16_t num)
    {
        // clamp negative numbers to 0; assumes signed shifts
        // (a valid assumption on iOS)
        num &= ~(num >> 16);

        // clamp numbers greater than 255 to 255; the accumulation
        // of the mask looks odd but is an attempt to avoid
        // pipeline stalls
        uint8_t saturationMask = num >> 8;
        saturationMask |= saturationMask << 4;
        saturationMask |= saturationMask << 2;
        saturationMask |= saturationMask << 1;
        num |= saturationMask;

        return num & 0xff;
    }

    ...

    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    uint8_t *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);

    CVPlanarPixelBufferInfo_YCbCrBiPlanar *bufferInfo =
        (CVPlanarPixelBufferInfo_YCbCrBiPlanar *)baseAddress;

    // the struct is stored big-endian, so convert its fields to native order
    NSUInteger yOffset = EndianU32_BtoN(bufferInfo->componentInfoY.offset);
    NSUInteger yPitch = EndianU32_BtoN(bufferInfo->componentInfoY.rowBytes);
    NSUInteger cbCrOffset = EndianU32_BtoN(bufferInfo->componentInfoCbCr.offset);
    NSUInteger cbCrPitch = EndianU32_BtoN(bufferInfo->componentInfoCbCr.rowBytes);

    uint8_t *rgbBuffer = malloc(width * height * 3);
    uint8_t *yBuffer = baseAddress + yOffset;
    uint8_t *cbCrBuffer = baseAddress + cbCrOffset;

    for(int y = 0; y < height; y++)
    {
        uint8_t *rgbBufferLine = &rgbBuffer[y * width * 3];
        uint8_t *yBufferLine = &yBuffer[y * yPitch];
        uint8_t *cbCrBufferLine = &cbCrBuffer[(y >> 1) * cbCrPitch];

        for(int x = 0; x < width; x++)
        {
            // from ITU-R BT.601, rounded to integers; signed 16-bit
            // intermediates so the offsets below can go negative
            int16_t y = yBufferLine[x] - 16;
            int16_t cb = cbCrBufferLine[x & ~1] - 128;
            int16_t cr = cbCrBufferLine[x | 1] - 128;

            uint8_t *rgbOutput = &rgbBufferLine[x*3];

            rgbOutput[0] = clamp((298 * y + 409 * cr + 128) >> 8);
            rgbOutput[1] = clamp((298 * y - 100 * cb - 208 * cr + 128) >> 8);
            rgbOutput[2] = clamp((298 * y + 516 * cb + 128) >> 8);
        }
    }
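For reference, those fixed-point constants are the BT.601 conversion matrix scaled by 256, so the whole conversion runs on integer multiplies and a shift (a worked note; published coefficients vary by a digit or so between references):

    // ITU-R BT.601, video-range matrix, floating point:
    //   R = 1.164*(Y - 16)                    + 1.596*(Cr - 128)
    //   G = 1.164*(Y - 16) - 0.391*(Cb - 128) - 0.813*(Cr - 128)
    //   B = 1.164*(Y - 16) + 2.017*(Cb - 128)
    //
    // Multiplying by 256 gives the integer coefficients used above:
    //   round(1.164 * 256) = 298     round(1.596 * 256) = 409
    //   round(0.391 * 256) = 100     round(0.813 * 256) = 208
    //   round(2.017 * 256) = 516
    //
    // and "+ 128" before ">> 8" rounds instead of truncating. Strictly,
    // these are video-range constants; the pixel format in the question
    // is full range, for which the 1.164 scale and the 16 offset drop out
    // (compare the 1.4 / 0.343 / 0.711 / 1.765 coefficients in the
    // second answer below).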

The conversion code above was written straight into this box and is untested, but I think I've got the Cb/Cr extraction right. You'd then use CGBitmapContextCreate with rgbBuffer to create a CGImage, and from that a UIImage.
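A hedged sketch of that final step, assuming the width, height and packed 24-bit rgbBuffer from the code above. (Note that CGBitmapContextCreate doesn't accept 24 bits per pixel on iOS, so wrapping the buffer with CGImageCreate via a data provider is one way to do it:)

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, rgbBuffer,
                                                              width * height * 3, NULL);

    // 8 bits per component, 24 bits (3 bytes) per pixel, no alpha
    CGImageRef cgImage = CGImageCreate(width, height, 8, 24, width * 3,
                                       colorSpace, kCGImageAlphaNone,
                                       provider, NULL, NO, kCGRenderingIntentDefault);
    UIImage *image = [UIImage imageWithCGImage:cgImage];

    CGImageRelease(cgImage);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);
    // NB: with a NULL release callback, rgbBuffer must outlive the CGImage;
    // draw or copy the UIImage before calling free(rgbBuffer).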

Most implementations I've found (including the previous answer here) won't work if you change the videoOrientation on the AVCaptureConnection (for some reason I don't fully understand, the CVPlanarPixelBufferInfo_YCbCrBiPlanar struct is empty in that case), so I wrote one that does (most of the code is based on this answer). My implementation also adds an empty alpha channel to the RGB buffer and creates the CGBitmapContext using the kCGImageAlphaNoneSkipLast flag (there's no alpha data, but iOS seems to require 4 bytes per pixel). Here it is:

    #define clamp(a) ((a) > 255 ? 255 : ((a) < 0 ? 0 : (a)))

    - (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
    {
        CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        CVPixelBufferLockBaseAddress(imageBuffer, 0);

        size_t width = CVPixelBufferGetWidth(imageBuffer);
        size_t height = CVPixelBufferGetHeight(imageBuffer);

        // read the two planes through the planar accessors rather than
        // through the CVPlanarPixelBufferInfo_YCbCrBiPlanar struct
        uint8_t *yBuffer = CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
        size_t yPitch = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 0);
        uint8_t *cbCrBuffer = CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 1);
        size_t cbCrPitch = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 1);

        int bytesPerPixel = 4;
        uint8_t *rgbBuffer = malloc(width * height * bytesPerPixel);

        for(int y = 0; y < height; y++)
        {
            uint8_t *rgbBufferLine = &rgbBuffer[y * width * bytesPerPixel];
            uint8_t *yBufferLine = &yBuffer[y * yPitch];
            uint8_t *cbCrBufferLine = &cbCrBuffer[(y >> 1) * cbCrPitch];

            for(int x = 0; x < width; x++)
            {
                int16_t y = yBufferLine[x];
                int16_t cb = cbCrBufferLine[x & ~1] - 128;
                int16_t cr = cbCrBufferLine[x | 1] - 128;

                uint8_t *rgbOutput = &rgbBufferLine[x*bytesPerPixel];

                // full-range BT.601 conversion
                int16_t r = (int16_t)roundf( y + cr *  1.4 );
                int16_t g = (int16_t)roundf( y + cb * -0.343 + cr * -0.711 );
                int16_t b = (int16_t)roundf( y + cb *  1.765 );

                // with kCGBitmapByteOrder32Little + kCGImageAlphaNoneSkipLast
                // the in-memory byte order is (skipped) alpha, B, G, R
                rgbOutput[0] = 0xff;
                rgbOutput[1] = clamp(b);
                rgbOutput[2] = clamp(g);
                rgbOutput[3] = clamp(r);
            }
        }

        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef context = CGBitmapContextCreate(rgbBuffer, width, height, 8,
            width * bytesPerPixel, colorSpace,
            kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipLast);
        CGImageRef quartzImage = CGBitmapContextCreateImage(context);
        UIImage *image = [UIImage imageWithCGImage:quartzImage];

        CGContextRelease(context);
        CGColorSpaceRelease(colorSpace);
        CGImageRelease(quartzImage);
        free(rgbBuffer);

        CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

        return image;
    }
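A usage sketch for context (the delegate method is from AVCaptureVideoDataOutputSampleBufferDelegate; `self.imageView` is an assumed UIImageView):

    - (void)captureOutput:(AVCaptureOutput *)captureOutput
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
           fromConnection:(AVCaptureConnection *)connection
    {
        UIImage *image = [self imageFromSampleBuffer:sampleBuffer];

        // UIKit must only be touched from the main thread; this callback
        // arrives on the queue passed to setSampleBufferDelegate:queue:
        dispatch_async(dispatch_get_main_queue(), ^{
            self.imageView.image = image;
        });
    }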