How do I convert a CVPixelBuffer to a UIImage?
I'm having some trouble getting a UIImage from a CVPixelBuffer. This is what I'm trying:
```objc
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(imageDataSampleBuffer);
CFDictionaryRef attachments = CMCopyDictionaryOfAttachments(kCFAllocatorDefault,
                                                            imageDataSampleBuffer,
                                                            kCMAttachmentMode_ShouldPropagate);
CIImage *ciImage = [[CIImage alloc] initWithCVPixelBuffer:pixelBuffer
                                                  options:(NSDictionary *)attachments];
if (attachments)
    CFRelease(attachments);

size_t width = CVPixelBufferGetWidth(pixelBuffer);
size_t height = CVPixelBufferGetHeight(pixelBuffer);
if (width && height) { // test to make sure we have valid dimensions
    UIImage *image = [[UIImage alloc] initWithCIImage:ciImage];

    UIImageView *lv = [[UIImageView alloc] initWithFrame:self.view.frame];
    lv.contentMode = UIViewContentModeScaleAspectFill;
    self.lockedView = lv;
    [lv release];
    self.lockedView.image = image;
    [image release];
}
[ciImage release];
```
`height` and `width` are both correctly set to the resolution of the camera. `image` gets created, but it appears to be black (or maybe transparent?). I can't quite figure out where the problem is. Any ideas would be appreciated.
First, the obvious stuff that doesn't relate directly to your question: AVCaptureVideoPreviewLayer is the cheapest way to pipe video from either camera into an independent view if that's where the data is coming from and you have no immediate plans to modify it. You don't have to do any pushing yourself; the preview layer is directly connected to the AVCaptureSession and updates itself.
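For reference, a minimal preview-layer setup might look something like the following sketch, assuming you already have a configured, running `AVCaptureSession` (`session`) and a `UIView` to host the layer (`hostView`) — both names are placeholders, not from the question:

```objc
#import <AVFoundation/AVFoundation.h>

// Attach a preview layer directly to the capture session; no per-frame
// work is required, the layer pulls frames from the session itself.
AVCaptureVideoPreviewLayer *previewLayer =
    [AVCaptureVideoPreviewLayer layerWithSession:session];
previewLayer.frame = hostView.bounds;
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
[hostView.layer addSublayer:previewLayer];
```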
I have to admit to a lack of confidence about the central question. There's a semantic difference between a CIImage and the other two types of image: a CIImage is a recipe for an image and is not necessarily backed by pixels. It can be something like "take the pixels from here, transform like this, apply this filter, transform like this, merge with this other image, apply this filter". The system doesn't know what a CIImage looks like until you choose to render it. It also doesn't inherently know the appropriate bounds in which to rasterize it.
UIImage purports merely to wrap a CIImage. It doesn't convert it to pixels. Presumably UIImageView should achieve that, but if so, I can't seem to find where you would supply the appropriate output rectangle.
I've had success just dodging around the issue with:
```objc
CIImage *ciImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];

CIContext *temporaryContext = [CIContext contextWithOptions:nil];
CGImageRef videoImage = [temporaryContext
    createCGImage:ciImage
         fromRect:CGRectMake(0, 0,
                             CVPixelBufferGetWidth(pixelBuffer),
                             CVPixelBufferGetHeight(pixelBuffer))];

UIImage *uiImage = [UIImage imageWithCGImage:videoImage];
CGImageRelease(videoImage);
```
This gives an obvious opportunity to specify the output rectangle. I'm sure there's a route through without using a CGImage as an intermediary, so please don't assume this solution is best practice.
Another way to get a UIImage. It performs about 10 times faster, at least in my case:
```objc
int w = CVPixelBufferGetWidth(pixelBuffer);
int h = CVPixelBufferGetHeight(pixelBuffer);
int r = CVPixelBufferGetBytesPerRow(pixelBuffer);
int bytesPerPixel = r / w;

unsigned char *buffer = CVPixelBufferGetBaseAddress(pixelBuffer);

UIGraphicsBeginImageContext(CGSizeMake(w, h));

CGContextRef c = UIGraphicsGetCurrentContext();
unsigned char *data = CGBitmapContextGetData(c);
if (data != NULL) {
    int maxY = h;
    for (int y = 0; y < maxY; y++) {
        for (int x = 0; x < w; x++) {
            int offset = bytesPerPixel * ((w * y) + x);
            data[offset]     = buffer[offset];     // R
            data[offset + 1] = buffer[offset + 1]; // G
            data[offset + 2] = buffer[offset + 2]; // B
            data[offset + 3] = buffer[offset + 3]; // A
        }
    }
}

UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
```
Unless your image data is in some different format that needs swizzling or conversion, I'd recommend not iterating over anything... just slam the data into your context's memory area with memcpy, like so:
```objc
// not here...
// unsigned char *buffer = CVPixelBufferGetBaseAddress(pixelBuffer);

UIGraphicsBeginImageContext(CGSizeMake(w, h));

CGContextRef c = UIGraphicsGetCurrentContext();
void *ctxData = CGBitmapContextGetData(c);

// MUST READ-WRITE LOCK THE PIXEL BUFFER!!!!
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
void *pxData = CVPixelBufferGetBaseAddress(pixelBuffer);
memcpy(ctxData, pxData, 4 * w * h);
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

// ... and so on ...
```
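One caveat with the single memcpy: `CVPixelBufferGetBytesPerRow` can report more than `4 * w`, because Core Video may pad each row for alignment, and the bitmap context's row stride may differ too. When the strides don't match, a row-by-row copy is safer. A sketch, assuming a 4-byte-per-pixel format and the same `w`, `h`, `c`, and `ctxData` variables as above:

```objc
CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

uint8_t *src = CVPixelBufferGetBaseAddress(pixelBuffer);
size_t srcStride = CVPixelBufferGetBytesPerRow(pixelBuffer);
size_t dstStride = CGBitmapContextGetBytesPerRow(c);
size_t rowBytes = 4 * w; // bytes of actual pixel data per row

for (int y = 0; y < h; y++) {
    // Copy only the meaningful bytes of each row, skipping any padding.
    memcpy((uint8_t *)ctxData + y * dstStride,
           src + y * srcStride,
           rowBytes);
}

CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
```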
The previous methods left me with leaked CG raster data. This conversion method didn't leak for me:
```objc
@autoreleasepool {
    CGImageRef cgImage = NULL;
    OSStatus res = CreateCGImageFromCVPixelBuffer(pixelBuffer, &cgImage);
    if (res == noErr) {
        UIImage *image = [UIImage imageWithCGImage:cgImage
                                             scale:1.0
                                       orientation:UIImageOrientationUp];
    }
    CGImageRelease(cgImage);
}

static OSStatus CreateCGImageFromCVPixelBuffer(CVPixelBufferRef pixelBuffer,
                                               CGImageRef *imageOut)
{
    OSStatus err = noErr;
    OSType sourcePixelFormat;
    size_t width, height, sourceRowBytes;
    void *sourceBaseAddr = NULL;
    CGBitmapInfo bitmapInfo;
    CGColorSpaceRef colorspace = NULL;
    CGDataProviderRef provider = NULL;
    CGImageRef image = NULL;

    sourcePixelFormat = CVPixelBufferGetPixelFormatType(pixelBuffer);
    if (kCVPixelFormatType_32ARGB == sourcePixelFormat)
        bitmapInfo = kCGBitmapByteOrder32Big | kCGImageAlphaNoneSkipFirst;
    else if (kCVPixelFormatType_32BGRA == sourcePixelFormat)
        bitmapInfo = kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipFirst;
    else
        return -95014; // only uncompressed pixel formats

    sourceRowBytes = CVPixelBufferGetBytesPerRow(pixelBuffer);
    width = CVPixelBufferGetWidth(pixelBuffer);
    height = CVPixelBufferGetHeight(pixelBuffer);

    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    sourceBaseAddr = CVPixelBufferGetBaseAddress(pixelBuffer);

    colorspace = CGColorSpaceCreateDeviceRGB();

    CVPixelBufferRetain(pixelBuffer);
    provider = CGDataProviderCreateWithData((void *)pixelBuffer,
                                            sourceBaseAddr,
                                            sourceRowBytes * height,
                                            ReleaseCVPixelBuffer);
    image = CGImageCreate(width, height, 8, 32, sourceRowBytes,
                          colorspace, bitmapInfo, provider, NULL,
                          true, kCGRenderingIntentDefault);

    if (err && image) {
        CGImageRelease(image);
        image = NULL;
    }
    if (provider) CGDataProviderRelease(provider);
    if (colorspace) CGColorSpaceRelease(colorspace);
    *imageOut = image;
    return err;
}

static void ReleaseCVPixelBuffer(void *pixel, const void *data, size_t size)
{
    CVPixelBufferRef pixelBuffer = (CVPixelBufferRef)pixel;
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    CVPixelBufferRelease(pixelBuffer);
}
```