Create a UIImage from a CMSampleBuffer
This is not the same as the countless questions about converting a CMSampleBuffer to a UIImage. I'm simply wondering why I can't convert it like this:
    CVPixelBufferRef pixelBuffer = (CVPixelBufferRef)CMSampleBufferGetImageBuffer(sampleBuffer);
    CIImage *imageFromCoreImageLibrary = [CIImage imageWithCVPixelBuffer:pixelBuffer];
    UIImage *imageForUI = [UIImage imageWithCIImage:imageFromCoreImageLibrary];
This seems much simpler, because it works for the YCbCr color space as well as RGBA and others. Is there anything wrong with that code?
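One note up front (mine, not the asker's): a UIImage created with imageWithCIImage: is not backed by a CGImage, so it can behave unexpectedly anywhere a cgImage is needed. As the Option 1 answer below shows, the usual fix is to render through a CIContext first. Here is a minimal Swift sketch of that idea, where cgBackedImage is a hypothetical helper name:

    import UIKit
    import CoreImage
    import CoreMedia

    // Render the CIImage through a CIContext so the resulting UIImage
    // is backed by a CGImage and draws reliably everywhere.
    func cgBackedImage(from sampleBuffer: CMSampleBuffer) -> UIImage? {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return nil }
        let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
        let context = CIContext(options: nil) // in real code, create once and reuse
        guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else { return nil }
        return UIImage(cgImage: cgImage)
    }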
Using Swift 3 and an iOS 10 AVCapturePhotoOutput. Include:
    import UIKit
    import CoreData
    import CoreMotion
    import AVFoundation
Create a UIView for the preview and link it to the main class:
    @IBOutlet var preview: UIView!
Create this to set up the camera session (kCVPixelFormatType_32BGRA is important!):
    lazy var cameraSession: AVCaptureSession = {
        let s = AVCaptureSession()
        s.sessionPreset = AVCaptureSessionPresetHigh
        return s
    }()

    lazy var previewLayer: AVCaptureVideoPreviewLayer = {
        let previewl: AVCaptureVideoPreviewLayer = AVCaptureVideoPreviewLayer(session: self.cameraSession)
        previewl.frame = self.preview.bounds
        return previewl
    }()

    func setupCameraSession() {
        let captureDevice = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeVideo) as AVCaptureDevice

        do {
            let deviceInput = try AVCaptureDeviceInput(device: captureDevice)

            cameraSession.beginConfiguration()

            if (cameraSession.canAddInput(deviceInput) == true) {
                cameraSession.addInput(deviceInput)
            }

            let dataOutput = AVCaptureVideoDataOutput()
            dataOutput.videoSettings = [(kCVPixelBufferPixelFormatTypeKey as NSString): NSNumber(value: kCVPixelFormatType_32BGRA as UInt32)]
            dataOutput.alwaysDiscardsLateVideoFrames = true

            if (cameraSession.canAddOutput(dataOutput) == true) {
                cameraSession.addOutput(dataOutput)
            }

            cameraSession.commitConfiguration()

            let queue = DispatchQueue(label: "fr.popigny.videoQueue", attributes: [])
            dataOutput.setSampleBufferDelegate(self, queue: queue)
        } catch let error as NSError {
            NSLog("\(error), \(error.localizedDescription)")
        }
    }
In viewWillAppear:
    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        setupCameraSession()
    }
In viewDidAppear:
    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        preview.layer.addSublayer(previewLayer)
        cameraSession.startRunning()
    }
Create a function that captures the output:
    func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
        // Here you collect each frame and process it
        let ts: CMTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
        self.mycapturedimage = imageFromSampleBuffer(sampleBuffer: sampleBuffer)
    }
Here is the code that converts a kCVPixelFormatType_32BGRA CMSampleBuffer to a UIImage. The key is that the bitmapInfo must correspond to 32BGRA: 32-bit little-endian byte order with premultiplied-first alpha info:
    func imageFromSampleBuffer(sampleBuffer: CMSampleBuffer) -> UIImage {
        // Get a CMSampleBuffer's Core Video image buffer for the media data
        let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)

        // Lock the base address of the pixel buffer
        CVPixelBufferLockBaseAddress(imageBuffer!, CVPixelBufferLockFlags.readOnly)

        // Get the base address of the pixel buffer
        let baseAddress = CVPixelBufferGetBaseAddress(imageBuffer!)

        // Get the number of bytes per row for the pixel buffer
        let bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer!)

        // Get the pixel buffer width and height
        let width = CVPixelBufferGetWidth(imageBuffer!)
        let height = CVPixelBufferGetHeight(imageBuffer!)

        // Create a device-dependent RGB color space
        let colorSpace = CGColorSpaceCreateDeviceRGB()

        // Create a bitmap graphics context with the sample buffer data;
        // the bitmapInfo must match 32BGRA: little-endian, premultiplied-first alpha
        var bitmapInfo: UInt32 = CGBitmapInfo.byteOrder32Little.rawValue
        bitmapInfo |= CGImageAlphaInfo.premultipliedFirst.rawValue & CGBitmapInfo.alphaInfoMask.rawValue
        let context = CGContext(data: baseAddress, width: width, height: height, bitsPerComponent: 8, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo)

        // Create a Quartz image from the pixel data in the bitmap graphics context
        let quartzImage = context?.makeImage()

        // Unlock the pixel buffer
        CVPixelBufferUnlockBaseAddress(imageBuffer!, CVPixelBufferLockFlags.readOnly)

        // Create an image object from the Quartz image
        let image = UIImage(cgImage: quartzImage!)

        return image
    }
Swift:
    let buff: CMSampleBuffer ...  // assuming you have a CMSampleBuffer
    let imageData = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(buff)
    let image = UIImage(data: imageData!)  // here you have a UIImage
Use the following code to convert an image from a pixel buffer. Option 1:
    CIImage *ciImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
    CIContext *context = [CIContext contextWithOptions:nil];
    CGImageRef myImage = [context createCGImage:ciImage fromRect:CGRectMake(0, 0, CVPixelBufferGetWidth(pixelBuffer), CVPixelBufferGetHeight(pixelBuffer))];
    UIImage *uiImage = [UIImage imageWithCGImage:myImage];
Option 2:
    // Lock the pixel buffer before touching its base address
    CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

    int w = (int)CVPixelBufferGetWidth(pixelBuffer);
    int h = (int)CVPixelBufferGetHeight(pixelBuffer);
    int r = (int)CVPixelBufferGetBytesPerRow(pixelBuffer);
    int bytesPerPixel = r / w;

    unsigned char *buffer = CVPixelBufferGetBaseAddress(pixelBuffer);

    UIGraphicsBeginImageContext(CGSizeMake(w, h));

    CGContextRef c = UIGraphicsGetCurrentContext();
    unsigned char *data = CGBitmapContextGetData(c);
    if (data != NULL) {
        int maxY = h;
        for (int y = 0; y < maxY; y++) {
            for (int x = 0; x < w; x++) {
                int offset = bytesPerPixel * ((w * y) + x);
                data[offset]     = buffer[offset];     // R
                data[offset + 1] = buffer[offset + 1]; // G
                data[offset + 2] = buffer[offset + 2]; // B
                data[offset + 3] = buffer[offset + 3]; // A
            }
        }
    }

    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
I wrote a simple extension for Swift 4.x/3.x to produce a UIImage from a CMSampleBuffer. This also handles scaling and orientation, though you can just accept the default values if they work for you. A usage sketch follows the notes below.
    import UIKit
    import AVFoundation

    extension CMSampleBuffer {
        func image(orientation: UIImageOrientation = .up, scale: CGFloat = 1.0) -> UIImage? {
            if let buffer = CMSampleBufferGetImageBuffer(self) {
                let ciImage = CIImage(cvPixelBuffer: buffer)
                return UIImage(ciImage: ciImage, scale: scale, orientation: orientation)
            }
            return nil
        }
    }
- If it can get buffer data from the image, it proceeds; otherwise nil is returned
- Using the buffer, it initializes a CIImage
- It returns a UIImage initialized with the ciImage value, along with the scale and orientation values; if none are provided, the defaults of up and 1.0 respectively are used
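For example, inside a video data output delegate callback the extension might be used like this (a sketch; the .right orientation is an assumption for a back camera held in portrait):

    func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
        // .right is a guess for a portrait back camera; adjust for your setup
        guard let image = sampleBuffer.image(orientation: .right, scale: UIScreen.main.scale) else { return }
        // hand `image` to the main queue for display, save it, etc.
    }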
This is going to come up in connection with the iOS 10 AVCapturePhotoOutput class. Suppose the user wants to snap a photo, you call capturePhoto(with:delegate:), and your settings include a request for a preview image. That is a splendidly efficient way to get a preview image, but how are you going to display it in your interface? The preview image arrives as a CMSampleBuffer in your implementation of the delegate method:
    func capture(_ output: AVCapturePhotoOutput,
        didFinishProcessingPhotoSampleBuffer buff: CMSampleBuffer?,
        previewPhotoSampleBuffer: CMSampleBuffer?,
        resolvedSettings: AVCaptureResolvedPhotoSettings,
        bracketSettings: AVCaptureBracketedStillImageSettings?,
        error: Error?) {
You need to transform the CMSampleBuffer, previewPhotoSampleBuffer, into a UIImage. How are you going to do that? Like this:
    if let prev = previewPhotoSampleBuffer {
        if let buff = CMSampleBufferGetImageBuffer(prev) {
            let cim = CIImage(cvPixelBuffer: buff)
            let im = UIImage(ciImage: cim)
            // and now you have a UIImage! do something with it ...
        }
    }
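One caveat worth adding (my note, not part of the original answer): this delegate method is not guaranteed to be called on the main queue, so hand the image off before touching your interface. A sketch, where imageView is a hypothetical UIImageView outlet:

    if let prev = previewPhotoSampleBuffer,
        let buff = CMSampleBufferGetImageBuffer(prev) {
        let im = UIImage(ciImage: CIImage(cvPixelBuffer: buff))
        DispatchQueue.main.async {
            // imageView is a hypothetical UIImageView outlet
            self.imageView.image = im
        }
    }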