iOS – Scaling and cropping a CMSampleBufferRef / CVImageBufferRef

I am using AVFoundation and getting sample buffers from an AVCaptureVideoDataOutput. I can write them directly to a videoWriter using:

    - (void)writeBufferFrame:(CMSampleBufferRef)sampleBuffer {
        CMTime lastSampleTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
        if (self.videoWriter.status != AVAssetWriterStatusWriting) {
            [self.videoWriter startWriting];
            [self.videoWriter startSessionAtSourceTime:lastSampleTime];
        }
        [self.videoWriterInput appendSampleBuffer:sampleBuffer];
    }

What I want to do now is crop and scale the image inside the CMSampleBufferRef without converting it to a UIImage or CGImageRef, because that hurts performance.

If you use vImage, you can work directly on the buffer data without converting it to any image format.

outImg contains the cropped and scaled image data. The relationship between outWidth and cropWidth sets the scaling, and vImage performs the cropping:

    int cropX0, cropY0, cropHeight, cropWidth, outWidth, outHeight;
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);

    vImage_Buffer inBuff;
    inBuff.height = cropHeight;
    inBuff.width = cropWidth;
    inBuff.rowBytes = bytesPerRow;

    int startpos = cropY0 * bytesPerRow + 4 * cropX0;
    inBuff.data = (unsigned char *)baseAddress + startpos;

    unsigned char *outImg = (unsigned char *)malloc(4 * outWidth * outHeight);
    vImage_Buffer outBuff = {outImg, outHeight, outWidth, 4 * outWidth};

    vImage_Error err = vImageScale_ARGB8888(&inBuff, &outBuff, NULL, 0);
    if (err != kvImageNoError) NSLog(@"error %ld", err);

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

So, setting cropX0 = 0 and cropY0 = 0 and setting cropWidth and cropHeight to the original size means no cropping (the whole original image is used). Setting outWidth = cropWidth and outHeight = cropHeight results in no scaling. Note that inBuff.rowBytes should always be the length of a full row of the source buffer, not the cropped length.

You might consider using CoreImage (iOS 5.0+).


    CIImage *ciImage = [CIImage imageWithCVPixelBuffer:CMSampleBufferGetImageBuffer(sampleBuffer)
                                               options:[NSDictionary dictionaryWithObjectsAndKeys:[NSNull null], kCIImageColorSpace, nil]];
    ciImage = [[ciImage imageByApplyingTransform:myScaleTransform] imageByCroppingToRect:myRect];

Note

I recently needed to write this function again and found that this method no longer seems to work (at least I could not get it to work on iOS 10.3.1). The output image is not aligned correctly. I am guessing it is because bytesPerRow is wrong.


Original answer

The buffer is simply an array of pixels, so you can actually process it directly without using vImage. The code is written in Swift, but I think it is easy to find the Objective-C equivalent.

Swift 3

    let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!
    CVPixelBufferLockBaseAddress(imageBuffer, .readOnly)
    let baseAddress = CVPixelBufferGetBaseAddress(imageBuffer)
    let bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer)
    let cropWidth = 640
    let cropHeight = 640
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    let context = CGContext(data: baseAddress,
                            width: cropWidth,
                            height: cropHeight,
                            bitsPerComponent: 8,
                            bytesPerRow: bytesPerRow,
                            space: colorSpace,
                            bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue | CGBitmapInfo.byteOrder32Little.rawValue)
    CVPixelBufferUnlockBaseAddress(imageBuffer, .readOnly)

    // create image
    let cgImage: CGImage = context!.makeImage()!
    let image = UIImage(cgImage: cgImage)

Swift 2

    let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!
    CVPixelBufferLockBaseAddress(imageBuffer, 0)
    let baseAddress = CVPixelBufferGetBaseAddress(imageBuffer)
    let bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer)
    let cropWidth = 640
    let cropHeight = 640
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    let context = CGBitmapContextCreate(baseAddress, cropWidth, cropHeight, 8, bytesPerRow, colorSpace,
                                        CGImageAlphaInfo.NoneSkipFirst.rawValue | CGBitmapInfo.ByteOrder32Little.rawValue)
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0)

    // create image
    let cgImage: CGImageRef = CGBitmapContextCreateImage(context)!
    let image = UIImage(CGImage: cgImage)

If you want to crop from a specific position, add the following code:

    // calculate start position
    let bytesPerPixel = 4
    let startPoint = [ "x": 10, "y": 10 ]
    let startAddress = baseAddress + startPoint["y"]! * bytesPerRow + startPoint["x"]! * bytesPerPixel

and change the baseAddress argument of CGBitmapContextCreate to startAddress. Make sure the crop rectangle does not exceed the width and height of the original image.

For scaling, you can have AVFoundation do it for you. See my recent post here. Setting values for the AVVideoWidth/AVVideoHeight keys will scale images that have a different size. Take a look at the properties here. As for cropping, I am not sure whether AVFoundation can do that for you; you may have to resort to OpenGL or CoreImage. There are a couple of good links in the top post of this SO question.

Try this in Swift 3:

    func resize(_ destSize: CGSize) -> CVPixelBuffer? {
        guard let imageBuffer = CMSampleBufferGetImageBuffer(self) else { return nil }
        // Lock the image buffer
        CVPixelBufferLockBaseAddress(imageBuffer, CVPixelBufferLockFlags(rawValue: 0))
        // Get information about the image
        let baseAddress = CVPixelBufferGetBaseAddress(imageBuffer)
        let bytesPerRow = CGFloat(CVPixelBufferGetBytesPerRow(imageBuffer))
        let height = CGFloat(CVPixelBufferGetHeight(imageBuffer))
        let width = CGFloat(CVPixelBufferGetWidth(imageBuffer))
        var pixelBuffer: CVPixelBuffer?
        let options = [kCVPixelBufferCGImageCompatibilityKey: true,
                       kCVPixelBufferCGBitmapContextCompatibilityKey: true]
        // Offsets that center the crop rectangle (4 bytes per BGRA pixel)
        let topMargin = (height - destSize.height) / CGFloat(2)
        let leftMargin = (width - destSize.width) / CGFloat(2)
        let baseAddressStart = Int(bytesPerRow * topMargin + leftMargin * 4)
        let addressPoint = baseAddress!.assumingMemoryBound(to: UInt8.self)
        let status = CVPixelBufferCreateWithBytes(kCFAllocatorDefault,
                                                  Int(destSize.width),
                                                  Int(destSize.height),
                                                  kCVPixelFormatType_32BGRA,
                                                  &addressPoint[baseAddressStart],
                                                  Int(bytesPerRow),
                                                  nil, nil,
                                                  options as CFDictionary,
                                                  &pixelBuffer)
        CVPixelBufferUnlockBaseAddress(imageBuffer, CVPixelBufferLockFlags(rawValue: 0))
        if status != 0 {
            print(status)
            return nil
        }
        return pixelBuffer
    }