Setting grayscale on AVCaptureDevice output in iOS

I want to implement a custom camera in my application, so I am building it with AVCaptureDevice.

Now I want my custom camera to show only grayscale output, so I am trying to use setWhiteBalanceModeLockedWithDeviceWhiteBalanceGains: with AVCaptureWhiteBalanceGains. I am basing my code on AVCamManual: Extending AVCam to Use Manual Capture.

    - (void)setWhiteBalanceGains:(AVCaptureWhiteBalanceGains)gains
    {
        NSError *error = nil;
        if ( [videoDevice lockForConfiguration:&error] ) {
            // Conversion can yield out-of-bound values, cap to limits
            AVCaptureWhiteBalanceGains normalizedGains = [self normalizedGains:gains];
            [videoDevice setWhiteBalanceModeLockedWithDeviceWhiteBalanceGains:normalizedGains completionHandler:nil];
            [videoDevice unlockForConfiguration];
        }
        else {
            NSLog( @"Could not lock device for configuration: %@", error );
        }
    }

But for this I have to pass RGB gain values between 1 and 4, so I created this method to clamp them to the MAX and MIN values.

    - (AVCaptureWhiteBalanceGains)normalizedGains:(AVCaptureWhiteBalanceGains)gains
    {
        AVCaptureWhiteBalanceGains g = gains;
        g.redGain   = MAX( 1.0, g.redGain );
        g.greenGain = MAX( 1.0, g.greenGain );
        g.blueGain  = MAX( 1.0, g.blueGain );
        g.redGain   = MIN( videoDevice.maxWhiteBalanceGain, g.redGain );
        g.greenGain = MIN( videoDevice.maxWhiteBalanceGain, g.greenGain );
        g.blueGain  = MIN( videoDevice.maxWhiteBalanceGain, g.blueGain );
        return g;
    }

I have also tried to get different effects by passing static RGB gain values.

    - (AVCaptureWhiteBalanceGains)normalizedGains:(AVCaptureWhiteBalanceGains)gains
    {
        AVCaptureWhiteBalanceGains g = gains;
        g.redGain   = 3;
        g.greenGain = 2;
        g.blueGain  = 1;
        return g;
    }

Now, I want to apply this grayscale (formula: Pixel = 0.30078125f * R + 0.5859375f * G + 0.11328125f * B) on my custom camera. I have tried this formula.

    - (AVCaptureWhiteBalanceGains)normalizedGains:(AVCaptureWhiteBalanceGains)gains
    {
        AVCaptureWhiteBalanceGains g = gains;
        g.redGain   = g.redGain * 0.30078125;
        g.greenGain = g.greenGain * 0.5859375;
        g.blueGain  = g.blueGain * 0.11328125;

        float grayScale = g.redGain + g.greenGain + g.blueGain;

        g.redGain   = MAX( 1.0, grayScale );
        g.greenGain = MAX( 1.0, grayScale );
        g.blueGain  = MAX( 1.0, grayScale );
        g.redGain   = MIN( videoDevice.maxWhiteBalanceGain, g.redGain );
        g.greenGain = MIN( videoDevice.maxWhiteBalanceGain, g.greenGain );
        g.blueGain  = MIN( videoDevice.maxWhiteBalanceGain, g.blueGain );
        return g;
    }

So how can I pass this value within the 1-to-4 range?

Is there any way, or some scale, to map these values?

Any help would be appreciated.

CoreImage provides a large number of filters for adjusting images on the GPU, and they can be used efficiently on video data, whether from camera input or from a video file.

There is an article on objc.io showing how to do this. The examples are in Objective-C, but the explanation should be clear enough.

The basic steps are:

  1. Create an EAGLContext, configured to use OpenGLES2.
  2. Create a GLKView to display the rendered output.
  3. Create a CIContext, using the same EAGLContext.
  4. Create a CIFilter using the CIColorMonochrome CoreImage filter.
  5. Create an AVCaptureVideoDataOutput with an AVCaptureSession.
  6. In the AVCaptureVideoDataOutput delegate method, convert the CMSampleBuffer to a CIImage. Apply the CIFilter to the image. Draw the filtered image to the CIContext.

This pipeline keeps the video pixel buffers on the GPU (from camera to display) and avoids moving the data to the CPU, which preserves real-time performance.

To save the filtered video, implement an AVAssetWriter and append the sample buffers in the same AVCaptureVideoDataOutput delegate where the filtering is done.
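A minimal sketch (not part of the original answer) of what that could look like, written in the same Swift 2-era style as the example below. The `outputURL`, the 1920x1080 output settings, and the `FilteredWriter` helper are assumptions for illustration; you would call `append(...)` from the capture delegate after filtering.

    import AVFoundation
    import CoreImage

    class FilteredWriter {

        private let writer: AVAssetWriter
        private let writerInput: AVAssetWriterInput
        private let adaptor: AVAssetWriterInputPixelBufferAdaptor
        private var sessionStarted = false

        init(outputURL: NSURL) throws {
            writer = try AVAssetWriter(URL: outputURL, fileType: AVFileTypeQuickTimeMovie)

            // Assumed output dimensions; match them to your capture preset.
            let settings: [String: AnyObject] = [
                AVVideoCodecKey: AVVideoCodecH264,
                AVVideoWidthKey: 1920,
                AVVideoHeightKey: 1080
            ]
            writerInput = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: settings)
            writerInput.expectsMediaDataInRealTime = true

            adaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: writerInput,
                                                           sourcePixelBufferAttributes: nil)
            writer.addInput(writerInput)
            writer.startWriting()
        }

        // Call from captureOutput(_:didOutputSampleBuffer:fromConnection:) after filtering.
        func append(filteredImage: CIImage, context: CIContext, time: CMTime) {
            if !sessionStarted {
                writer.startSessionAtSourceTime(time)
                sessionStarted = true
            }
            if !writerInput.readyForMoreMediaData { return }
            guard let pool = adaptor.pixelBufferPool else { return }

            // Render the filtered CIImage into a pixel buffer from the adaptor's pool,
            // then append it with the sample buffer's presentation time.
            var renderBuffer: CVPixelBuffer?
            CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pool, &renderBuffer)
            guard let buffer = renderBuffer else { return }
            context.render(filteredImage, toCVPixelBuffer: buffer)
            adaptor.appendPixelBuffer(buffer, withPresentationTime: time)
        }

        func finish(completion: () -> Void) {
            writerInput.markAsFinished()
            writer.finishWritingWithCompletionHandler(completion)
        }
    }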

Here is a complete example in Swift.

Example on GitHub.

    import UIKit
    import GLKit
    import AVFoundation

    private let rotationTransform = CGAffineTransformMakeRotation(CGFloat(-M_PI * 0.5))

    class ViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {

        private var context: CIContext!
        private var targetRect: CGRect!
        private var session: AVCaptureSession!
        private var filter: CIFilter!

        @IBOutlet var glView: GLKView!

        override func prefersStatusBarHidden() -> Bool {
            return true
        }

        override func viewDidAppear(animated: Bool) {
            super.viewDidAppear(animated)

            let whiteColor = CIColor(
                red: 1.0,
                green: 1.0,
                blue: 1.0
            )

            filter = CIFilter(
                name: "CIColorMonochrome",
                withInputParameters: [
                    "inputColor" : whiteColor,
                    "inputIntensity" : 1.0
                ]
            )

            // GL context
            let glContext = EAGLContext(API: .OpenGLES2)
            glView.context = glContext
            glView.enableSetNeedsDisplay = false

            context = CIContext(
                EAGLContext: glContext,
                options: [
                    kCIContextOutputColorSpace: NSNull(),
                    kCIContextWorkingColorSpace: NSNull(),
                ]
            )

            let screenSize = UIScreen.mainScreen().bounds.size
            let screenScale = UIScreen.mainScreen().scale

            targetRect = CGRect(
                x: 0,
                y: 0,
                width: screenSize.width * screenScale,
                height: screenSize.height * screenScale
            )

            // Setup capture session.
            let cameraDevice = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo)
            let videoInput = try? AVCaptureDeviceInput(device: cameraDevice)

            let videoOutput = AVCaptureVideoDataOutput()
            videoOutput.setSampleBufferDelegate(self, queue: dispatch_get_main_queue())

            session = AVCaptureSession()
            session.beginConfiguration()
            session.addInput(videoInput)
            session.addOutput(videoOutput)
            session.commitConfiguration()
            session.startRunning()
        }

        func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {

            guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
                return
            }

            let originalImage = CIImage(
                CVPixelBuffer: pixelBuffer,
                options: [
                    kCIImageColorSpace: NSNull()
                ]
            )

            let rotatedImage = originalImage.imageByApplyingTransform(rotationTransform)

            filter.setValue(rotatedImage, forKey: kCIInputImageKey)

            guard let filteredImage = filter.outputImage else {
                return
            }

            context.drawImage(filteredImage, inRect: targetRect, fromRect: filteredImage.extent)

            glView.display()
        }

        func captureOutput(captureOutput: AVCaptureOutput!, didDropSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {
            let seconds = CMTimeGetSeconds(CMSampleBufferGetPresentationTimeStamp(sampleBuffer))
            print("dropped sample buffer: \(seconds)")
        }
    }