Is there a way to get the brightness level of the camera stream on iOS?

I'm using the iPhone/iPad camera to get a video stream, and I'm doing recognition on that stream, but as the lighting changes it hurts robustness. I've tested different settings under different lighting conditions and can get it to work, but what I need is a way to adjust the settings at runtime.

I can compute a simple brightness check on each frame, but the camera auto-adjusts and throws off my results. I can watch for drastic changes and run the check then, but gradual changes skew my results as well.

Ideally I'd like to access the camera/EXIF data for the stream and see what it registers as the unfiltered brightness. Is there a way to do this?

(I'm working on devices running iOS 5 and above.)

Thank you

This works on iOS 4.0 and above. It's possible to get the EXIF metadata from the CMSampleBufferRef.

    // Import ImageIO & include the framework in your project.
    #import <ImageIO/CGImageProperties.h>

In your sample buffer delegate, toll-free bridging gets you an NSDictionary of results from Core Media's CMGetAttachment:

    - (void)captureOutput:(AVCaptureOutput *)captureOutput
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
           fromConnection:(AVCaptureConnection *)connection
    {
        NSDictionary *dict = (NSDictionary *)CMGetAttachment(sampleBuffer, kCGImagePropertyExifDictionary, NULL);
    }

The full code, as used in my own app:

    - (void)setupAVCapture
    {
        //-- Setup Capture Session.
        _session = [[AVCaptureSession alloc] init];
        [_session beginConfiguration];

        //-- Set preset session size.
        [_session setSessionPreset:AVCaptureSessionPreset1920x1080];

        //-- Create a video device and input from that device. Add the input to the capture session.
        AVCaptureDevice *videoDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
        if (videoDevice == nil)
            assert(0);

        //-- Add the device to the session.
        NSError *error;
        AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:&error];
        if (error)
            assert(0);
        [_session addInput:input];

        //-- Create the output for the capture session.
        AVCaptureVideoDataOutput *dataOutput = [[AVCaptureVideoDataOutput alloc] init];
        [dataOutput setAlwaysDiscardsLateVideoFrames:YES]; // Probably want to set this to NO when recording

        //-- Set to YUV420.
        [dataOutput setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarFullRange]
                                                                 forKey:(id)kCVPixelBufferPixelFormatTypeKey]]; // Necessary for manual preview

        // Set dispatch to be on the main thread so OpenGL can do things with the data
        [dataOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];

        [_session addOutput:dataOutput];
        [_session commitConfiguration];
        [_session startRunning];
    }

    - (void)captureOutput:(AVCaptureOutput *)captureOutput
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
           fromConnection:(AVCaptureConnection *)connection
    {
        CFDictionaryRef metadataDict = CMCopyDictionaryOfAttachments(NULL, sampleBuffer, kCMAttachmentMode_ShouldPropagate);
        NSDictionary *metadata = [[NSMutableDictionary alloc] initWithDictionary:(__bridge NSDictionary *)metadataDict];
        CFRelease(metadataDict);

        NSDictionary *exifMetadata = [[metadata objectForKey:(NSString *)kCGImagePropertyExifDictionary] mutableCopy];
        self.autoBrightness = [[exifMetadata objectForKey:(NSString *)kCGImagePropertyExifBrightnessValue] floatValue];

        float oldMin = -4.639957; // dark
        float oldMax = 4.639957;  // light
        if (self.autoBrightness > oldMax)
            oldMax = self.autoBrightness; // adjust oldMax if brighter than expected oldMax

        self.lumaThreshold = ((self.autoBrightness - oldMin) * ((3.0 - 1.0) / (oldMax - oldMin))) + 1.0;

        NSLog(@"brightnessValue %f", self.autoBrightness);
        NSLog(@"lumaThreshold %f", self.lumaThreshold);
    }
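The last few lines are just a linear remap from the observed EXIF BrightnessValue range into the [1.0, 3.0] range the shader expects. As a standalone C sketch of the same arithmetic (the function name is mine; the range endpoints are the empirical values from the snippet above, and other devices may report different extremes):

```c
/* Linearly remap an EXIF BrightnessValue from the observed range
 * [oldMin, oldMax] into the target range [newMin, newMax].
 * Endpoint values of roughly +/-4.639957 are what my device reported
 * at the dark/light extremes; treat them as tunable. */
static float remapBrightness(float brightness,
                             float oldMin, float oldMax,
                             float newMin, float newMax)
{
    if (brightness > oldMax)
        oldMax = brightness; /* widen the range if brighter than expected */
    return ((brightness - oldMin) * ((newMax - newMin) / (oldMax - oldMin))) + newMin;
}
```

So a mid-range brightness of 0.0 maps to the middle of the target range (2.0), the dark extreme maps to 1.0, and anything above the expected maximum is clamped to 3.0 by widening the range first.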

The lumaThreshold variable is sent as a uniform to my fragment shader, where it multiplies the Y sampler texture to find the ideal luminosity based on the environment's brightness. Right now it uses the back camera; I'll probably switch to the front camera, since I'm only changing the "brightness" of the screen to suit indoor/outdoor viewing, and the user's eyes are in front of the camera (not behind it).
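Per texel, the shader math amounts to a multiply clamped back into [0, 1]. A CPU-side C sketch of that operation (the clamp and the function name are my assumptions; the actual shader isn't shown here):

```c
/* Scale a normalized Y (luma) sample by the lumaThreshold uniform and
 * clamp the result back into [0, 1], mirroring what the fragment
 * shader does per texel. The clamp is an assumption on my part. */
static float scaleLuma(float y, float lumaThreshold)
{
    float scaled = y * lumaThreshold;
    if (scaled > 1.0f) scaled = 1.0f;
    if (scaled < 0.0f) scaled = 0.0f;
    return scaled;
}
```

With a lumaThreshold of 2.0 (a mid-range ambient brightness), a dim sample of 0.25 brightens to 0.5, while a sample of 0.75 saturates at 1.0.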