Switching the AVCaptureSession preset when capturing a still photo

My current setup is as follows (based on Brad Larson's ColorTrackingCamera project):

I'm using an AVCaptureSession configured with AVCaptureSessionPreset640x480, and I run its output through an OpenGL scene as a texture. That texture is then manipulated by a fragment shader.
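For reference, frames reach the OpenGL side through the video data output's sample buffer delegate. Here is a minimal sketch of that callback (my paraphrase of the general pattern, not the exact ColorTrackingCamera code):

    // Sketch: AVCaptureVideoDataOutputSampleBufferDelegate callback.
    // Each BGRA frame arrives here; the real project uploads it to OpenGL.
    - (void)captureOutput:(AVCaptureOutput *)captureOutput
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
           fromConnection:(AVCaptureConnection *)connection
    {
        CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        CVPixelBufferLockBaseAddress(pixelBuffer, 0);
        // ... hand the pixel data to the OpenGL texture / fragment shader ...
        CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    }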

I need this "low quality" preset because I want to maintain a high frame rate while the user is previewing. I then want to switch to higher-quality output when the user captures a still photo.

At first I thought I could just change the sessionPreset on the AVCaptureSession, but this forces the camera to readjust itself, which breaks usability.

    [captureSession beginConfiguration];
    captureSession.sessionPreset = AVCaptureSessionPresetPhoto;
    [captureSession commitConfiguration];
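As an aside (not something from my original code): if you do switch presets at runtime, it's probably worth guarding the change with canSetSessionPreset:, since not every device supports every preset. A sketch:

    // Sketch: only switch if the session actually supports the photo preset.
    if ([captureSession canSetSessionPreset:AVCaptureSessionPresetPhoto]) {
        [captureSession beginConfiguration];
        captureSession.sessionPreset = AVCaptureSessionPresetPhoto;
        [captureSession commitConfiguration];
    }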

Currently I'm trying to add a second output, an AVCaptureStillImageOutput, to the AVCaptureSession, but I get an empty pixel buffer back, so I'm a bit stuck.

Here's my session setup code:

    ...

    // Add the video frame output
    [captureSession beginConfiguration];

    videoOutput = [[AVCaptureVideoDataOutput alloc] init];
    [videoOutput setAlwaysDiscardsLateVideoFrames:YES];
    [videoOutput setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
                                                              forKey:(id)kCVPixelBufferPixelFormatTypeKey]];
    [videoOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];

    if ([captureSession canAddOutput:videoOutput]) {
        [captureSession addOutput:videoOutput];
    } else {
        NSLog(@"Couldn't add video output");
    }

    [captureSession commitConfiguration];

    // Add still output
    [captureSession beginConfiguration];
    stillOutput = [[AVCaptureStillImageOutput alloc] init];
    if ([captureSession canAddOutput:stillOutput]) {
        [captureSession addOutput:stillOutput];
    } else {
        NSLog(@"Couldn't add still output");
    }
    [captureSession commitConfiguration];

    // Start capturing
    [captureSession setSessionPreset:AVCaptureSessionPreset640x480];
    if (![captureSession isRunning]) {
        [captureSession startRunning];
    }

    ...

And here's my capture method:

    - (void)prepareForHighResolutionOutput
    {
        // Find the video connection that feeds the still image output
        AVCaptureConnection *videoConnection = nil;
        for (AVCaptureConnection *connection in stillOutput.connections) {
            for (AVCaptureInputPort *port in [connection inputPorts]) {
                if ([[port mediaType] isEqual:AVMediaTypeVideo]) {
                    videoConnection = connection;
                    break;
                }
            }
            if (videoConnection) {
                break;
            }
        }

        [stillOutput captureStillImageAsynchronouslyFromConnection:videoConnection
                                                 completionHandler:^(CMSampleBufferRef imageSampleBuffer, NSError *error) {
            CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(imageSampleBuffer);
            CVPixelBufferLockBaseAddress(pixelBuffer, 0);
            size_t width = CVPixelBufferGetWidth(pixelBuffer);
            size_t height = CVPixelBufferGetHeight(pixelBuffer);
            NSLog(@"%zu x %zu", width, height);
            CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
        }];
    }

(Both width and height come back as 0.)
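For anyone debugging something similar, one diagnostic idea (a sketch I'm adding here, not part of my original debugging) is to log which CoreVideo pixel formats the still output can actually deliver:

    // Sketch: list the CVPixelBuffer formats AVCaptureStillImageOutput supports.
    for (NSNumber *format in stillOutput.availableImageDataCVPixelFormatTypes) {
        NSLog(@"Still output supports pixel format: 0x%08lx",
              (unsigned long)[format unsignedIntegerValue]);
    }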

I've read through the AVFoundation documentation, but I can't seem to find what I need.

I found the solution to my specific problem. I hope it can serve as a guide if anyone runs into the same issue.

The reason the frame rate dropped significantly had to do with an internal conversion between pixel formats; once I set the pixel format explicitly, the frame rate went back up. (As far as I can tell, this also explains the empty pixel buffer: without explicit output settings, AVCaptureStillImageOutput defaults to JPEG-encoded sample buffers, and CMSampleBufferGetImageBuffer returns NULL for those.)

In my case, I was creating a BGRA texture with the following call:

    // Let Core Video create the OpenGL texture from the pixel buffer
    CVReturn err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                                                videoTextureCache,
                                                                pixelBuffer,
                                                                NULL,
                                                                GL_TEXTURE_2D,
                                                                GL_RGBA,
                                                                width,
                                                                height,
                                                                GL_BGRA,
                                                                GL_UNSIGNED_BYTE,
                                                                0,
                                                                &videoTexture);

So when I set up the AVCaptureStillImageOutput instance, I changed my code to:

    // Add still output; explicitly request 32BGRA to match the texture pipeline
    stillOutput = [[AVCaptureStillImageOutput alloc] init];
    [stillOutput setOutputSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
                                                                forKey:(id)kCVPixelBufferPixelFormatTypeKey]];
    if ([captureSession canAddOutput:stillOutput]) {
        [captureSession addOutput:stillOutput];
    } else {
        NSLog(@"Couldn't add still output");
    }

I hope this helps someone someday ;)