CVOpenGLESTextureCacheCreateTextureFromImage fails to create an IOSurface

For my current project I'm reading out the iPhone's main camera and converting the pixel buffer to a cached OpenGL texture through the method CVOpenGLESTextureCacheCreateTextureFromImage. This works great when processing the camera frames used for previewing. Tested on different combinations of iPhone 3GS, 4, 4S, iPod Touch (4th generation) and iOS 5 and iOS 6.

However, for the actual final image, which has a much higher resolution, this only works on these combinations:

  • iPhone 3GS + iOS 5.1.1
  • iPhone 4 + iOS 5.1.1
  • iPhone 4S + iOS 6.0
  • iPod Touch (4th generation) + iOS 5.0

And it does not work for: iPhone 4 + iOS 6.

The exact error message in the console:

    Failed to create IOSurface image (texture)
    2012-10-01 16:24:30.663 GLCameraRipple[676:907] Error at CVOpenGLESTextureCacheCreateTextureFromImage -6683

I've isolated this problem by altering Apple's GLCameraRipple project. You can check out my version here: http://lab.bitshiftcop.com/iosurface.zip

Here's how I add the still output to the current session:

    - (void)setupAVCapture
    {
        //-- Create CVOpenGLESTextureCacheRef for optimal CVImageBufferRef to GLES texture conversion.
        CVReturn err = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, [EAGLContext currentContext], NULL, &_videoTextureCache);
        if (err)
        {
            NSLog(@"Error at CVOpenGLESTextureCacheCreate %d", err);
            return;
        }

        //-- Setup Capture Session.
        _session = [[AVCaptureSession alloc] init];
        [_session beginConfiguration];

        //-- Set preset session size.
        [_session setSessionPreset:_sessionPreset];

        //-- Create a video device and input from that device. Add the input to the capture session.
        AVCaptureDevice *videoDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
        if (videoDevice == nil)
            assert(0);

        //-- Add the device to the session.
        NSError *error;
        AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:&error];
        if (error)
            assert(0);
        [_session addInput:input];

        //-- Create the output for the capture session.
        AVCaptureVideoDataOutput *dataOutput = [[AVCaptureVideoDataOutput alloc] init];
        [dataOutput setAlwaysDiscardsLateVideoFrames:YES]; // Probably want to set this to NO when recording

        //-- Set to BGRA.
        [dataOutput setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
                                                                 forKey:(id)kCVPixelBufferPixelFormatTypeKey]]; // Necessary for manual preview

        // Set dispatch to be on the main thread so OpenGL can do things with the data
        [dataOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];

        // Add still output
        stillOutput = [[AVCaptureStillImageOutput alloc] init];
        [stillOutput setOutputSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
                                                                   forKey:(id)kCVPixelBufferPixelFormatTypeKey]];
        if ([_session canAddOutput:stillOutput])
            [_session addOutput:stillOutput];

        [_session addOutput:dataOutput];
        [_session commitConfiguration];
        [_session startRunning];
    }

And here's how I capture the still output and process it:

    - (void)capturePhoto
    {
        AVCaptureConnection *videoConnection = nil;
        for (AVCaptureConnection *connection in stillOutput.connections) {
            for (AVCaptureInputPort *port in [connection inputPorts]) {
                if ([[port mediaType] isEqual:AVMediaTypeVideo]) {
                    videoConnection = connection;
                    break;
                }
            }
            if (videoConnection) { break; }
        }

        [stillOutput captureStillImageAsynchronouslyFromConnection:videoConnection
                                                 completionHandler:^(CMSampleBufferRef imageSampleBuffer, NSError *error)
        {
            // Process hires image
            [self captureOutput:stillOutput didOutputSampleBuffer:imageSampleBuffer fromConnection:videoConnection];
        }];
    }

And this is how the texture is created:

    - (void)captureOutput:(AVCaptureOutput *)captureOutput
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
           fromConnection:(AVCaptureConnection *)connection
    {
        CVReturn err;
        CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        size_t width = CVPixelBufferGetWidth(pixelBuffer);
        size_t height = CVPixelBufferGetHeight(pixelBuffer);

        if (!_videoTextureCache)
        {
            NSLog(@"No video texture cache");
            return;
        }

        if (_ripple == nil || width != _textureWidth || height != _textureHeight)
        {
            _textureWidth = width;
            _textureHeight = height;

            _ripple = [[RippleModel alloc] initWithScreenWidth:_screenWidth
                                                  screenHeight:_screenHeight
                                                    meshFactor:_meshFactor
                                                   touchRadius:5
                                                  textureWidth:_textureWidth
                                                 textureHeight:_textureHeight];
            [self setupBuffers];
        }

        [self cleanUpTextures];

        NSLog(@"%zi x %zi", _textureWidth, _textureHeight);

        // RGBA texture
        glActiveTexture(GL_TEXTURE0);
        err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                                           _videoTextureCache,
                                                           pixelBuffer,
                                                           NULL,
                                                           GL_TEXTURE_2D,
                                                           GL_RGBA,
                                                           _textureWidth,
                                                           _textureHeight,
                                                           GL_BGRA,
                                                           GL_UNSIGNED_BYTE,
                                                           0,
                                                           &_chromaTexture);
        if (err)
        {
            NSLog(@"Error at CVOpenGLESTextureCacheCreateTextureFromImage %d", err);
        }

        glBindTexture(CVOpenGLESTextureGetTarget(_chromaTexture), CVOpenGLESTextureGetName(_chromaTexture));
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    }

Any suggestions for solving this problem?

The iPhone 4 (as well as the iPhone 3GS and the iPod Touch 4th generation) uses a PowerVR SGX 535 GPU, whose maximum OpenGL ES texture size is 2048×2048. This value can be found by calling

    glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTextureSize);

The iPod Touch 4th generation has a camera resolution of 720×960 and the iPhone 3GS, 640×1136, but the iPhone 4's rear camera resolution is 1936×2592, which is too large to fit onto a single texture.
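
To make the failure mode explicit, a guard along these lines would reject the oversized buffer before the texture cache call. This is only a sketch: `pixelBuffer` stands for the buffer arriving in the captureOutput: callback above, and the query assumes a current GL context.

    GLint maxTextureSize = 0;
    glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTextureSize); // 2048 on the SGX 535

    size_t width  = CVPixelBufferGetWidth(pixelBuffer);
    size_t height = CVPixelBufferGetHeight(pixelBuffer);
    if (width > (size_t)maxTextureSize || height > (size_t)maxTextureSize)
    {
        // The iPhone 4's 1936x2592 still image lands here: it has to be
        // downscaled before CVOpenGLESTextureCacheCreateTextureFromImage can succeed.
        NSLog(@"Buffer %zu x %zu exceeds GL_MAX_TEXTURE_SIZE (%d)", width, height, maxTextureSize);
        return;
    }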

You could always rewrite the captured image at a smaller size that preserves the aspect ratio (1529×2048). Brad Larson does this in his GPUImage framework, but it's fairly straightforward: just redraw the data of the original pixel buffer using Core Graphics, and then create another pixel buffer out of the redrawn data. The rest of the framework is a great resource as well.
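
A minimal sketch of that Core Graphics redraw might look like the following, assuming a 32BGRA source buffer as configured above; the helper name is hypothetical and error checking is omitted. The destination buffer requests IOSurface backing via kCVPixelBufferIOSurfacePropertiesKey so the texture cache can accept it.

    - (CVPixelBufferRef)createScaledPixelBuffer:(CVPixelBufferRef)source
                                        toWidth:(size_t)width
                                         height:(size_t)height
    {
        CVPixelBufferLockBaseAddress(source, kCVPixelBufferLock_ReadOnly);

        // Wrap the source buffer's bytes in a CGImage (BGRA = 32-bit little-endian, alpha first).
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef sourceContext = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(source),
                                                           CVPixelBufferGetWidth(source),
                                                           CVPixelBufferGetHeight(source),
                                                           8,
                                                           CVPixelBufferGetBytesPerRow(source),
                                                           colorSpace,
                                                           kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
        CGImageRef sourceImage = CGBitmapContextCreateImage(sourceContext);

        // Create the smaller destination buffer, IOSurface-backed for the texture cache.
        CVPixelBufferRef scaled = NULL;
        NSDictionary *attributes = @{ (id)kCVPixelBufferIOSurfacePropertiesKey : @{} };
        CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                            kCVPixelFormatType_32BGRA,
                            (__bridge CFDictionaryRef)attributes, &scaled);

        // Draw the image scaled down into the destination buffer's memory.
        CVPixelBufferLockBaseAddress(scaled, 0);
        CGContextRef destContext = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(scaled),
                                                         width, height, 8,
                                                         CVPixelBufferGetBytesPerRow(scaled),
                                                         colorSpace,
                                                         kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
        CGContextDrawImage(destContext, CGRectMake(0, 0, width, height), sourceImage);

        CGContextRelease(destContext);
        CVPixelBufferUnlockBaseAddress(scaled, 0);
        CGImageRelease(sourceImage);
        CGContextRelease(sourceContext);
        CGColorSpaceRelease(colorSpace);
        CVPixelBufferUnlockBaseAddress(source, kCVPixelBufferLock_ReadOnly);
        return scaled; // Caller releases with CVPixelBufferRelease.
    }

The still-image callback would then hand the scaled buffer, rather than the original, to CVOpenGLESTextureCacheCreateTextureFromImage.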

You can't cast a still-image texture to a CVOpenGLESTextureCacheRef. Core Video lets you map video frames directly to OpenGL ES textures: it creates the textures from the video buffers and hands them to you, already stored in video memory.

To create an OpenGL ES texture from the still image, this link may help: link
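
The target of that link isn't preserved here, but as a rough sketch, a still image's pixel buffer can also be uploaded as a plain OpenGL ES texture with glTexImage2D, side-stepping the texture cache entirely. This assumes a 32BGRA buffer with no row padding (compare CVPixelBufferGetBytesPerRow against width * 4 if in doubt).

    GLuint texture;
    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
    // GL_BGRA is available on iOS via the GL_APPLE_texture_format_BGRA8888 extension,
    // paired with a GL_RGBA internal format, just as in the texture cache call above.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,
                 (GLsizei)CVPixelBufferGetWidth(pixelBuffer),
                 (GLsizei)CVPixelBufferGetHeight(pixelBuffer),
                 0, GL_BGRA, GL_UNSIGNED_BYTE,
                 CVPixelBufferGetBaseAddress(pixelBuffer));
    CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

Unlike a cache-mapped texture, this copies the pixel data, so it stays within the GL_MAX_TEXTURE_SIZE limit discussed above but costs an extra upload.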