Create a CVPixelBuffer from YUV with IOSurface backing

So I am getting raw YUV data in 3 separate arrays from a network callback (VoIP app). From what I understand, you cannot create an IOSurface-backed pixel buffer with CVPixelBufferCreateWithPlanarBytes, according to this:

Important: You cannot use CVPixelBufferCreateWithBytes() or CVPixelBufferCreateWithPlanarBytes() with kCVPixelBufferIOSurfacePropertiesKey. Calling CVPixelBufferCreateWithBytes() or CVPixelBufferCreateWithPlanarBytes() will result in CVPixelBuffers that are not IOSurface-backed.

So it seems you have to create the buffer with CVPixelBufferCreate, but how do you get the data from the callback into the CVPixelBufferRef you created?

    void videoCallBack(uint8_t *yPlane, uint8_t *uPlane, uint8_t *vPlane,
                       size_t width, size_t height,
                       size_t yStride, size_t uStride, size_t vStride)
    {
        NSDictionary *pixelAttributes = @{(id)kCVPixelBufferIOSurfacePropertiesKey : @{}};
        CVPixelBufferRef pixelBuffer = NULL;
        CVReturn result = CVPixelBufferCreate(kCFAllocatorDefault,
                                              width,
                                              height,
                                              kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,
                                              (__bridge CFDictionaryRef)(pixelAttributes),
                                              &pixelBuffer);
        // ... now what?
    }

I am not sure what to do after this. Eventually I want to turn it into a CIImage, which I can then render with my GLKView. How do people "put" the data into the buffer once it has been created?

I figured it out, and it was fairly trivial. Here is the full code. The only issue is that I get a BSXPCMessage received error for message: Connection interrupted, and it takes a while for the video to show.

    NSDictionary *pixelAttributes = @{(id)kCVPixelBufferIOSurfacePropertiesKey : @{}};
    CVPixelBufferRef pixelBuffer = NULL;
    CVReturn result = CVPixelBufferCreate(kCFAllocatorDefault,
                                          width,
                                          height,
                                          kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,
                                          (__bridge CFDictionaryRef)(pixelAttributes),
                                          &pixelBuffer);
    if (result != kCVReturnSuccess) {
        DDLogWarn(@"Unable to create cvpixelbuffer %d", result);
    }

    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    uint8_t *yDestPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
    memcpy(yDestPlane, yPlane, width * height);
    uint8_t *uvDestPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
    memcpy(uvDestPlane, uvPlane, numberOfElementsForChroma);
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

    CIImage *coreImage = [CIImage imageWithCVPixelBuffer:pixelBuffer]; //success!
    CVPixelBufferRelease(pixelBuffer);
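The post does not show the rendering step the question asks about (CIImage into a GLKView). A minimal sketch, assuming the GLKView (here called glkView) is already in the view hierarchy and shares an EAGLContext with a CIContext, might look like this:

    // Sketch only: glkView and the drawing hook are assumptions, not part of the original post.
    EAGLContext *eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    glkView.context = eaglContext;
    CIContext *ciContext = [CIContext contextWithEAGLContext:eaglContext];

    // In the draw path (e.g. glkView:drawInRect:), draw the frame into the view's drawable:
    [glkView bindDrawable];
    CGRect destRect = CGRectMake(0, 0, glkView.drawableWidth, glkView.drawableHeight);
    [ciContext drawImage:coreImage inRect:destRect fromRect:[coreImage extent]];
    [glkView display];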

I forgot to add the code that interleaves the two U and V planes, but that shouldn't be too bad.
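One possible version of that interleaving (my sketch, not the poster's code), assuming the uPlane/vPlane pointers and uStride/vStride from the callback signature in the question, writing into the bi-planar (NV12-style) chroma plane before unlocking the buffer:

    // Interleave the separate U (Cb) and V (Cr) planes into the single UV plane.
    uint8_t *uvDest = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
    size_t uvDestStride = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1);
    size_t chromaHeight = height / 2;
    size_t chromaWidth  = width / 2;

    for (size_t row = 0; row < chromaHeight; row++) {
        uint8_t *dst = uvDest + row * uvDestStride;
        const uint8_t *uSrc = uPlane + row * uStride;
        const uint8_t *vSrc = vPlane + row * vStride;
        for (size_t col = 0; col < chromaWidth; col++) {
            dst[2 * col]     = uSrc[col]; // Cb
            dst[2 * col + 1] = vSrc[col]; // Cr
        }
    }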

I had a similar problem. Here is what I ended up with in Swift 2.0, pieced together from other questions and the linked answers.

    func generatePixelBufferFromYUV2(inout yuvFrame: YUVFrame) -> CVPixelBufferRef?
    {
        var uIndex: Int
        var vIndex: Int
        var uvDataIndex: Int
        var pixelBuffer: CVPixelBufferRef? = nil
        var err: CVReturn

        if (pixelBuffer == nil)
        {
            err = CVPixelBufferCreate(kCFAllocatorDefault, yuvFrame.width, yuvFrame.height,
                                      kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange, nil, &pixelBuffer)
            if (err != 0) {
                NSLog("Error at CVPixelBufferCreate %d", err)
                return nil
            }
        }

        if (pixelBuffer != nil)
        {
            CVPixelBufferLockBaseAddress(pixelBuffer!, 0)

            let yBaseAddress = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer!, 0)
            if (yBaseAddress != nil)
            {
                let yData = UnsafeMutablePointer<UInt8>(yBaseAddress)
                let yDataPtr = UnsafePointer<UInt8>(yuvFrame.luma.bytes)
                // Y-plane data
                memcpy(yData, yDataPtr, yuvFrame.luma.length)
            }

            let uvBaseAddress = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer!, 1)
            if (uvBaseAddress != nil)
            {
                let uvData = UnsafeMutablePointer<UInt8>(uvBaseAddress)
                let pUPointer = UnsafePointer<UInt8>(yuvFrame.chromaB.bytes)
                let pVPointer = UnsafePointer<UInt8>(yuvFrame.chromaR.bytes)

                // For the uv data, we need to interleave them as uvuvuvuv....
                let iuvRow = (yuvFrame.chromaB.length * 2 / yuvFrame.width)
                let iHalfWidth = yuvFrame.width / 2

                for i in 0..<iuvRow
                {
                    for j in 0..<iHalfWidth
                    {
                        // UV data for original frame. Just interleave them.
                        uvDataIndex = i * iHalfWidth + j
                        uIndex = (i * yuvFrame.width) + (j * 2)
                        vIndex = uIndex + 1
                        uvData[uIndex] = pUPointer[uvDataIndex]
                        uvData[vIndex] = pVPointer[uvDataIndex]
                    }
                }
            }

            CVPixelBufferUnlockBaseAddress(pixelBuffer!, 0)
        }

        return pixelBuffer
    }

Note: yuvFrame is a struct holding the y, u, and v plane buffers plus width and height. Also, I set the CFDictionary? parameter in CVPixelBufferCreate(…) to nil. If I pass the IOSurface attributes, it fails and complains that it is not IOSurface-backed, or with error -6683.

See these links for more information. This one is about UV interleaving: How to convert from YUV to CIImage for iOS

And a related question: CVOpenGLESTextureCacheCreateTextureFromImage returns error 6683
