ios – Video created from texture cache is black

I'm trying to do efficient video processing on iOS using Brad Larson's answer. That answer is about how to get the pixel buffer output efficiently without using glReadPixels. As far as I understand it, you have to load a pixel buffer from the AVAssetWriterInputPixelBufferAdaptor's pixelBufferPool, link it to a texture, and then after each render loop call

    CVPixelBufferUnlockBaseAddress(buffer, CVPixelBufferLockFlags(rawValue: 0))
    writerAdaptor?.append(buffer, withPresentationTime: currentTime)

However, when I try this, the output video is black. The original answer only shows code snippets, not the complete setup. I have also looked at GPUImage, but surprisingly it uses glReadPixels: https://github.com/BradLarson/GPUImage/blob/167b0389bc6e9dc4bb0121550f91d8d5d6412c53/framework/Source/Mac/GPUImageMovieWriter.m#L501
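For context, the glReadPixels readback path (the one the texture-cache approach is meant to avoid) looks roughly like this. This is a hedged sketch, not GPUImage's actual code; it assumes the target framebuffer is bound, an EAGL context is current, and the 400×720 size is illustrative:

```swift
// Slow path: synchronously copy the framebuffer back to CPU memory.
let width = 400, height = 720
var rawPixels = [GLubyte](repeating: 0, count: width * height * 4)

// Blocks until the GPU has finished rendering, then copies the pixels out.
glReadPixels(0, 0, GLsizei(width), GLsizei(height),
             GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), &rawPixels)

// The bytes can then be wrapped in a CVPixelBuffer and appended as usual.
var pixelBuffer: CVPixelBuffer?
CVPixelBufferCreateWithBytes(kCFAllocatorDefault, width, height,
                             kCVPixelFormatType_32RGBA, &rawPixels,
                             width * 4, nil, nil, nil, &pixelBuffer)
```

The texture-cache technique avoids both the stall and the copy by rendering directly into an IOSurface-backed pixel buffer.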

Here is a slightly simplified version of the code I'm trying to get working:

1) Start the camera recording

    override func viewDidLoad() {
        // Start the camera recording.
        session = AVCaptureSession()
        session.sessionPreset = AVCaptureSessionPreset1920x1080

        // Input setup.
        device = AVCaptureDevice.devices(withMediaType: AVMediaTypeVideo).first as? AVCaptureDevice
        input = try? AVCaptureDeviceInput(device: device)
        session.addInput(input)

        // Output setup.
        let output = AVCaptureVideoDataOutput()
        output.alwaysDiscardsLateVideoFrames = true
        output.videoSettings = [
            kCVPixelBufferPixelFormatTypeKey as AnyHashable: kCVPixelFormatType_420YpCbCr8BiPlanarFullRange
        ]
        session.addOutput(output)
        output.setSampleBufferDelegate(self, queue: .main)

        setUpWriter()
    }

2) Start the video writer

    func setUpWriter() {
        // writer: AVAssetWriter
        // input: AVAssetWriterInput
        let attributes: [String: Any] = [
            kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA,
            kCVPixelBufferWidthKey as String: 400,
            kCVPixelBufferHeightKey as String: 720,
            AVVideoScalingModeKey as String: AVVideoScalingModeFit,
            kCVPixelFormatOpenGLESCompatibility as String: true,
            kCVPixelBufferIOSurfacePropertiesKey as String: [:],
        ]
        let adaptor = AVAssetWriterInputPixelBufferAdaptor(
            assetWriterInput: input,
            sourcePixelBufferAttributes: attributes)

        writer.add(input)
        writer.startWriting()
        writer.startSession(atSourceTime: currentTime)

        // pixelBufferPool is nil until writing has started.
        setUpTextureCache(in: adaptor.pixelBufferPool!)
    }

3) Set up the cache as in https://stackoverflow.com/a/9704392/2054629

    func setUpTextureCache(in pool: CVPixelBufferPool) {
        var renderTarget: CVPixelBuffer? = nil
        var renderTexture: CVOpenGLESTexture? = nil
        var coreVideoTextureCache: CVOpenGLESTextureCache? = nil

        var err = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, nil, context, nil, &coreVideoTextureCache)
        if err != kCVReturnSuccess {
            print("Error at CVOpenGLESTextureCacheCreate \(err)")
        }

        err = CVPixelBufferPoolCreatePixelBuffer(nil, pool, &renderTarget)
        if err != kCVReturnSuccess {
            print("Error at CVPixelBufferPoolCreatePixelBuffer \(err)")
        }

        err = CVOpenGLESTextureCacheCreateTextureFromImage(
            kCFAllocatorDefault,
            coreVideoTextureCache!,
            renderTarget!,
            nil,
            GLenum(GL_TEXTURE_2D),
            GL_RGBA,
            GLsizei(400),
            GLsizei(720),
            GLenum(GL_BGRA),
            GLenum(GL_UNSIGNED_BYTE),
            0,
            &renderTexture
        )
        if err != kCVReturnSuccess {
            print("Error at CVOpenGLESTextureCacheCreateTextureFromImage \(err)")
        }

        glBindTexture(CVOpenGLESTextureGetTarget(renderTexture!), CVOpenGLESTextureGetName(renderTexture!))
        glTexParameterf(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_WRAP_S), GLfloat(GL_CLAMP_TO_EDGE))
        glTexParameterf(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_WRAP_T), GLfloat(GL_CLAMP_TO_EDGE))
        glFramebufferTexture2D(GLenum(GL_FRAMEBUFFER), GLenum(GL_COLOR_ATTACHMENT0), GLenum(GL_TEXTURE_2D),
                               CVOpenGLESTextureGetName(renderTexture!), 0)

        self.buffer = renderTarget
    }

4) Append the rendered frame to the video being recorded

    func screenshot(_ frame: CVPixelBuffer) {
        glClearColor(0, 1, 0, 1) // the output should at least be green
        // Do stuff, draw triangles from the frame etc...
        // Even without any of it, I'm at least expecting the output to be green.
        CVPixelBufferLockBaseAddress(buffer, CVPixelBufferLockFlags(rawValue: 0))
        currentTime = CMTimeAdd(currentTime, frameLength)
        writerAdaptor?.append(buffer, withPresentationTime: currentTime)
    }
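One detail worth flagging, as an assumption on my part since the render code is elided: the adaptor reads the IOSurface on the CPU side, so the GPU must have finished rendering into it before the append. Implementations of this technique typically synchronize first, e.g.:

```swift
// Ensure all queued GL commands have actually rendered into the
// IOSurface-backed pixel buffer before handing it to the writer.
glFinish()

currentTime = CMTimeAdd(currentTime, frameLength)
writerAdaptor?.append(buffer, withPresentationTime: currentTime)
```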

5) For each buffer coming from the camera, process it and append the result to the video

    extension ViewController: AVCaptureVideoDataOutputSampleBufferDelegate {
        func captureOutput(_ captureOutput: AVCaptureOutput?,
                           didOutputSampleBuffer sampleBuffer: CMSampleBuffer?,
                           from connection: AVCaptureConnection?) {
            guard let sampleBuffer = sampleBuffer,
                  let frame = CMSampleBufferGetImageBuffer(sampleBuffer) else {
                return
            }
            screenshot(frame)
        }
    }

For simplicity I removed the OpenGL program part. Even without it, I would expect the output to be green, since I call glClearColor(0, 1, 0, 1).
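Note that glClearColor only sets the clear color as GL state; nothing is actually cleared until glClear runs. A minimal sketch of the clear itself, assuming the framebuffer attached in step 3 is bound:

```swift
glClearColor(0, 1, 0, 1)                 // set the clear color to green (GL state only)
glClear(GLbitfield(GL_COLOR_BUFFER_BIT)) // actually fill the color attachment
```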