What is the best way to record a video with augmented reality?

What is the best way to record a video with augmented reality? (adding text and image logos to frames coming from the iPhone/iPad camera)

Earlier I tried to figure out how to draw into a CIImage (How to draw text into CIImage?) and how to convert the CIImage back into a CMSampleBuffer (CIImage back to CMSampleBuffer).
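For reference, a minimal sketch of one way to produce such a text overlay as a CIImage, using the built-in CITextImageGenerator filter (the text, font and scale values below are placeholders, not the exact code from the linked questions):

import CoreImage

// Sketch: build a CIImage containing a text overlay with the built-in
// CITextImageGenerator filter (iOS 11+). All parameter values are placeholders.
func makeTextOverlay(_ text: String) -> CIImage? {
    guard let filter = CIFilter(name: "CITextImageGenerator") else { return nil }
    filter.setValue(text, forKey: "inputText")
    filter.setValue("HelveticaNeue-Bold", forKey: "inputFontName")
    filter.setValue(36, forKey: "inputFontSize")
    filter.setValue(2, forKey: "inputScaleFactor")
    return filter.outputImage
}

// usage: composite the overlay over the camera frame
// let ciimageSec = makeTextOverlay("Hello AR")
// let combined = ciimageSec?.composited(over: ciimage)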

I got almost everything working; the only remaining problem is recording the video, i.e. appending the new CMSampleBuffer to an AVAssetWriterInput.
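For completeness, the recording side with AVAssetWriter / AVAssetWriterInput looks roughly like this (a minimal sketch; the output URL, dimensions and settings below are placeholders, and error handling is omitted):

import AVFoundation

// Sketch: set up the writer once (URL, dimensions and settings are placeholders),
// then append sample buffers per frame from the capture callback.
func makeWriter(outputURL: URL) throws -> (AVAssetWriter, AVAssetWriterInput) {
    let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mp4)
    let settings: [String: Any] = [
        AVVideoCodecKey: AVVideoCodecType.h264,
        AVVideoWidthKey: 1920,
        AVVideoHeightKey: 1080
    ]
    let input = AVAssetWriterInput(mediaType: .video, outputSettings: settings)
    input.expectsMediaDataInRealTime = true
    writer.add(input)
    return (writer, input)
}

// per frame, inside captureOutput(_:didOutput:from:):
//   if input.isReadyForMoreMediaData { input.append(newSampleBuffer) }
// start with writer.startWriting() + writer.startSession(atSourceTime:),
// finish with input.markAsFinished() + writer.finishWriting { ... }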

But this whole solution is not good anyway: converting the CIImage to a CVPixelBuffer eats a lot of CPU (ciContext.render(ciImage!, to: aBuffer)).

So I want to stop here and find some other way to record a video with augmented reality (for example, dynamically adding (drawing) text into the frames while encoding the video into an mp4 file).

Here is what I tried and don't want to use anymore...

// convert original CMSampleBuffer to CIImage,
// combine multiple `CIImage`s into one
// (adding augmented reality - text or some additional images)
let pixelBuffer: CVPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!
let ciimage: CIImage = CIImage(cvPixelBuffer: pixelBuffer)

var outputImage: CIImage?
let images: Array = [ciimage, ciimageSec!] // add all your CIImages that you'd like to combine
for image in images {
    outputImage = outputImage == nil ? image : image.composited(over: outputImage!)
}

// allocate this class variable once
if pixelBufferNew == nil {
    CVPixelBufferCreate(kCFAllocatorSystemDefault,
                        CVPixelBufferGetWidth(pixelBuffer),
                        CVPixelBufferGetHeight(pixelBuffer),
                        kCVPixelFormatType_32BGRA, nil, &pixelBufferNew)
}

// convert CIImage to CVPixelBuffer
let ciContext = CIContext(options: nil)
if let aBuffer = pixelBufferNew {
    ciContext.render(outputImage!, to: aBuffer) // >>> IT EATS A LOT OF CPU <<<
}

// convert new CVPixelBuffer to new CMSampleBuffer
var sampleTime = CMSampleTimingInfo()
sampleTime.duration = CMSampleBufferGetDuration(sampleBuffer)
sampleTime.presentationTimeStamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
sampleTime.decodeTimeStamp = CMSampleBufferGetDecodeTimeStamp(sampleBuffer)

var videoInfo: CMVideoFormatDescription? = nil
CMVideoFormatDescriptionCreateForImageBuffer(kCFAllocatorDefault, pixelBufferNew!, &videoInfo)

var oBuf: CMSampleBuffer?
CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault, pixelBufferNew!, true, nil, nil, videoInfo!, &sampleTime, &oBuf)

/*
 try to append new CMSampleBuffer into a file (.mp4) using
 AVAssetWriter & AVAssetWriterInput... (I met errors with it,
 original buffer works ok - "from func captureOutput(_ output: AVCaptureOutput,
 didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection)")
*/
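(A side note on the CPU cost: the snippet above creates a new CIContext for every frame; a commonly suggested mitigation is to allocate one Metal-backed context once and reuse it for every render call, roughly as sketched below. This should reduce the per-frame overhead, though the render itself still has a cost.)

import CoreImage
import Metal

// Sketch: create a single Metal-backed CIContext up front and reuse it for
// every frame instead of calling CIContext(options: nil) per buffer.
let sharedCIContext: CIContext = {
    if let device = MTLCreateSystemDefaultDevice() {
        return CIContext(mtlDevice: device)
    }
    return CIContext(options: nil) // fallback, e.g. in the simulator
}()

// per frame:
// sharedCIContext.render(outputImage, to: pixelBufferNew)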

Is there any better solution?

Now I'll answer my own question.

The best way turned out to be an Objective-C++ class (.mm), where we can use OpenCV and easily/quickly convert a CMSampleBuffer to cv::Mat and back to a CMSampleBuffer after processing.

And we can easily call Objective-C++ functions from Swift.
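For example, with a hypothetical Objective-C++ wrapper (here called OpenCVWrapper with a process(_:) class method, exposed to Swift via the project's bridging header; both names are illustrative, not an existing API), the Swift side of the capture callback could look like this:

import AVFoundation

// Sketch of the Swift side only. `OpenCVWrapper` and its `process(_:)` method
// are hypothetical names for the Objective-C++ (.mm) class that converts
// CMSampleBuffer -> cv::Mat, draws the overlay with OpenCV, and converts the
// result back into a CMSampleBuffer.
final class Recorder: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {

    var assetWriterInput: AVAssetWriterInput?

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // hand the frame to the Objective-C++ side and get a new buffer back
        guard let processed = OpenCVWrapper.process(sampleBuffer) else { return }

        if let input = assetWriterInput, input.isReadyForMoreMediaData {
            input.append(processed)
        }
    }
}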