Corrupt video when capturing audio and video with AVAssetWriter

I am using an AVCaptureSession to capture video and audio input, and encoding the H.264 video with AVAssetWriter.

If I don't write the audio, the video is encoded as expected. But if I write the audio, I get a corrupt video.

If I inspect the audio CMSampleBuffer being supplied to the AVAssetWriter, it shows the following information:

    invalid = NO
    dataReady = YES
    makeDataReadyCallback = 0x0
    makeDataReadyRefcon = 0x0
    formatDescription = {
        mediaType:'soun'
        mediaSubType:'lpcm'
        mediaSpecific: {
            ASBD: {
                mSampleRate: 44100.000000
                mFormatID: 'lpcm'
                mFormatFlags: 0xc
                mBytesPerPacket: 2
                mFramesPerPacket: 1
                mBytesPerFrame: 2
                mChannelsPerFrame: 1
                mBitsPerChannel: 16
            }
            cookie: {(null)}
            ACL: {(null)}
            FormatList Array: {(null)}
        }
        extensions: {(null)}
    }
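For reference, a dump like this can be produced from the capture delegate callback. The following is a minimal sketch (the helper name is mine, not part of the project) that pulls the AudioStreamBasicDescription out of an incoming buffer:

    import CoreMedia

    // Hypothetical helper (not from the original project): extract and
    // print the ASBD of an audio CMSampleBuffer.
    func logAudioFormat(of sampleBuffer: CMSampleBuffer) {
        guard let description = CMSampleBufferGetFormatDescription(sampleBuffer),
              let asbd = CMAudioFormatDescriptionGetStreamBasicDescription(description)?.pointee else {
            return
        }
        print("rate: \(asbd.mSampleRate) formatID: \(asbd.mFormatID) " +
              "channels: \(asbd.mChannelsPerFrame) bits: \(asbd.mBitsPerChannel)")
    }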

Since it is supplying lpcm audio, I have configured the AVAssetWriterInput for sound as follows (I have tried both one and two channels):

    var channelLayout = AudioChannelLayout()
    memset(&channelLayout, 0, MemoryLayout<AudioChannelLayout>.size)
    channelLayout.mChannelLayoutTag = kAudioChannelLayoutTag_Mono

    let audioOutputSettings: [String: Any] = [
        AVFormatIDKey: kAudioFormatLinearPCM,
        AVNumberOfChannelsKey: 1,
        AVSampleRateKey: 44100.0,
        AVLinearPCMIsBigEndianKey: false,
        AVLinearPCMIsFloatKey: false,
        AVLinearPCMBitDepthKey: 16,
        AVLinearPCMIsNonInterleaved: false,
        AVChannelLayoutKey: NSData(bytes: &channelLayout, length: MemoryLayout<AudioChannelLayout>.size)
    ]

    self.assetWriterAudioInput = AVAssetWriterInput(mediaType: AVMediaTypeAudio, outputSettings: audioOutputSettings)
    self.assetWriter.add(self.assetWriterAudioInput)

When I use the lpcm settings above, I cannot open the video with any application. I have tried using kAudioFormatMPEG4AAC and kAudioFormatAppleLossless instead, and I still get a corrupt video, though with those I am able to view it with QuickTime Player 8 (but not QuickTime Player 7); however, the player is confused about the video's duration, and no sound plays.
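For what it's worth, the kAudioFormatMPEG4AAC attempt looked roughly like this (a sketch; the bitrate value here is illustrative, not confirmed from the project):

    import AVFoundation

    // Sketch of the AAC variant (bitrate is an assumption): AAC does not
    // need the linear-PCM keys or an explicit channel layout.
    let aacOutputSettings: [String: Any] = [
        AVFormatIDKey: kAudioFormatMPEG4AAC,
        AVNumberOfChannelsKey: 1,
        AVSampleRateKey: 44100.0,
        AVEncoderBitRateKey: 64000
    ]
    let aacInput = AVAssetWriterInput(mediaType: AVMediaTypeAudio, outputSettings: aacOutputSettings)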

When the recording is complete, I call:

    func endRecording(_ completionHandler: @escaping () -> ()) {
        isRecording = false
        assetWriterVideoInput.markAsFinished()
        assetWriterAudioInput.markAsFinished()
        assetWriter.finishWriting(completionHandler: completionHandler)
    }

This is how the AVCaptureSession is configured:

    func setupCapture() {
        captureSession = AVCaptureSession()
        if (captureSession == nil) {
            fatalError("ERROR: Couldn't create a capture session")
        }
        captureSession?.beginConfiguration()
        captureSession?.sessionPreset = AVCaptureSessionPreset1280x720

        let frontDevices = AVCaptureDevice.devices().filter {
            ($0 as AnyObject).hasMediaType(AVMediaTypeVideo) &&
            ($0 as AnyObject).position == AVCaptureDevicePosition.front
        }

        if let captureDevice = frontDevices.first as? AVCaptureDevice {
            let videoDeviceInput: AVCaptureDeviceInput
            do {
                videoDeviceInput = try AVCaptureDeviceInput(device: captureDevice)
            } catch {
                fatalError("Could not create AVCaptureDeviceInput instance with error: \(error).")
            }
            guard (captureSession?.canAddInput(videoDeviceInput))! else { fatalError() }
            captureSession?.addInput(videoDeviceInput)
        }

        do {
            let audioDevice = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeAudio)
            let audioDeviceInput: AVCaptureDeviceInput
            do {
                audioDeviceInput = try AVCaptureDeviceInput(device: audioDevice)
            } catch {
                fatalError("Could not create AVCaptureDeviceInput instance with error: \(error).")
            }
            guard (captureSession?.canAddInput(audioDeviceInput))! else { fatalError() }
            captureSession?.addInput(audioDeviceInput)
        }

        do {
            let dataOutput = AVCaptureVideoDataOutput()
            dataOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
            dataOutput.alwaysDiscardsLateVideoFrames = true
            let queue = DispatchQueue(label: "com.3DTOPO.videosamplequeue")
            dataOutput.setSampleBufferDelegate(self, queue: queue)
            guard (captureSession?.canAddOutput(dataOutput))! else { fatalError() }
            captureSession?.addOutput(dataOutput)
            videoConnection = dataOutput.connection(withMediaType: AVMediaTypeVideo)
        }

        do {
            let audioDataOutput = AVCaptureAudioDataOutput()
            let queue = DispatchQueue(label: "com.3DTOPO.audiosamplequeue")
            audioDataOutput.setSampleBufferDelegate(self, queue: queue)
            guard (captureSession?.canAddOutput(audioDataOutput))! else { fatalError() }
            captureSession?.addOutput(audioDataOutput)
            audioConnection = audioDataOutput.connection(withMediaType: AVMediaTypeAudio)
        }

        captureSession?.commitConfiguration()

        // this will trigger capture on its own queue
        captureSession?.startRunning()
    }

And the AVCaptureVideoDataOutput delegate method:

    func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
        if (connection == audioConnection) {
            delegate?.audioSampleUpdated(sampleBuffer: sampleBuffer)
            return
        }

        // ... Write video buffer ...//
    }
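The video branch is elided above; a hypothetical minimal version of it (assuming the sample buffer is appended directly to assetWriterVideoInput, which may differ from the real write path) would be:

    // Hypothetical sketch of the elided video branch, assuming the sample
    // buffer is appended as-is; the real project may transform pixel data.
    if isRecording {
        while !assetWriterVideoInput.isReadyForMoreMediaData {}
        if !assetWriterVideoInput.append(sampleBuffer) {
            print("Unable to write to video input")
        }
    }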

which calls:

    func audioSampleUpdated(sampleBuffer: CMSampleBuffer) {
        if (isRecording) {
            while !assetWriterAudioInput.isReadyForMoreMediaData {}
            if (!assetWriterAudioInput.append(sampleBuffer)) {
                print("Unable to write to audio input")
            }
        }
    }
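As an aside, spinning on isReadyForMoreMediaData blocks the capture queue; for live capture the usual pattern (a sketch of the common approach, not this project's code) is to mark the input as real-time and drop buffers the writer isn't ready for:

    // Sketch of the common real-time pattern (an assumption, not the
    // project's code): set once at setup, then skip late buffers.
    assetWriterAudioInput.expectsMediaDataInRealTime = true

    func audioSampleUpdated(sampleBuffer: CMSampleBuffer) {
        guard isRecording, assetWriterAudioInput.isReadyForMoreMediaData else { return }
        if !assetWriterAudioInput.append(sampleBuffer) {
            print("Unable to write to audio input")
        }
    }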

If I disable the assetWriterAudioInput.append() call above, the video isn't corrupt, but then of course I have no audio encoded. How can I get both video and audio encoding to work?

I figured it out. I was setting the assetWriter.startSession source time to zero, and then subtracting the start time from the current CACurrentMediaTime() when writing the pixel data.

I changed the assetWriter.startSession source time to CACurrentMediaTime(), and I no longer subtract the start time when writing the video frames. The audio sample buffers keep their original presentation timestamps, which come from the capture clock (the same timebase as CACurrentMediaTime()), so starting the session at zero while only the video was rebased left the audio timestamps far outside the session, corrupting the file.

The old start-session code:

    assetWriter.startWriting()
    assetWriter.startSession(atSourceTime: kCMTimeZero)

The new code that works:

    let presentationStartTime = CMTimeMakeWithSeconds(CACurrentMediaTime(), 240)
    assetWriter.startWriting()
    assetWriter.startSession(atSourceTime: presentationStartTime)
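With the session started at CACurrentMediaTime(), video frames can be stamped with the capture clock directly. A sketch of the frame-writing side under this scheme (pixelBufferAdaptor here is a hypothetical AVAssetWriterInputPixelBufferAdaptor; the real write path may differ):

    // Sketch under the new scheme (pixelBufferAdaptor and pixelBuffer are
    // assumed from context): stamp each frame with the capture clock,
    // with no start-time subtraction.
    let frameTime = CMTimeMakeWithSeconds(CACurrentMediaTime(), 240)
    pixelBufferAdaptor.append(pixelBuffer, withPresentationTime: frameTime)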