Recording video and audio with AVFoundation in Swift on iOS

Here I am able to successfully grab the recorded video.

Basically:

  1. Conform to the AVCaptureFileOutputRecordingDelegate protocol
  2. Loop through the available devices
  3. Create a session with the camera
  4. Start recording
  5. Stop recording
  6. Get the recorded video via the delegate method of the protocol above
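For reference, here is a minimal sketch of that flow in current Swift and AVFoundation naming (the code further down in this post uses the older Swift 2-era API); the class and helper names are illustrative, not from the original code:

    import AVFoundation

    // Minimal sketch (illustrative names): a recorder that conforms to
    // AVCaptureFileOutputRecordingDelegate, starts/stops recording to a file,
    // and receives the finished movie in the delegate callback.
    final class Recorder: NSObject, AVCaptureFileOutputRecordingDelegate {
        let captureSession = AVCaptureSession()   // assumed to already have a camera input
        let movieOutput = AVCaptureMovieFileOutput()

        func startRecording(to url: URL) {
            if captureSession.canAddOutput(movieOutput) {
                captureSession.addOutput(movieOutput)
            }
            captureSession.startRunning()
            movieOutput.startRecording(to: url, recordingDelegate: self)
        }

        func stopRecording() {
            movieOutput.stopRecording()
        }

        // Called once the movie file has been written; this is where the
        // recorded video becomes available.
        func fileOutput(_ output: AVCaptureFileOutput,
                        didFinishRecordingTo outputFileURL: URL,
                        from connections: [AVCaptureConnection],
                        error: Error?) {
            print("Finished recording to \(outputFileURL), error: \(String(describing: error))")
        }
    }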

But the file comes without audio.

According to this question, I have to record the audio separately and then merge the video and audio using the classes mentioned there.

But I can't figure out how to record video and audio at the same time.

    for device in devices {
        // Make sure this particular device supports video
        if (device.hasMediaType(AVMediaTypeVideo)) {
            // Finally check the position and confirm we've got the back camera
            if(device.position == AVCaptureDevicePosition.Back) {
                captureDevice = device as? AVCaptureDevice
                if captureDevice != nil {
                    print("Capture device found")
                    beginSession()
                }
            }
        }
    }

In this loop, the only available device types are .Front and .Back.
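Note that the microphone is a separate capture device whose media type is audio (it typically reports an unspecified position), so filtering only on camera position will never surface it. In current AVFoundation naming, which differs from the Swift 2-era code in this post, both devices can also be fetched directly instead of looping; a rough sketch:

    import AVFoundation

    // Illustrative only: grab the back camera and the default microphone directly.
    let camera = AVCaptureDevice.default(.builtInWideAngleCamera,
                                         for: .video,
                                         position: .back)
    let microphone = AVCaptureDevice.default(for: .audio)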

Found the answer, and this answer uses the following code.

It can be done simply by:

  1. Declaring another capture device variable
  2. Looping through the devices and initializing the camera and audio capture device variables
  3. Adding the audio input to the session

    var captureDevice : AVCaptureDevice?
    var captureAudio : AVCaptureDevice?

Loop through the devices and initialize the capture devices:

    var captureDeviceVideoFound: Bool = false
    var captureDeviceAudioFound: Bool = false

    // Loop through all the capture devices on this phone
    for device in devices {
        // Make sure this particular device supports video
        if (device.hasMediaType(AVMediaTypeVideo)) {
            // Finally check the position and confirm we've got the front camera
            if(device.position == AVCaptureDevicePosition.Front) {
                captureDevice = device as? AVCaptureDevice // initialize video
                if captureDevice != nil {
                    print("Capture device found")
                    captureDeviceVideoFound = true
                }
            }
        }
        if(device.hasMediaType(AVMediaTypeAudio)) {
            print("Capture device audio init")
            captureAudio = device as? AVCaptureDevice // initialize audio
            captureDeviceAudioFound = true
        }
    }

    if(captureDeviceAudioFound && captureDeviceVideoFound) {
        beginSession()
    }

Inside beginSession():

    try captureSession.addInput(AVCaptureDeviceInput(device: captureDevice))
    try captureSession.addInput(AVCaptureDeviceInput(device: captureAudio))

This will output the video file with audio. There is no need to merge the audio or do anything else.
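For completeness, here is a hedged sketch of a full session setup along these lines, written against current AVFoundation naming (the answer's code above uses the older Swift 2-era API); the device choices and function name are illustrative:

    import AVFoundation

    // Illustrative sketch: add a camera input, a microphone input, and a movie
    // file output to one session, so the recorded file contains both tracks.
    func configureSession(_ session: AVCaptureSession,
                          movieOutput: AVCaptureMovieFileOutput) throws {
        session.beginConfiguration()
        defer { session.commitConfiguration() }

        // Video input from the front camera.
        if let camera = AVCaptureDevice.default(.builtInWideAngleCamera,
                                                for: .video,
                                                position: .front) {
            let videoInput = try AVCaptureDeviceInput(device: camera)
            if session.canAddInput(videoInput) { session.addInput(videoInput) }
        }

        // Audio input from the default microphone; this is what puts sound
        // into the recorded movie file.
        if let microphone = AVCaptureDevice.default(for: .audio) {
            let audioInput = try AVCaptureDeviceInput(device: microphone)
            if session.canAddInput(audioInput) { session.addInput(audioInput) }
        }

        // A single movie file output writes both tracks to one QuickTime file.
        if session.canAddOutput(movieOutput) { session.addOutput(movieOutput) }
    }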

This Apple documentation was helpful.