Can't play back audio captured from the microphone with AVCaptureAudioDataOutputSampleBufferDelegate

I have been searching and researching on Google, but I can't seem to get this working, and I can't find any solution on the internet.

I am trying to capture my voice with the microphone and then play it back through the speaker.

Here is my code:

    class ViewController: UIViewController, AVAudioRecorderDelegate, AVCaptureAudioDataOutputSampleBufferDelegate {

        var recordingSession: AVAudioSession!
        var audioRecorder: AVAudioRecorder!
        var captureSession: AVCaptureSession!
        var microphone: AVCaptureDevice!
        var inputDevice: AVCaptureDeviceInput!
        var outputDevice: AVCaptureAudioDataOutput!

        override func viewDidLoad() {
            super.viewDidLoad()

            recordingSession = AVAudioSession.sharedInstance()
            do {
                try recordingSession.setCategory(AVAudioSessionCategoryPlayAndRecord)
                try recordingSession.setMode(AVAudioSessionModeVoiceChat)
                try recordingSession.setPreferredSampleRate(44000.00)
                try recordingSession.setPreferredIOBufferDuration(0.2)
                try recordingSession.setActive(true)

                recordingSession.requestRecordPermission() { [unowned self] (allowed: Bool) -> Void in
                    DispatchQueue.main.async {
                        if allowed {
                            do {
                                self.microphone = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeAudio)
                                try self.inputDevice = AVCaptureDeviceInput.init(device: self.microphone)

                                self.outputDevice = AVCaptureAudioDataOutput()
                                self.outputDevice.setSampleBufferDelegate(self, queue: DispatchQueue.main)

                                self.captureSession = AVCaptureSession()
                                self.captureSession.addInput(self.inputDevice)
                                self.captureSession.addOutput(self.outputDevice)
                                self.captureSession.startRunning()
                            } catch let error {
                                print(error.localizedDescription)
                            }
                        }
                    }
                }
            } catch let error {
                print(error.localizedDescription)
            }
        }

And the callback function:

    func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {

        var audioBufferList = AudioBufferList(
            mNumberBuffers: 1,
            mBuffers: AudioBuffer(mNumberChannels: 0, mDataByteSize: 0, mData: nil)
        )
        var blockBuffer: CMBlockBuffer?

        let osStatus = CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(
            sampleBuffer,
            nil,
            &audioBufferList,
            MemoryLayout<AudioBufferList>.size,
            nil,
            nil,
            UInt32(kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment),
            &blockBuffer
        )

        do {
            let data = NSMutableData()
            for _ in 0..<audioBufferList.mNumberBuffers {
                let audioBuffer = AudioBuffer(
                    mNumberChannels: audioBufferList.mBuffers.mNumberChannels,
                    mDataByteSize: audioBufferList.mBuffers.mDataByteSize,
                    mData: audioBufferList.mBuffers.mData
                )
                data.append(audioBuffer.mData!, length: Int(audioBuffer.mDataByteSize))
            }

            let dataFromNsData = Data(referencing: data)
            let avAudioPlayer = try AVAudioPlayer(data: dataFromNsData)
            avAudioPlayer.prepareToPlay()
            avAudioPlayer.play()
        } catch let error {
            print(error.localizedDescription)
            // prints: The operation couldn't be completed. (OSStatus error 1954115647.)
        }
    }

Any help would be amazing, and it would probably help a lot of other people too, since there are so many incomplete Swift examples of this out there.

Thanks.

You're so close! You are capturing audio in the didOutputSampleBuffer callback, but that is a high-frequency callback, so you are creating a lot of AVAudioPlayers and handing them raw LPCM data, when they only know how to parse Core Audio file types (that OSStatus 1954115647 is the four-character code 'typ?', i.e. kAudioFileUnsupportedFileTypeError), and then they fall out of scope anyway.
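For what it's worth, if you wanted to keep the AVCaptureSession pipeline, the captured CMSampleBuffers can be converted into AVAudioPCMBuffers and scheduled on an AVAudioPlayerNode instead of being fed to an AVAudioPlayer. A rough sketch of that conversion (pcmBuffer(from:) is a hypothetical helper, and the exact Core Media signatures vary a little between SDK versions):

    import AVFoundation

    // Hypothetical helper: wrap the raw LPCM samples of a CMSampleBuffer in an
    // AVAudioPCMBuffer that an AVAudioPlayerNode can schedule directly.
    func pcmBuffer(from sampleBuffer: CMSampleBuffer) -> AVAudioPCMBuffer? {
        guard
            let formatDescription = CMSampleBufferGetFormatDescription(sampleBuffer),
            let streamDescription = CMAudioFormatDescriptionGetStreamBasicDescription(formatDescription),
            let format = AVAudioFormat(streamDescription: streamDescription)
        else { return nil }

        let frameCount = AVAudioFrameCount(CMSampleBufferGetNumSamples(sampleBuffer))
        guard let buffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: frameCount) else { return nil }
        buffer.frameLength = frameCount

        // Copy the samples straight into the PCM buffer's AudioBufferList.
        CMSampleBufferCopyPCMDataIntoAudioBufferList(sampleBuffer,
                                                     at: 0,
                                                     frameCount: Int32(frameCount),
                                                     into: buffer.mutableAudioBufferList)
        return buffer
    }

In captureOutput you would then schedule each converted buffer on one long-lived AVAudioPlayerNode rather than creating a new AVAudioPlayer per callback.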

Using AVAudioEngine and an AVAudioPlayerNode, you can play the buffers captured with your AVCaptureSession very easily, but at that point you may as well use AVAudioEngine to record from the microphone too:

    import UIKit
    import AVFoundation

    class ViewController: UIViewController {

        var engine = AVAudioEngine()

        override func viewDidLoad() {
            super.viewDidLoad()

            let input = engine.inputNode!
            let player = AVAudioPlayerNode()
            engine.attach(player)

            let bus = 0
            let inputFormat = input.inputFormat(forBus: bus)
            engine.connect(player, to: engine.mainMixerNode, format: inputFormat)

            // Tap the microphone input and schedule each captured buffer for playback.
            input.installTap(onBus: bus, bufferSize: 512, format: inputFormat) { (buffer, time) -> Void in
                player.scheduleBuffer(buffer)
            }

            try! engine.start()
            player.play()
        }
    }
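One practical note: the engine's input node only delivers audio while an audio session that permits recording is active, so on a real device you would typically activate something like the play-and-record session from the question before calling engine.start(). And since this routes the microphone straight to the speaker, expect feedback unless you wear headphones. A minimal sketch of that session setup:

    let session = AVAudioSession.sharedInstance()
    // .playAndRecord so the mic input and speaker output are both available.
    try session.setCategory(AVAudioSessionCategoryPlayAndRecord)
    try session.setActive(true)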