Playing audio from an AVAudioPCMBuffer with AVAudioEngine

I have two classes, MicrophoneHandler and AudioPlayer. I managed to tap the microphone data using AVCaptureSession, following the approved answer here, and to convert the CMSampleBuffer to NSData with this function:

    func sendDataToDelegate(buffer: CMSampleBuffer!) {
        let block = CMSampleBufferGetDataBuffer(buffer)
        var length = 0
        var data: UnsafeMutablePointer<Int8> = nil

        var status = CMBlockBufferGetDataPointer(block!, 0, nil, &length, &data) // TODO: check for errors

        let result = NSData(bytesNoCopy: data, length: length, freeWhenDone: false)
        self.delegate.handleBuffer(result)
    }
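
One caveat worth noting here (my observation, not part of the original question): `NSData(bytesNoCopy:length:freeWhenDone:)` wraps memory that still belongs to the `CMBlockBuffer`, which can be released as soon as the sample buffer goes away. If the delegate keeps the data around after this call returns, a copying initializer is safer:

    // Safer variant: copy the bytes so the NSData owns its memory,
    // independent of the CMSampleBuffer's lifetime.
    let result = NSData(bytes: data, length: length)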

Now I would like to play the audio by converting the NSData produced above to an AVAudioPCMBuffer and playing it with AVAudioEngine. My AudioPlayer class looks as follows:

    var engine: AVAudioEngine!
    var playerNode: AVAudioPlayerNode!
    var mixer: AVAudioMixerNode!

    override init() {
        super.init()

        self.setup()
        self.start()
    }

    func handleBuffer(data: NSData) {
        let newBuffer = self.toPCMBuffer(data)
        print(newBuffer)
        self.playerNode.scheduleBuffer(newBuffer, completionHandler: nil)
    }

    func setup() {
        self.engine = AVAudioEngine()
        self.playerNode = AVAudioPlayerNode()

        self.engine.attachNode(self.playerNode)
        self.mixer = engine.mainMixerNode

        engine.connect(self.playerNode, to: self.mixer, format: self.mixer.outputFormatForBus(0))
    }

    func start() {
        do {
            try self.engine.start()
        } catch {
            print("error couldn't start engine")
        }
        self.playerNode.play()
    }

    func toPCMBuffer(data: NSData) -> AVAudioPCMBuffer {
        let audioFormat = AVAudioFormat(commonFormat: AVAudioCommonFormat.PCMFormatFloat32, sampleRate: 8000, channels: 2, interleaved: false) // given NSData audio format
        let PCMBuffer = AVAudioPCMBuffer(PCMFormat: audioFormat, frameCapacity: UInt32(data.length) / audioFormat.streamDescription.memory.mBytesPerFrame)
        PCMBuffer.frameLength = PCMBuffer.frameCapacity
        let channels = UnsafeBufferPointer(start: PCMBuffer.floatChannelData, count: Int(PCMBuffer.format.channelCount))
        data.getBytes(UnsafeMutablePointer<Void>(channels[0]), length: data.length)
        return PCMBuffer
    }

When self.delegate.handleBuffer(result) is called in the first snippet above, the buffer arrives in the handleBuffer: function.

I can print(newBuffer) and see the memory location of the converted buffer, but nothing comes out of the speakers. I can only imagine the conversion to and from NSData is inconsistent somewhere. Any ideas? Thanks in advance.
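
One plausible culprit, as a guess rather than a confirmed diagnosis: `toPCMBuffer` hard-codes non-interleaved Float32 at 8 kHz stereo, while `AVCaptureSession` microphone output is typically interleaved integer PCM at the hardware sample rate, so the raw bytes would be misinterpreted. A quick way to check is to read the stream description off the incoming `CMSampleBuffer` in `MicrophoneHandler` and compare it against those hard-coded values:

    import CoreMedia

    // Sketch: log the actual capture format so it can be compared
    // with the format assumed in toPCMBuffer(_:).
    func logFormat(buffer: CMSampleBuffer) {
        guard let description = CMSampleBufferGetFormatDescription(buffer) else { return }
        let asbdPointer = CMAudioFormatDescriptionGetStreamBasicDescription(description)
        if asbdPointer == nil { return }
        let asbd = asbdPointer.memory
        print("sample rate: \(asbd.mSampleRate)")          // toPCMBuffer assumes 8000
        print("channels: \(asbd.mChannelsPerFrame)")       // toPCMBuffer assumes 2
        print("bits per channel: \(asbd.mBitsPerChannel)") // Float32 would be 32
        print("format flags: \(asbd.mFormatFlags)")        // kAudioFormatFlagIsFloat vs. IsSignedInteger
    }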

Skip the raw NSData format

Why not use AVAudioPlayer all the way? If you positively need NSData, you can always load such data from the soundURL below. In this example, the disk buffer is something like:

    let soundURL = documentDirectory.URLByAppendingPathComponent("sound.m4a")

It makes sense to record directly to a file anyway, for optimal memory and resource management. You get NSData from your recording like this:

    let data = NSFileManager.defaultManager().contentsAtPath(soundURL.path!)
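
If you do hold NSData like this, you can also hand it straight to AVAudioPlayer instead of going back through a URL. A minimal sketch, assuming `data` is the non-nil result of the line above and that `self.audioPlayer` is a stored property (the player must outlive this scope or playback stops immediately):

    // Sketch: playing NSData directly via AVAudioPlayer(data:).
    do {
        self.audioPlayer = try AVAudioPlayer(data: data!)
        self.audioPlayer.play()
    } catch {
        print("couldn't create player from data: \(error)")
    }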

The code below is all you need:

Record

    if !audioRecorder.recording {
        let audioSession = AVAudioSession.sharedInstance()
        do {
            try audioSession.setActive(true)
            audioRecorder.record()
        } catch {}
    }
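
One assumption baked into this snippet (not covered in the original answer) is that the user has already granted microphone access; you can request it explicitly before the first recording:

    // Sketch: request microphone permission up front.
    AVAudioSession.sharedInstance().requestRecordPermission { granted in
        if !granted {
            print("microphone access denied")
        }
    }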

Play

    if !audioRecorder.recording {
        do {
            try audioPlayer = AVAudioPlayer(contentsOfURL: audioRecorder.url)
            audioPlayer.play()
        } catch {}
    }

Setup

    let audioSession = AVAudioSession.sharedInstance()
    do {
        try audioSession.setCategory(AVAudioSessionCategoryPlayAndRecord)
        try audioRecorder = AVAudioRecorder(URL: self.directoryURL()!, settings: recordSettings)
        audioRecorder.prepareToRecord()
    } catch {}
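
The snippet above calls a `directoryURL()` helper that the answer doesn't show. A minimal sketch of what it might look like, assuming the recording goes into the app's Documents directory under the same `sound.m4a` name as `soundURL` above:

    // Hypothetical helper: returns a file URL in the Documents directory for the recording.
    func directoryURL() -> NSURL? {
        let urls = NSFileManager.defaultManager().URLsForDirectory(.DocumentDirectory, inDomains: .UserDomainMask)
        guard let documentDirectory = urls.first else { return nil }
        return documentDirectory.URLByAppendingPathComponent("sound.m4a") // assumed file name
    }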

Settings

    let recordSettings = [
        AVSampleRateKey: NSNumber(float: Float(44100.0)),
        AVFormatIDKey: NSNumber(int: Int32(kAudioFormatMPEG4AAC)),
        AVNumberOfChannelsKey: NSNumber(int: 1),
        AVEncoderAudioQualityKey: NSNumber(int: Int32(AVAudioQuality.Medium.rawValue))
    ]

Download the Xcode project:

You can find this example here. Download the complete project, which records and plays on both the simulator and a device, from Swift Recipes.