Low-latency input/output with AudioQueue
I have two iOS AudioQueues: one input, which feeds samples directly to one output. Unfortunately, there is a clearly audible echo effect :(
Is it possible to do low-latency audio with AudioQueues, or do I really need to use AudioUnits? (I have tried the Novocaine framework, which uses AudioUnits; the latency there is much lower, and I also noticed that the framework seems to use less CPU. Unfortunately, I was not able to use it in my Swift project without major modifications.)
Below are some excerpts from my code. It is mostly done in Swift, except for those callbacks which need to be implemented in C:
```swift
private let audioStreamBasicDescription = AudioStreamBasicDescription(
    mSampleRate: 16000,
    mFormatID: AudioFormatID(kAudioFormatLinearPCM),
    mFormatFlags: AudioFormatFlags(kAudioFormatFlagsNativeFloatPacked),
    mBytesPerPacket: 4,
    mFramesPerPacket: 1,
    mBytesPerFrame: 4,
    mChannelsPerFrame: 1,
    mBitsPerChannel: 32,
    mReserved: 0)

private let numberOfBuffers = 80
private let bufferSize: UInt32 = 256
private var active = false

private var inputQueue: AudioQueueRef = nil
private var outputQueue: AudioQueueRef = nil
private var inputBuffers = [AudioQueueBufferRef]()
private var outputBuffers = [AudioQueueBufferRef]()
private var headOfFreeOutputBuffers: AudioQueueBufferRef = nil

// callbacks implemented in Swift
private func audioQueueInputCallback(inputBuffer: AudioQueueBufferRef) {
    if active {
        if headOfFreeOutputBuffers != nil {
            let outputBuffer = headOfFreeOutputBuffers
            headOfFreeOutputBuffers = AudioQueueBufferRef(outputBuffer.memory.mUserData)
            outputBuffer.memory.mAudioDataByteSize = inputBuffer.memory.mAudioDataByteSize
            memcpy(outputBuffer.memory.mAudioData, inputBuffer.memory.mAudioData,
                   Int(inputBuffer.memory.mAudioDataByteSize))
            assert(AudioQueueEnqueueBuffer(outputQueue, outputBuffer, 0, nil) == 0)
        } else {
            println(__FUNCTION__ + ": out-of-output-buffers!")
        }

        assert(AudioQueueEnqueueBuffer(inputQueue, inputBuffer, 0, nil) == 0)
    }
}

private func audioQueueOutputCallback(outputBuffer: AudioQueueBufferRef) {
    if active {
        outputBuffer.memory.mUserData = UnsafeMutablePointer<Void>(headOfFreeOutputBuffers)
        headOfFreeOutputBuffers = outputBuffer
    }
}

func start() {
    var error: NSError?
    audioSession.setCategory(AVAudioSessionCategoryPlayAndRecord, withOptions: .allZeros, error: &error)
    dumpError(error, functionName: "AVAudioSessionCategoryPlayAndRecord")
    audioSession.setPreferredSampleRate(16000, error: &error)
    dumpError(error, functionName: "setPreferredSampleRate")
    audioSession.setPreferredIOBufferDuration(0.005, error: &error)
    dumpError(error, functionName: "setPreferredIOBufferDuration")
    audioSession.setActive(true, error: &error)
    dumpError(error, functionName: "setActive(true)")

    assert(active == false)
    active = true

    // cannot provide callbacks to AudioQueueNewInput/AudioQueueNewOutput from Swift
    // and so need to interface C functions
    assert(MyAudioQueueConfigureInputQueueAndCallback(audioStreamBasicDescription, &inputQueue, audioQueueInputCallback) == 0)
    assert(MyAudioQueueConfigureOutputQueueAndCallback(audioStreamBasicDescription, &outputQueue, audioQueueOutputCallback) == 0)

    for (var i = 0; i < numberOfBuffers; i++) {
        var audioQueueBufferRef: AudioQueueBufferRef = nil
        assert(AudioQueueAllocateBuffer(inputQueue, bufferSize, &audioQueueBufferRef) == 0)
        assert(AudioQueueEnqueueBuffer(inputQueue, audioQueueBufferRef, 0, nil) == 0)
        inputBuffers.append(audioQueueBufferRef)

        assert(AudioQueueAllocateBuffer(outputQueue, bufferSize, &audioQueueBufferRef) == 0)
        outputBuffers.append(audioQueueBufferRef)
        audioQueueBufferRef.memory.mUserData = UnsafeMutablePointer<Void>(headOfFreeOutputBuffers)
        headOfFreeOutputBuffers = audioQueueBufferRef
    }

    assert(AudioQueueStart(inputQueue, nil) == 0)
    assert(AudioQueueStart(outputQueue, nil) == 0)
}
```
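(In this scheme, the output buffers double as a linked free list: each free buffer's mUserData field points to the next free buffer. The output callback pushes a spent buffer back onto the head of that list, and the input callback pops one off, copies the fresh microphone samples into it, and enqueues it on the output queue.)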
My C code then routes the callbacks back into Swift:
```c
static void MyAudioQueueAudioInputCallback(void *inUserData, AudioQueueRef inAQ,
                                           AudioQueueBufferRef inBuffer,
                                           const AudioTimeStamp *inStartTime,
                                           UInt32 inNumberPacketDescriptions,
                                           const AudioStreamPacketDescription *inPacketDescs) {
    void (^block)(AudioQueueBufferRef) = (__bridge void (^)(AudioQueueBufferRef))inUserData;
    block(inBuffer);
}

static void MyAudioQueueAudioOutputCallback(void *inUserData, AudioQueueRef inAQ,
                                            AudioQueueBufferRef inBuffer) {
    void (^block)(AudioQueueBufferRef) = (__bridge void (^)(AudioQueueBufferRef))inUserData;
    block(inBuffer);
}

OSStatus MyAudioQueueConfigureInputQueueAndCallback(AudioStreamBasicDescription inFormat,
                                                    AudioQueueRef *inAQ,
                                                    void (^callback)(AudioQueueBufferRef)) {
    return AudioQueueNewInput(&inFormat, MyAudioQueueAudioInputCallback,
                              (__bridge_retained void *)([callback copy]), nil, nil, 0, inAQ);
}

OSStatus MyAudioQueueConfigureOutputQueueAndCallback(AudioStreamBasicDescription inFormat,
                                                     AudioQueueRef *inAQ,
                                                     void (^callback)(AudioQueueBufferRef)) {
    return AudioQueueNewOutput(&inFormat, MyAudioQueueAudioOutputCallback,
                               (__bridge_retained void *)([callback copy]), nil, nil, 0, inAQ);
}
```
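For completeness: to call these two C helpers from Swift, their prototypes have to be visible through the project's bridging header. A minimal sketch of such a header; the file name is my assumption, and the signatures simply mirror the definitions above:

```c
// MyAudioQueue.h (hypothetical file name), included from the Swift bridging header.
#import <AudioToolbox/AudioToolbox.h>

OSStatus MyAudioQueueConfigureInputQueueAndCallback(AudioStreamBasicDescription inFormat,
                                                    AudioQueueRef *inAQ,
                                                    void (^callback)(AudioQueueBufferRef));

OSStatus MyAudioQueueConfigureOutputQueueAndCallback(AudioStreamBasicDescription inFormat,
                                                     AudioQueueRef *inAQ,
                                                     void (^callback)(AudioQueueBufferRef));
```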
After some time, I found this great post, which uses AudioUnits instead of AudioQueues. I simply ported it to Swift and then added:
```swift
audioSession.setPreferredIOBufferDuration(0.005, error: &error)
```
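One caveat worth adding (my note, not part of the original answer): setPreferredIOBufferDuration is only a request, and the system may grant a different value. A small sketch for reading back what is actually in effect, using the same Swift 1.x era API as the question:

```swift
var error: NSError?
audioSession.setPreferredIOBufferDuration(0.005, error: &error) // request ~5 ms
audioSession.setActive(true, error: &error)
// IOBufferDuration reports the duration actually granted; the system typically
// rounds to a hardware-friendly frame count, so it may not be exactly 0.005.
println("IO buffer duration in effect: \(audioSession.IOBufferDuration) s")
```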
If you are recording audio from a microphone and playing it back within earshot of that same microphone, then, because the audio throughput is not instantaneous, some of your earlier output makes it back into the new input, and you hear an echo. This phenomenon is called feedback.
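(A rough sense of scale, using the numbers from the question above: each 256-byte buffer at 4 bytes per mono float frame holds 64 frames, which at 16 kHz is 4 ms of audio; with up to 80 such buffers in flight, the queued audio alone can approach 80 × 4 ms = 320 ms before any hardware latency is added, which is more than enough round-trip delay to hear as a distinct echo.)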
This is a structural problem, so changing the recording API will not help (although changing the recording/playback buffer sizes lets you control the amount of delay in the echo). You can either play the audio in such a way that the microphone cannot hear it (e.g. not at all, or through headphones), or go down the rabbit hole of echo cancellation.
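As an aside (not from the answer above): one relatively cheap way down that rabbit hole on iOS is to let the system handle it. Setting the session mode to AVAudioSessionModeVoiceChat routes audio through Apple's voice-processing I/O path, which applies acoustic echo cancellation to the microphone signal. A sketch in the question's Swift 1.x style, assuming the same audioSession object:

```swift
// Sketch, not a drop-in fix: voice-chat mode asks the OS to run its
// voice-processing unit (including acoustic echo cancellation) on the input.
var error: NSError?
audioSession.setCategory(AVAudioSessionCategoryPlayAndRecord, error: &error)
audioSession.setMode(AVAudioSessionModeVoiceChat, error: &error)
audioSession.setActive(true, error: &error)
```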