Getting microphone input with Audio Queue in Swift 3

I'm developing an app that records speech through the built-in microphone and streams it to a server live. So I need access to the byte stream from the microphone while recording.

After googling and searching Stack Overflow for a while, I think I've figured out how it should work, but it doesn't. I thought using an Audio Queue might be the way to go.

Here is what I've tried so far:

    func test() {
        func callback(_ a: UnsafeMutableRawPointer?,
                      _ b: AudioQueueRef,
                      _ c: AudioQueueBufferRef,
                      _ d: UnsafePointer<AudioTimeStamp>,
                      _ e: UInt32,
                      _ f: UnsafePointer<AudioStreamPacketDescription>?) {
            print("test")
        }

        var inputQueue: AudioQueueRef? = nil

        var aqData = AQRecorderState(
            mDataFormat: AudioStreamBasicDescription(
                mSampleRate: 16000,
                mFormatID: kAudioFormatLinearPCM,
                mFormatFlags: 0,
                mBytesPerPacket: 2,
                mFramesPerPacket: 1,    // Must be set to 1 for uncompressed formats
                mBytesPerFrame: 2,
                mChannelsPerFrame: 1,   // Mono recording
                mBitsPerChannel: 2 * 8, // 2 bytes
                mReserved: 0),          // Must be set to 0 according to https://developer.apple.com/reference/coreaudio/audiostreambasicdescription
            mQueue: inputQueue!,
            mBuffers: [AudioQueueBufferRef](),
            bufferByteSize: 32,
            mCurrentPacket: 0,
            mIsRunning: true)

        var error = AudioQueueNewInput(&aqData.mDataFormat, callback, nil, nil, nil, 0, &inputQueue)
        AudioQueueStart(inputQueue!, nil)
    }

It compiles and the app starts, but as soon as I call test() I get an exception:

fatal error: unexpectedly found nil while unwrapping an Optional value

The exception is caused by

 mQueue: inputQueue! 

I understand why this happens (inputQueue has no value yet), but I don't know how to initialize inputQueue properly. The problem is that Audio Queues are poorly documented for Swift users, and I haven't found any working examples on the internet.
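(Editor's note: the crash above is just Swift's force-unwrap trap firing on a nil optional; a guard-let makes the failure mode explicit instead of crashing. A minimal sketch in plain Swift, using an Int? as a stand-in for the not-yet-created AudioQueueRef:)

```swift
// Force-unwrapping a nil optional traps at runtime, which is exactly
// what `mQueue: inputQueue!` does before the queue has been created.
func describe(_ queue: Int?) -> String {
    // Safe alternative: unwrap explicitly and handle the nil case.
    guard let q = queue else { return "queue not initialized yet" }
    return "queue ready: \(q)"
}

var maybeQueue: Int? = nil          // stand-in for the uninitialized AudioQueueRef
print(describe(maybeQueue))         // queue not initialized yet

maybeQueue = 42                     // pretend AudioQueueNewInput filled it in
print(describe(maybeQueue))         // queue ready: 42
```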

Can anyone tell me what I'm doing wrong?

Use AudioQueueNewInput(...) (or its output counterpart) to initialize your audio queue before you use it:

    let sampleRate = 16000
    let numChannels = 2
    var inFormat = AudioStreamBasicDescription(
        mSampleRate: Double(sampleRate),
        mFormatID: kAudioFormatLinearPCM,
        mFormatFlags: kAudioFormatFlagsNativeFloatPacked,
        mBytesPerPacket: UInt32(numChannels * MemoryLayout<UInt32>.size),
        mFramesPerPacket: 1,
        mBytesPerFrame: UInt32(numChannels * MemoryLayout<UInt32>.size),
        mChannelsPerFrame: UInt32(numChannels),
        mBitsPerChannel: UInt32(8 * MemoryLayout<UInt32>.size),
        mReserved: UInt32(0))

    var inQueue: AudioQueueRef? = nil
    AudioQueueNewInput(&inFormat, callback, nil, nil, nil, 0, &inQueue)

    var aqData = AQRecorderState(
        mDataFormat: inFormat,
        mQueue: inQueue!, // inQueue is initialized now and can be unwrapped
        mBuffers: [AudioQueueBufferRef](),
        bufferByteSize: 32,
        mCurrentPacket: 0,
        mIsRunning: true)

See the Apple documentation for details.
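(Editor's note: the size fields in an AudioStreamBasicDescription are not independent. For interleaved linear PCM, mBytesPerFrame = mChannelsPerFrame * (mBitsPerChannel / 8), and since mFramesPerPacket must be 1 for uncompressed formats, mBytesPerPacket equals mBytesPerFrame. A quick sanity check in plain Swift; pcmSizes is a helper invented here, not a Core Audio API:)

```swift
// Sanity-check the interleaved linear PCM size relationships.
// Plain values mirroring AudioStreamBasicDescription fields,
// so this runs without importing AudioToolbox.
func pcmSizes(channels: UInt32, bitsPerChannel: UInt32) -> (bytesPerFrame: UInt32, bytesPerPacket: UInt32) {
    let bytesPerFrame = channels * (bitsPerChannel / 8)
    // Uncompressed audio uses mFramesPerPacket == 1,
    // so bytes-per-packet equals bytes-per-frame.
    return (bytesPerFrame, bytesPerFrame * 1)
}

// The asker's format: 16-bit mono -> 2 bytes per frame and per packet.
print(pcmSizes(channels: 1, bitsPerChannel: 16))   // (bytesPerFrame: 2, bytesPerPacket: 2)

// The answer's format: 2 channels of 32-bit samples -> 8 bytes each.
print(pcmSizes(channels: 2, bitsPerChannel: 32))   // (bytesPerFrame: 8, bytesPerPacket: 8)
```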

This code from our project works fine:

    AudioBuffer *buff;
    AudioQueueRef queue;
    AudioStreamBasicDescription fmt = { 0 };

    static void HandleInputBuffer(void *aqData,
                                  AudioQueueRef inAQ,
                                  AudioQueueBufferRef inBuffer,
                                  const AudioTimeStamp *inStartTime,
                                  UInt32 inNumPackets,
                                  const AudioStreamPacketDescription *inPacketDesc)
    {
    }

    - (void)initialize {
        thisClass = self;

        fmt.mFormatID = kAudioFormatLinearPCM;
        fmt.mSampleRate = 44100.0;
        fmt.mChannelsPerFrame = 1;
        fmt.mBitsPerChannel = 16;
        fmt.mFramesPerPacket = 1;
        fmt.mBytesPerFrame = sizeof(SInt16);
        fmt.mBytesPerPacket = sizeof(SInt16);
        fmt.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;

        OSStatus status = AudioQueueNewInput(&fmt,
                                             HandleInputBuffer,
                                             &aqData,
                                             NULL,
                                             kCFRunLoopCommonModes,
                                             0,
                                             &queue);

        AudioQueueBufferRef buffers[kNumberBuffers];
        UInt32 bufferByteSize = kSamplesSize;
        for (int i = 0; i < kNumberBuffers; ++i) {
            OSStatus allocateStatus = AudioQueueAllocateBuffer(queue,
                                                               bufferByteSize,
                                                               &buffers[i]);
            NSLog(@"allocateStatus = %d", allocateStatus);

            OSStatus enqueueStatus = AudioQueueEnqueueBuffer(queue,
                                                             buffers[i],
                                                             0,
                                                             NULL);
            NSLog(@"enqueueStatus = %d", enqueueStatus);
        }

        AudioQueueStart(queue, NULL);
    }
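(Editor's note: the allocate/enqueue loop above can be sketched in Swift 3 as well. The callback is where the raw microphone bytes become available, via `buffer.pointee.mAudioData` and `mAudioDataByteSize`, and the buffer must be re-enqueued so recording continues. This is a hedged sketch, not the poster's code: `kNumberBuffers`, `bufferByteSize`, and the commented-out `sendToServer` hook are assumptions; it also only compiles on Apple platforms where AudioToolbox is available.)

```swift
import Foundation
import AudioToolbox

let kNumberBuffers = 3              // assumption: three in-flight buffers, as in Apple's guide
let bufferByteSize: UInt32 = 2048   // assumption: sized for format and desired latency

// The input callback: this is where the microphone bytes arrive.
let inputCallback: AudioQueueInputCallback = { userData, queue, buffer, startTime, numPackets, packetDesc in
    let byteCount = Int(buffer.pointee.mAudioDataByteSize)
    let bytes = Data(bytes: buffer.pointee.mAudioData, count: byteCount)
    // sendToServer(bytes)          // assumption: your own streaming code goes here
    // Re-enqueue the buffer so the queue can fill it again.
    AudioQueueEnqueueBuffer(queue, buffer, 0, nil)
}

// 16 kHz, 16-bit, mono linear PCM -- the format the asker wants.
var format = AudioStreamBasicDescription(
    mSampleRate: 16000,
    mFormatID: kAudioFormatLinearPCM,
    mFormatFlags: kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked,
    mBytesPerPacket: 2,
    mFramesPerPacket: 1,
    mBytesPerFrame: 2,
    mChannelsPerFrame: 1,
    mBitsPerChannel: 16,
    mReserved: 0)

var queue: AudioQueueRef? = nil
AudioQueueNewInput(&format, inputCallback, nil, nil, nil, 0, &queue)

if let queue = queue {
    for _ in 0..<kNumberBuffers {
        var buffer: AudioQueueBufferRef? = nil
        AudioQueueAllocateBuffer(queue, bufferByteSize, &buffer)
        if let buffer = buffer {
            AudioQueueEnqueueBuffer(queue, buffer, 0, nil)
        }
    }
    AudioQueueStart(queue, nil)
}
```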