Streaming audio from iPhone

I need to stream audio from the microphone to an HTTP server.
These are the recording settings I need:

NSDictionary *audioOutputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithInt:kAudioFormatULaw], AVFormatIDKey,
    [NSNumber numberWithFloat:8000.0], AVSampleRateKey, // was 44100.0
    [NSData dataWithBytes:&acl length:sizeof(AudioChannelLayout)], AVChannelLayoutKey,
    [NSNumber numberWithInt:1], AVNumberOfChannelsKey,
    [NSNumber numberWithInt:64000], AVEncoderBitRateKey,
    nil];

The instructions for the API state:

Send a continuous audio stream to the currently viewed camera. The audio needs to be transmitted to the Axis camera at the bedside as G.711 mu-law, encoded at 64 kbit/s. Send (this should be a POST URL into the SSL-connected server): POST /transmitaudio?id= Content-type: audio/basic Content-Length: 99999 (length is ignored)
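Spelled out on the wire, the request presumably looks like the block below. The header set is taken from the Java client further down; the id value is whatever session id the server hands out, left blank here as in the description above:

POST /transmitaudio?id= HTTP/1.0
Content-Type: audio/basic
Content-Length: 99999
Connection: Keep-Alive
Cache-Control: no-cache

...followed by a continuous body of raw G.711 mu-law bytes
(8000 samples/s x 8 bits = 64 kbit/s, mono)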

Below is a list of the links I have tried to work from.

LINK – (SO) Basic explanation that only Audio Units and Audio Queues allow NSData output while recording from the microphone | Not an example, but a good definition of what is needed (Audio Queues or Audio Units).

LINK – (SO) Audio callback example | Includes only the callback.

LINK – (SO) Remote IO example | No start/stop, and saves to a file.

LINK – (SO) Remote IO example | Unanswered; didn't work.

LINK – (SO) Basic recording example | Good example, but records to a file.

LINK – (SO) Question that pointed me to the InMemoryAudioFile class (couldn't get it to work) | Followed the link to InMemoryAudioFile (or something named like that), but could not get it working.

LINK – (SO) More Audio Unit and Remote IO examples/questions | Got this one working, but again there is no stop function, and even when I tried to figure out what the call was and make it stop, it still did not seem to transmit audio to the server.

LINK – Decent Remote IO and Audio Queue example, but | Another good example, and I almost got it working, but had some problems with the code (the compiler decided it was not Obj-C++), and again had no idea how to get the audio "data" from it instead of a file.

LINK – Apple's documentation for Audio Queues | Had framework problems (see the question below), and ultimately could not get it working, but I probably did not give this one as much time as the others, and maybe I should have.

LINK – (SO) Problems I ran into while trying to implement Audio Queues/Units | Not an example.

LINK – (SO) Another Remote IO example | Another good example, but could not figure out how to get it to produce data instead of a file.

LINK – Also looked promising: a circular buffer | Could not figure out how to combine this with the audio callback (see the sketch below).
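On that last point, here is a minimal sketch (my own, not from the link) of how a circular buffer is usually tied to an audio callback: the callback only copies samples in and never blocks, while a separate network thread drains the buffer and writes to the socket. All names and the capacity are my assumptions.

#include <stdint.h>
#include <stdatomic.h>

#define RING_CAPACITY (64 * 1024) // bytes; an arbitrary choice

typedef struct {
    uint8_t bytes[RING_CAPACITY];
    _Atomic size_t head; // written only by the audio callback
    _Atomic size_t tail; // written only by the network thread
} RingBuffer;

// Called from the audio callback: copy as much as fits, never block.
static size_t ring_write(RingBuffer *rb, const void *src, size_t len) {
    size_t head = atomic_load(&rb->head);
    size_t tail = atomic_load(&rb->tail);
    size_t space = RING_CAPACITY - (head - tail);
    if (len > space) len = space; // drop the excess rather than block
    for (size_t i = 0; i < len; i++)
        rb->bytes[(head + i) % RING_CAPACITY] = ((const uint8_t *)src)[i];
    atomic_store(&rb->head, head + len);
    return len;
}

// Called from the network thread: drain whatever has accumulated.
static size_t ring_read(RingBuffer *rb, void *dst, size_t maxLen) {
    size_t head = atomic_load(&rb->head);
    size_t tail = atomic_load(&rb->tail);
    size_t avail = head - tail;
    if (avail > maxLen) avail = maxLen;
    for (size_t i = 0; i < avail; i++)
        ((uint8_t *)dst)[i] = rb->bytes[(tail + i) % RING_CAPACITY];
    atomic_store(&rb->tail, tail + avail);
    return avail;
}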

This is the class I am currently trying to stream with. It does seem to work, although there is static coming out of the speakers on the receiving end (connected to the server), which seems to indicate a problem with the audio data format.

iOS VERSION (minus the GCD socket delegate methods):

@implementation MicCommunicator {
    AVAssetWriter *assetWriter;
    AVAssetWriterInput *assetWriterInput;
}

@synthesize captureSession = _captureSession;
@synthesize output = _output;
@synthesize restClient = _restClient;
@synthesize uploadAudio = _uploadAudio;
@synthesize outputPath = _outputPath;
@synthesize sendStream = _sendStream;
@synthesize receiveStream = _receiveStream;
@synthesize socket = _socket;
@synthesize isSocketConnected = _isSocketConnected;

- (id)init {
    if ((self = [super init])) {
        _receiveStream = [[NSStream alloc] init];
        _sendStream = [[NSStream alloc] init];
        _socket = [[GCDAsyncSocket alloc] initWithDelegate:self delegateQueue:dispatch_get_main_queue()];
        _isSocketConnected = FALSE;
        _restClient = [RestClient sharedManager];
        _uploadAudio = false;

        NSArray *searchPaths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
        _outputPath = [NSURL fileURLWithPath:[[searchPaths objectAtIndex:0] stringByAppendingPathComponent:@"micOutput.output"]];

        NSError *assetError;
        AudioChannelLayout acl;
        bzero(&acl, sizeof(acl));
        acl.mChannelLayoutTag = kAudioChannelLayoutTag_Mono; // kAudioChannelLayoutTag_Stereo;
        NSDictionary *audioOutputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
            [NSNumber numberWithInt:kAudioFormatULaw], AVFormatIDKey,
            [NSNumber numberWithFloat:8000.0], AVSampleRateKey, // was 44100.0
            [NSData dataWithBytes:&acl length:sizeof(AudioChannelLayout)], AVChannelLayoutKey,
            [NSNumber numberWithInt:1], AVNumberOfChannelsKey,
            [NSNumber numberWithInt:64000], AVEncoderBitRateKey,
            nil];

        assetWriterInput = [[AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:audioOutputSettings] retain];
        [assetWriterInput setExpectsMediaDataInRealTime:YES];

        assetWriter = [[AVAssetWriter assetWriterWithURL:_outputPath fileType:AVFileTypeWAVE error:&assetError] retain]; // AVFileTypeAppleM4A
        if (assetError) {
            NSLog(@"error initing mic: %@", assetError);
            return nil;
        }
        if ([assetWriter canAddInput:assetWriterInput]) {
            [assetWriter addInput:assetWriterInput];
        } else {
            NSLog(@"can't add asset writer input...!");
            return nil;
        }
    }
    return self;
}

- (void)dealloc {
    [_output release];
    [_captureSession release]; // note: the original released _captureSession twice here, which would over-release it
    [assetWriter release];
    [assetWriterInput release];
    [super dealloc];
}

- (void)beginStreaming {
    NSLog(@"avassetwriter class is %@", NSStringFromClass([assetWriter class]));
    self.captureSession = [[AVCaptureSession alloc] init];
    AVCaptureDevice *audioCaptureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
    NSError *error = nil;
    AVCaptureDeviceInput *audioInput = [AVCaptureDeviceInput deviceInputWithDevice:audioCaptureDevice error:&error];
    if (audioInput)
        [self.captureSession addInput:audioInput];
    else {
        NSLog(@"No audio input found.");
        return;
    }

    self.output = [[AVCaptureAudioDataOutput alloc] init];
    dispatch_queue_t outputQueue = dispatch_queue_create("micOutputDispatchQueue", NULL);
    [self.output setSampleBufferDelegate:self queue:outputQueue];
    dispatch_release(outputQueue);

    self.uploadAudio = FALSE;

    [self.captureSession addOutput:self.output];
    [assetWriter startWriting];
    [self.captureSession startRunning];
}

- (void)pauseStreaming {
    self.uploadAudio = FALSE;
}

- (void)resumeStreaming {
    self.uploadAudio = TRUE;
}

- (void)finishAudioWork {
    [self dealloc]; // note: calling dealloc directly is unsafe under MRC; a release/teardown method would be preferable
}

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    AudioBufferList audioBufferList;
    NSMutableData *data = [[NSMutableData alloc] init];
    CMBlockBufferRef blockBuffer;
    CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer, NULL, &audioBufferList, sizeof(audioBufferList), NULL, NULL, 0, &blockBuffer);

    for (int y = 0; y < audioBufferList.mNumberBuffers; y++) {
        AudioBuffer audioBuffer = audioBufferList.mBuffers[y];
        Float32 *frame = (Float32 *)audioBuffer.mData;
        [data appendBytes:frame length:audioBuffer.mDataByteSize];
    }

    // append [data bytes] to your NSOutputStream
    // These two lines write to disk; you may not need this, just providing an example
    [assetWriter startSessionAtSourceTime:CMSampleBufferGetPresentationTimeStamp(sampleBuffer)];
    [assetWriterInput appendSampleBuffer:sampleBuffer];

    // start upload audio data
    if (self.uploadAudio) {
        if (!self.isSocketConnected) {
            [self connect];
        }
        NSString *requestStr = [NSString stringWithFormat:@"POST /transmitaudio?id=%@ HTTP/1.0\r\n\r\n", self.restClient.sessionId];
        NSData *requestData = [requestStr dataUsingEncoding:NSUTF8StringEncoding];
        [self.socket writeData:requestData withTimeout:5 tag:0];
        // note: 'data' here contains the raw samples from the capture output,
        // not the ulaw bytes produced by the asset writer
        [self.socket writeData:data withTimeout:5 tag:0];
    }
    // stop upload audio data

    CFRelease(blockBuffer);
    blockBuffer = NULL;
    [data release];
}
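One thing worth noting about the static: AVCaptureAudioDataOutput hands the delegate uncompressed PCM sample buffers; only the copy that goes through the AVAssetWriterInput gets µ-law encoded, so the bytes written to the socket above are raw PCM, which a server expecting G.711 would play back as noise. A quick way to confirm what the callback actually receives (a diagnostic sketch, not part of the original class):

// Inside captureOutput:didOutputSampleBuffer:fromConnection:,
// log the stream description of the incoming sample buffer.
CMAudioFormatDescriptionRef fmtDesc = CMSampleBufferGetFormatDescription(sampleBuffer);
const AudioStreamBasicDescription *asbd = CMAudioFormatDescriptionGetStreamBasicDescription(fmtDesc);
if (asbd) {
    // For mic capture this typically reports linear PCM, not kAudioFormatULaw.
    NSLog(@"format: %u, sample rate: %.0f Hz, channels: %u, bits: %u",
          (unsigned)asbd->mFormatID,
          asbd->mSampleRate,
          (unsigned)asbd->mChannelsPerFrame,
          (unsigned)asbd->mBitsPerChannel);
}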

And the Java version:

import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.BufferedReader;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.io.PrintWriter;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.util.Arrays;

import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;
import javax.net.ssl.TrustManager;
import javax.net.ssl.X509TrustManager;

import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioRecord;
import android.media.AudioTrack;
import android.media.MediaRecorder.AudioSource;
import android.util.Log;

public class AudioWorker extends Thread {
    private boolean stopped = false;
    private String host;
    private int port;
    private long id = 0;
    boolean run = true;
    AudioRecord recorder;

    // ulaw encoder stuff
    private final static String TAG = "UlawEncoderInputStream";
    private final static int MAX_ULAW = 8192;
    private final static int SCALE_BITS = 16;
    private InputStream mIn;
    private int mMax = 0;
    private final byte[] mBuf = new byte[1024];
    private int mBufCount = 0; // should be 0 or 1
    private final byte[] mOneByte = new byte[1];

    /**
     * Give the thread high priority so that it's not canceled unexpectedly, and start it.
     */
    public AudioWorker(String host, int port, long id) {
        this.host = host;
        this.port = port;
        this.id = id;
        android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_URGENT_AUDIO);
        // start();
    }

    @Override
    public void run() {
        Log.i("AudioWorker", "Running AudioWorker Thread");
        recorder = null;
        AudioTrack track = null;
        short[][] buffers = new short[256][160];
        int ix = 0;

        /*
         * Initialize buffer to hold continuously recorded AudioWorker data, start recording, and start
         * playback.
         */
        try {
            int N = AudioRecord.getMinBufferSize(8000, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
            recorder = new AudioRecord(AudioSource.MIC, 8000, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, N * 10);
            track = new AudioTrack(AudioManager.STREAM_MUSIC, 8000, AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT, N * 10, AudioTrack.MODE_STREAM);
            recorder.startRecording();
            // track.play();

            /*
             * Loops until something outside of this thread stops it.
             * Reads the data from the recorder and writes it to the AudioWorker track for playback.
             */
            SSLContext sc = SSLContext.getInstance("SSL");
            sc.init(null, trustAllCerts, new java.security.SecureRandom());
            SSLSocketFactory sslFact = sc.getSocketFactory();
            SSLSocket socket = (SSLSocket) sslFact.createSocket(host, port);
            socket.setSoTimeout(10000);
            InputStream inputStream = socket.getInputStream();
            DataInputStream in = new DataInputStream(new BufferedInputStream(inputStream));
            OutputStream outputStream = socket.getOutputStream();
            DataOutputStream os = new DataOutputStream(new BufferedOutputStream(outputStream));
            PrintWriter socketPrinter = new PrintWriter(os);
            BufferedReader br = new BufferedReader(new InputStreamReader(in));

            // socketPrinter.println("POST /transmitaudio?patient=1333369798370 HTTP/1.0");
            socketPrinter.println("POST /transmitaudio?id=" + id + " HTTP/1.0");
            socketPrinter.println("Content-Type: audio/basic");
            socketPrinter.println("Content-Length: 99999");
            socketPrinter.println("Connection: Keep-Alive");
            socketPrinter.println("Cache-Control: no-cache");
            socketPrinter.println();
            socketPrinter.flush();

            while (!stopped) {
                Log.i("Map", "Writing new data to buffer");
                short[] buffer = buffers[ix++ % buffers.length];
                N = recorder.read(buffer, 0, buffer.length);
                track.write(buffer, 0, buffer.length);
                byte[] bytes2 = new byte[buffer.length * 2];
                ByteBuffer.wrap(bytes2).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().put(buffer);
                // note: read(...) below pulls fresh PCM from the recorder and overwrites
                // bytes2 with ulaw-encoded bytes; the PCM just copied in is discarded,
                // and read's return value (the number of ulaw bytes) is ignored here
                read(bytes2, 0, bytes2.length);
                os.write(bytes2, 0, bytes2.length);

                // ByteBuffer byteBuf = ByteBuffer.allocate(2 * N);
                // System.out.println("byteBuf length " + 2 * N);
                // int i = 0;
                // while (buffer.length > i) {
                //     byteBuf.putShort(buffer[i]);
                //     i++;
                // }
                // byte[] b = new byte[byteBuf.remaining()];
            }
            os.close();
        } catch (Throwable x) {
            Log.w("AudioWorker", "Error reading voice AudioWorker", x);
        }
        /*
         * Frees the thread's resources after the loop completes so that it can be run again.
         */
        finally {
            recorder.stop();
            recorder.release();
            track.stop();
            track.release();
        }
    }

    /**
     * Called from outside of the thread in order to stop the recording/playback loop.
     */
    public void close() {
        stopped = true;
    }

    public void resumeThread() {
        stopped = false;
        run();
    }

    TrustManager[] trustAllCerts = new TrustManager[] {
        new X509TrustManager() {
            public java.security.cert.X509Certificate[] getAcceptedIssuers() {
                return null;
            }
            public void checkClientTrusted(java.security.cert.X509Certificate[] certs, String authType) {
            }
            public void checkServerTrusted(java.security.cert.X509Certificate[] chain, String authType) {
                for (int j = 0; j < chain.length; j++) {
                    System.out.println("Client certificate information:");
                    System.out.println("  Subject DN: " + chain[j].getSubjectDN());
                    System.out.println("  Issuer DN: " + chain[j].getIssuerDN());
                    System.out.println("  Serial number: " + chain[j].getSerialNumber());
                    System.out.println("");
                }
            }
        }
    };

    public static void encode(byte[] pcmBuf, int pcmOffset, byte[] ulawBuf, int ulawOffset, int length, int max) {
        // from 'ulaw' in wikipedia
        // +8191 to +8159                          0x80
        // +8158 to +4063 in 16 intervals of 256   0x80 + interval number
        // +4062 to +2015 in 16 intervals of 128   0x90 + interval number
        // +2014 to +991  in 16 intervals of 64    0xA0 + interval number
        // +990  to +479  in 16 intervals of 32    0xB0 + interval number
        // +478  to +223  in 16 intervals of 16    0xC0 + interval number
        // +222  to +95   in 16 intervals of 8     0xD0 + interval number
        // +94   to +31   in 16 intervals of 4     0xE0 + interval number
        // +30   to +1    in 15 intervals of 2     0xF0 + interval number
        // 0                                       0xFF
        // -1                                      0x7F
        // -31   to -2    in 15 intervals of 2     0x70 + interval number
        // -95   to -32   in 16 intervals of 4     0x60 + interval number
        // -223  to -96   in 16 intervals of 8     0x50 + interval number
        // -479  to -224  in 16 intervals of 16    0x40 + interval number
        // -991  to -480  in 16 intervals of 32    0x30 + interval number
        // -2015 to -992  in 16 intervals of 64    0x20 + interval number
        // -4063 to -2016 in 16 intervals of 128   0x10 + interval number
        // -8159 to -4064 in 16 intervals of 256   0x00 + interval number
        // -8192 to -8160                          0x00

        // set scale factors
        if (max <= 0) max = MAX_ULAW;
        int coef = MAX_ULAW * (1 << SCALE_BITS) / max;

        for (int i = 0; i < length; i++) {
            int pcm = (0xff & pcmBuf[pcmOffset++]) + (pcmBuf[pcmOffset++] << 8);
            pcm = (pcm * coef) >> SCALE_BITS;

            int ulaw;
            if (pcm >= 0) {
                ulaw = pcm <= 0 ? 0xff :
                        pcm <= 30 ? 0xf0 + ((30 - pcm) >> 1) :
                        pcm <= 94 ? 0xe0 + ((94 - pcm) >> 2) :
                        pcm <= 222 ? 0xd0 + ((222 - pcm) >> 3) :
                        pcm <= 478 ? 0xc0 + ((478 - pcm) >> 4) :
                        pcm <= 990 ? 0xb0 + ((990 - pcm) >> 5) :
                        pcm <= 2014 ? 0xa0 + ((2014 - pcm) >> 6) :
                        pcm <= 4062 ? 0x90 + ((4062 - pcm) >> 7) :
                        pcm <= 8158 ? 0x80 + ((8158 - pcm) >> 8) :
                        0x80;
            } else {
                ulaw = -1 <= pcm ? 0x7f :
                        -31 <= pcm ? 0x70 + ((pcm - -31) >> 1) :
                        -95 <= pcm ? 0x60 + ((pcm - -95) >> 2) :
                        -223 <= pcm ? 0x50 + ((pcm - -223) >> 3) :
                        -479 <= pcm ? 0x40 + ((pcm - -479) >> 4) :
                        -991 <= pcm ? 0x30 + ((pcm - -991) >> 5) :
                        -2015 <= pcm ? 0x20 + ((pcm - -2015) >> 6) :
                        -4063 <= pcm ? 0x10 + ((pcm - -4063) >> 7) :
                        -8159 <= pcm ? 0x00 + ((pcm - -8159) >> 8) :
                        0x00;
            }
            ulawBuf[ulawOffset++] = (byte) ulaw;
        }
    }

    public static int maxAbsPcm(byte[] pcmBuf, int offset, int length) {
        int max = 0;
        for (int i = 0; i < length; i++) {
            int pcm = (0xff & pcmBuf[offset++]) + (pcmBuf[offset++] << 8);
            if (pcm < 0) pcm = -pcm;
            if (pcm > max) max = pcm;
        }
        return max;
    }

    public int read(byte[] buf, int offset, int length) throws IOException {
        if (recorder == null) throw new IllegalStateException("not open");
        // return at least one byte, but try to fill 'length'
        while (mBufCount < 2) {
            int n = recorder.read(mBuf, mBufCount, Math.min(length * 2, mBuf.length - mBufCount));
            if (n == -1) return -1;
            mBufCount += n;
        }
        // compand data
        int n = Math.min(mBufCount / 2, length);
        encode(mBuf, 0, buf, offset, n, mMax);
        // move data to bottom of mBuf
        mBufCount -= n * 2;
        for (int i = 0; i < mBufCount; i++) mBuf[i] = mBuf[i + n * 2];
        return n;
    }
}

My work on this topic has been staggering and long. I have finally gotten it working, however hacky it may be. Because of that, I will list a few warnings before posting the answer:

  1. There is still a clicking noise between buffers.

  2. I get warnings because of the way I use Obj-C classes in the Obj-C++ class, so something is wrong there (however, from my research, using a pool does the same thing as a release, so I don't believe it matters much; see the sketch after this list):

    Object 0x13cd20 of class __NSCFString autoreleased with no pool in place - just leaking - break on objc_autoreleaseNoPool() to debug

  3. To get this working, I had to comment out all AQPlayer references from SpeakHereController (see below), since I could not fix the errors any other way. This did not matter for me, though, since I am only recording.
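For anyone hitting warning number 2: the usual fix for "autoreleased with no pool in place" is to give the Audio Queue's callback thread its own (pre-ARC) autorelease pool around any Obj-C work. A sketch, assuming the work happens in AQRecorder's input buffer handler as in the code further down:

// Hypothetical sketch: wrap the Obj-C calls made from the C++ Audio Queue
// callback in a manually managed autorelease pool, since that thread has none.
void AQRecorder::MyInputBufferHandler(void *inUserData, AudioQueueRef inAQ,
                                      AudioQueueBufferRef inBuffer,
                                      const AudioTimeStamp *inStartTime,
                                      UInt32 inNumPackets,
                                      const AudioStreamPacketDescription *inPacketDesc)
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    // ... existing body: wrap the buffer in NSData and hand it to the uploader ...
    [pool drain]; // releases any objects autoreleased on this thread
}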

So the main answer to all of the above is that there is a bug in AVAssetWriter that stopped it from appending the bytes and writing the audio data. I finally found this out after contacting Apple support and having them notify me about it. As far as I know, the bug is specific to ulaw and AVAssetWriter, though I haven't tried many other formats to verify.
In response to this, the only other option is/was to use Audio Queues, something I had tried before but which brought a bunch of problems. The biggest problem was my lack of knowledge of Obj-C++. The class below that got things working is from the SpeakHere example, slightly changed so that the audio is ulaw-formatted. Other problems came from trying to get all of the files to play nicely, but this was easily remedied by changing all the file names in the chain to .mm. The last problem was trying to use the classes in harmony. This is still a WIP and ties into warning number 2, but my basic solution was to use the SpeakHereController (also included in the SpeakHere example) instead of accessing AQRecorder directly.

Anyway, here is the code:

Using SpeakHereController from an Obj-C class

.h

@property (nonatomic, strong) SpeakHereController *recorder;

.mm

[init method]

// AQRecorder wrapper (SpeakHereController) allocation
_recorder = [[SpeakHereController alloc] init];
// AQRecorder wrapper (SpeakHereController) initialization
// technically this class is a controller, and that's why its init method is awakeFromNib
[_recorder awakeFromNib];

[recording]

bool buttonState = self.audioRecord.isSelected;
[self.audioRecord setSelected:!buttonState];
if ([self.audioRecord isSelected]) {
    [self.recorder startRecord];
} else {
    [self.recorder stopRecord];
}

SpeakHereController

 #import "SpeakHereController.h" @implementation SpeakHereController @synthesize player; @synthesize recorder; @synthesize btn_record; @synthesize btn_play; @synthesize fileDescription; @synthesize lvlMeter_in; @synthesize playbackWasInterrupted; char *OSTypeToStr(char *buf, OSType t) { char *p = buf; char str[4], *q = str; *(UInt32 *)str = CFSwapInt32(t); for (int i = 0; i < 4; ++i) { if (isprint(*q) && *q != '\\') *p++ = *q++; else { sprintf(p, "\\x%02x", *q++); p += 4; } } *p = '\0'; return buf; } -(void)setFileDescriptionForFormat: (CAStreamBasicDescription)format withName:(NSString*)name { char buf[5]; const char *dataFormat = OSTypeToStr(buf, format.mFormatID); NSString* description = [[NSString alloc] initWithFormat:@"(%d ch. %s @ %g Hz)", format.NumberChannels(), dataFormat, format.mSampleRate, nil]; fileDescription.text = description; [description release]; } #pragma mark Playback routines -(void)stopPlayQueue { // player->StopQueue(); [lvlMeter_in setAq: nil]; btn_record.enabled = YES; } -(void)pausePlayQueue { // player->PauseQueue(); playbackWasPaused = YES; } -(void)startRecord { // recorder = new AQRecorder(); if (recorder->IsRunning()) // If we are currently recording, stop and save the file. { [self stopRecord]; } else // If we're not recording, start. { // btn_play.enabled = NO; // Set the button's state to "stop" // btn_record.title = @"Stop"; // Start the recorder recorder->StartRecord(CFSTR("recordedFile.caf")); [self setFileDescriptionForFormat:recorder->DataFormat() withName:@"Recorded File"]; // Hook the level meter up to the Audio Queue for the recorder // [lvlMeter_in setAq: recorder->Queue()]; } } - (void)stopRecord { // Disconnect our level meter from the audio queue // [lvlMeter_in setAq: nil]; recorder->StopRecord(); // dispose the previous playback queue // player->DisposeQueue(true); // now create a new queue for the recorded file recordFilePath = (CFStringRef)[NSTemporaryDirectory() stringByAppendingPathComponent: @"recordedFile.caf"]; // player->CreateQueueForFile(recordFilePath); // Set the button's state back to "record" // btn_record.title = @"Record"; // btn_play.enabled = YES; } - (IBAction)play:(id)sender { if (player->IsRunning()) { if (playbackWasPaused) { // OSStatus result = player->StartQueue(true); // if (result == noErr) // [[NSNotificationCenter defaultCenter] postNotificationName:@"playbackQueueResumed" object:self]; } else // [self stopPlayQueue]; nil; } else { // OSStatus result = player->StartQueue(false); // if (result == noErr) // [[NSNotificationCenter defaultCenter] postNotificationName:@"playbackQueueResumed" object:self]; } } - (IBAction)record:(id)sender { if (recorder->IsRunning()) // If we are currently recording, stop and save the file. { [self stopRecord]; } else // If we're not recording, start. 
{ // btn_play.enabled = NO; // // // Set the button's state to "stop" // btn_record.title = @"Stop"; // Start the recorder recorder->StartRecord(CFSTR("recordedFile.caf")); [self setFileDescriptionForFormat:recorder->DataFormat() withName:@"Recorded File"]; // Hook the level meter up to the Audio Queue for the recorder [lvlMeter_in setAq: recorder->Queue()]; } } #pragma mark AudioSession listeners void interruptionListener( void * inClientData, UInt32 inInterruptionState) { SpeakHereController *THIS = (SpeakHereController*)inClientData; if (inInterruptionState == kAudioSessionBeginInterruption) { if (THIS->recorder->IsRunning()) { [THIS stopRecord]; } else if (THIS->player->IsRunning()) { //the queue will stop itself on an interruption, we just need to update the UI [[NSNotificationCenter defaultCenter] postNotificationName:@"playbackQueueStopped" object:THIS]; THIS->playbackWasInterrupted = YES; } } else if ((inInterruptionState == kAudioSessionEndInterruption) && THIS->playbackWasInterrupted) { // we were playing back when we were interrupted, so reset and resume now // THIS->player->StartQueue(true); [[NSNotificationCenter defaultCenter] postNotificationName:@"playbackQueueResumed" object:THIS]; THIS->playbackWasInterrupted = NO; } } void propListener( void * inClientData, AudioSessionPropertyID inID, UInt32 inDataSize, const void * inData) { SpeakHereController *THIS = (SpeakHereController*)inClientData; if (inID == kAudioSessionProperty_AudioRouteChange) { CFDictionaryRef routeDictionary = (CFDictionaryRef)inData; //CFShow(routeDictionary); CFNumberRef reason = (CFNumberRef)CFDictionaryGetValue(routeDictionary, CFSTR(kAudioSession_AudioRouteChangeKey_Reason)); SInt32 reasonVal; CFNumberGetValue(reason, kCFNumberSInt32Type, &reasonVal); if (reasonVal != kAudioSessionRouteChangeReason_CategoryChange) { /*CFStringRef oldRoute = (CFStringRef)CFDictionaryGetValue(routeDictionary, CFSTR(kAudioSession_AudioRouteChangeKey_OldRoute)); if (oldRoute) { printf("old route:\n"); CFShow(oldRoute); } else printf("ERROR GETTING OLD AUDIO ROUTE!\n"); CFStringRef newRoute; UInt32 size; size = sizeof(CFStringRef); OSStatus error = AudioSessionGetProperty(kAudioSessionProperty_AudioRoute, &size, &newRoute); if (error) printf("ERROR GETTING NEW AUDIO ROUTE! %d\n", error); else { printf("new route:\n"); CFShow(newRoute); }*/ if (reasonVal == kAudioSessionRouteChangeReason_OldDeviceUnavailable) { if (THIS->player->IsRunning()) { [THIS pausePlayQueue]; [[NSNotificationCenter defaultCenter] postNotificationName:@"playbackQueueStopped" object:THIS]; } } // stop the queue if we had a non-policy route change if (THIS->recorder->IsRunning()) { [THIS stopRecord]; } } } else if (inID == kAudioSessionProperty_AudioInputAvailable) { if (inDataSize == sizeof(UInt32)) { UInt32 isAvailable = *(UInt32*)inData; // disable recording if input is not available THIS->btn_record.enabled = (isAvailable > 0) ? YES : NO; } } } #pragma mark Initialization routines - (void)awakeFromNib { // Allocate our singleton instance for the recorder & player object recorder = new AQRecorder(); player = nil;//new AQPlayer(); OSStatus error = AudioSessionInitialize(NULL, NULL, interruptionListener, self); if (error) printf("ERROR INITIALIZING AUDIO SESSION! 
%d\n", error); else { UInt32 category = kAudioSessionCategory_PlayAndRecord; error = AudioSessionSetProperty(kAudioSessionProperty_AudioCategory, sizeof(category), &category); if (error) printf("couldn't set audio category!"); error = AudioSessionAddPropertyListener(kAudioSessionProperty_AudioRouteChange, propListener, self); if (error) printf("ERROR ADDING AUDIO SESSION PROP LISTENER! %d\n", error); UInt32 inputAvailable = 0; UInt32 size = sizeof(inputAvailable); // we do not want to allow recording if input is not available error = AudioSessionGetProperty(kAudioSessionProperty_AudioInputAvailable, &size, &inputAvailable); if (error) printf("ERROR GETTING INPUT AVAILABILITY! %d\n", error); // btn_record.enabled = (inputAvailable) ? YES : NO; // we also need to listen to see if input availability changes error = AudioSessionAddPropertyListener(kAudioSessionProperty_AudioInputAvailable, propListener, self); if (error) printf("ERROR ADDING AUDIO SESSION PROP LISTENER! %d\n", error); error = AudioSessionSetActive(true); if (error) printf("AudioSessionSetActive (true) failed"); } // [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(playbackQueueStopped:) name:@"playbackQueueStopped" object:nil]; // [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(playbackQueueResumed:) name:@"playbackQueueResumed" object:nil]; // UIColor *bgColor = [[UIColor alloc] initWithRed:.39 green:.44 blue:.57 alpha:.5]; // [lvlMeter_in setBackgroundColor:bgColor]; // [lvlMeter_in setBorderColor:bgColor]; // [bgColor release]; // disable the play button since we have no recording to play yet // btn_play.enabled = NO; // playbackWasInterrupted = NO; // playbackWasPaused = NO; } # pragma mark Notification routines - (void)playbackQueueStopped:(NSNotification *)note { btn_play.title = @"Play"; [lvlMeter_in setAq: nil]; btn_record.enabled = YES; } - (void)playbackQueueResumed:(NSNotification *)note { btn_play.title = @"Stop"; btn_record.enabled = NO; [lvlMeter_in setAq: player->Queue()]; } #pragma mark Cleanup - (void)dealloc { [btn_record release]; [btn_play release]; [fileDescription release]; [lvlMeter_in release]; // delete player; delete recorder; [super dealloc]; } @end 

AQRecorder (the .h has two lines of importance):

#define kNumberRecordBuffers 3
#define kBufferDurationSeconds 5.0
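As an aside: kBufferDurationSeconds decides how much audio each queue buffer holds, and therefore how often the input callback (and with it the upload in MyInputBufferHandler) fires. If the clicking between buffers is a problem, a smaller duration may be worth experimenting with; the 0.5 below is just an illustrative guess, not a tested value.

#define kNumberRecordBuffers 3
// hypothetical tweak: smaller buffers -> more frequent callbacks/uploads
#define kBufferDurationSeconds 0.5

And the .mm: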

 #include "AQRecorder.h" //#include "UploadAudioWrapperInterface.h" //#include "RestClient.h" RestClient * restClient; NSData* data; // ____________________________________________________________________________________ // Determine the size, in bytes, of a buffer necessary to represent the supplied number // of seconds of audio data. int AQRecorder::ComputeRecordBufferSize(const AudioStreamBasicDescription *format, float seconds) { int packets, frames, bytes = 0; try { frames = (int)ceil(seconds * format->mSampleRate); if (format->mBytesPerFrame > 0) bytes = frames * format->mBytesPerFrame; else { UInt32 maxPacketSize; if (format->mBytesPerPacket > 0) maxPacketSize = format->mBytesPerPacket; // constant packet size else { UInt32 propertySize = sizeof(maxPacketSize); XThrowIfError(AudioQueueGetProperty(mQueue, kAudioQueueProperty_MaximumOutputPacketSize, &maxPacketSize, &propertySize), "couldn't get queue's maximum output packet size"); } if (format->mFramesPerPacket > 0) packets = frames / format->mFramesPerPacket; else packets = frames; // worst-case scenario: 1 frame in a packet if (packets == 0) // sanity check packets = 1; bytes = packets * maxPacketSize; } } catch (CAXException e) { char buf[256]; fprintf(stderr, "Error: %s (%s)\n", e.mOperation, e.FormatError(buf)); return 0; } return bytes; } // ____________________________________________________________________________________ // AudioQueue callback function, called when an input buffers has been filled. void AQRecorder::MyInputBufferHandler( void * inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer, const AudioTimeStamp * inStartTime, UInt32 inNumPackets, const AudioStreamPacketDescription* inPacketDesc) { AQRecorder *aqr = (AQRecorder *)inUserData; try { if (inNumPackets > 0) { // write packets to file // XThrowIfError(AudioFileWritePackets(aqr->mRecordFile, FALSE, inBuffer->mAudioDataByteSize, // inPacketDesc, aqr->mRecordPacket, &inNumPackets, inBuffer->mAudioData), // "AudioFileWritePackets failed"); aqr->mRecordPacket += inNumPackets; // int numBytes = inBuffer->mAudioDataByteSize; // SInt8 *testBuffer = (SInt8*)inBuffer->mAudioData; // // for (int i=0; i < numBytes; i++) // { // SInt8 currentData = testBuffer[i]; // printf("Current data in testbuffer is %d", currentData); // // NSData * temp = [NSData dataWithBytes:currentData length:sizeof(currentData)]; // } data=[[NSData dataWithBytes:inBuffer->mAudioData length:inBuffer->mAudioDataByteSize]retain]; [restClient uploadAudioData:data url:nil]; } // if we're not stopping, re-enqueue the buffer so that it gets filled again if (aqr->IsRunning()) XThrowIfError(AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL), "AudioQueueEnqueueBuffer failed"); } catch (CAXException e) { char buf[256]; fprintf(stderr, "Error: %s (%s)\n", e.mOperation, e.FormatError(buf)); } } AQRecorder::AQRecorder() { mIsRunning = false; mRecordPacket = 0; data = [[NSData alloc]init]; restClient = [[RestClient sharedManager]retain]; } AQRecorder::~AQRecorder() { AudioQueueDispose(mQueue, TRUE); AudioFileClose(mRecordFile); if (mFileName){ CFRelease(mFileName); } [restClient release]; [data release]; } // ____________________________________________________________________________________ // Copy a queue's encoder's magic cookie to an audio file. 
void AQRecorder::CopyEncoderCookieToFile() { UInt32 propertySize; // get the magic cookie, if any, from the converter OSStatus err = AudioQueueGetPropertySize(mQueue, kAudioQueueProperty_MagicCookie, &propertySize); // we can get a noErr result and also a propertySize == 0 // -- if the file format does support magic cookies, but this file doesn't have one. if (err == noErr && propertySize > 0) { Byte *magicCookie = new Byte[propertySize]; UInt32 magicCookieSize; XThrowIfError(AudioQueueGetProperty(mQueue, kAudioQueueProperty_MagicCookie, magicCookie, &propertySize), "get audio converter's magic cookie"); magicCookieSize = propertySize; // the converter lies and tell us the wrong size // now set the magic cookie on the output file UInt32 willEatTheCookie = false; // the converter wants to give us one; will the file take it? err = AudioFileGetPropertyInfo(mRecordFile, kAudioFilePropertyMagicCookieData, NULL, &willEatTheCookie); if (err == noErr && willEatTheCookie) { err = AudioFileSetProperty(mRecordFile, kAudioFilePropertyMagicCookieData, magicCookieSize, magicCookie); XThrowIfError(err, "set audio file's magic cookie"); } delete[] magicCookie; } } void AQRecorder::SetupAudioFormat(UInt32 inFormatID) { memset(&mRecordFormat, 0, sizeof(mRecordFormat)); UInt32 size = sizeof(mRecordFormat.mSampleRate); XThrowIfError(AudioSessionGetProperty( kAudioSessionProperty_CurrentHardwareSampleRate, &size, &mRecordFormat.mSampleRate), "couldn't get hardware sample rate"); //override samplearate to 8k from device sample rate mRecordFormat.mSampleRate = 8000.0; size = sizeof(mRecordFormat.mChannelsPerFrame); XThrowIfError(AudioSessionGetProperty( kAudioSessionProperty_CurrentHardwareInputNumberChannels, &size, &mRecordFormat.mChannelsPerFrame), "couldn't get input channel count"); // mRecordFormat.mChannelsPerFrame = 1; mRecordFormat.mFormatID = inFormatID; if (inFormatID == kAudioFormatLinearPCM) { // if we want pcm, default to signed 16-bit little-endian mRecordFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked; mRecordFormat.mBitsPerChannel = 16; mRecordFormat.mBytesPerPacket = mRecordFormat.mBytesPerFrame = (mRecordFormat.mBitsPerChannel / 8) * mRecordFormat.mChannelsPerFrame; mRecordFormat.mFramesPerPacket = 1; } if (inFormatID == kAudioFormatULaw) { // NSLog(@"is ulaw"); mRecordFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger; mRecordFormat.mSampleRate = 8000.0; // mRecordFormat.mFormatFlags = 0; mRecordFormat.mFramesPerPacket = 1; mRecordFormat.mChannelsPerFrame = 1; mRecordFormat.mBitsPerChannel = 16;//was 8 mRecordFormat.mBytesPerPacket = 1; mRecordFormat.mBytesPerFrame = 1; } } NSString * GetDocumentDirectory(void) { NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES); NSString *basePath = ([paths count] > 0) ? [paths objectAtIndex:0] : nil; return basePath; } void AQRecorder::StartRecord(CFStringRef inRecordFile) { int i, bufferByteSize; UInt32 size; CFURLRef url; try { mFileName = CFStringCreateCopy(kCFAllocatorDefault, inRecordFile); // specify the recording format SetupAudioFormat(kAudioFormatULaw /*kAudioFormatLinearPCM*/); // create the queue XThrowIfError(AudioQueueNewInput( &mRecordFormat, MyInputBufferHandler, this /* userData */, NULL /* run loop */, NULL /* run loop mode */, 0 /* flags */, &mQueue), "AudioQueueNewInput failed"); // get the record format back from the queue's audio converter -- // the file may require a more specific stream description than was necessary to create the encoder. 
mRecordPacket = 0; size = sizeof(mRecordFormat); XThrowIfError(AudioQueueGetProperty(mQueue, kAudioQueueProperty_StreamDescription, &mRecordFormat, &size), "couldn't get queue's format"); NSString *basePath = GetDocumentDirectory(); NSString *recordFile = [basePath /*NSTemporaryDirectory()*/ stringByAppendingPathComponent: (NSString*)inRecordFile]; url = CFURLCreateWithString(kCFAllocatorDefault, (CFStringRef)recordFile, NULL); // create the audio file XThrowIfError(AudioFileCreateWithURL(url, kAudioFileCAFType, &mRecordFormat, kAudioFileFlags_EraseFile, &mRecordFile), "AudioFileCreateWithURL failed"); CFRelease(url); // copy the cookie first to give the file object as much info as we can about the data going in // not necessary for pcm, but required for some compressed audio CopyEncoderCookieToFile(); // allocate and enqueue buffers bufferByteSize = ComputeRecordBufferSize(&mRecordFormat, kBufferDurationSeconds); // enough bytes for half a second for (i = 0; i < kNumberRecordBuffers; ++i) { XThrowIfError(AudioQueueAllocateBuffer(mQueue, bufferByteSize, &mBuffers[i]), "AudioQueueAllocateBuffer failed"); XThrowIfError(AudioQueueEnqueueBuffer(mQueue, mBuffers[i], 0, NULL), "AudioQueueEnqueueBuffer failed"); } // start the queue mIsRunning = true; XThrowIfError(AudioQueueStart(mQueue, NULL), "AudioQueueStart failed"); } catch (CAXException &e) { char buf[256]; fprintf(stderr, "Error: %s (%s)\n", e.mOperation, e.FormatError(buf)); } catch (...) { fprintf(stderr, "An unknown error occurred\n"); } } void AQRecorder::StopRecord() { // end recording mIsRunning = false; // XThrowIfError(AudioQueueReset(mQueue), "AudioQueueStop failed"); XThrowIfError(AudioQueueStop(mQueue, true), "AudioQueueStop failed"); // a codec may update its cookie at the end of an encoding session, so reapply it to the file now CopyEncoderCookieToFile(); if (mFileName) { CFRelease(mFileName); mFileName = NULL; } AudioQueueDispose(mQueue, true); AudioFileClose(mRecordFile); } 

Please feel free to comment on or refine my answer; I will accept it as the answer if it's a better solution. Please note this was my first attempt, and I'm sure it is not the most elegant or proper solution.

You could use the GameKit framework and send the audio over Bluetooth. There are examples in the iOS Developer Library.
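For what it's worth, a minimal sketch of that approach with the (now long-deprecated) GKSession API might look like this; the session ID, the data mode, and the audioChunk variable are my own placeholders:

#import <GameKit/GameKit.h>

// Hypothetical sketch: broadcast mic data chunks to connected peers over
// Bluetooth/Wi-Fi using GKSession.
GKSession *session = [[GKSession alloc] initWithSessionID:@"audio-stream"
                                              displayName:nil
                                              sessionMode:GKSessionModePeer];
session.available = YES;

// elsewhere, for each captured NSData chunk:
NSError *error = nil;
[session sendDataToAllPeers:audioChunk
               withDataMode:GKSendDataReliable
                      error:&error];
if (error) NSLog(@"send failed: %@", error);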