Recording audio with Audio Units, splitting into a new file every X seconds

I have been at this for a few days now. I am not very familiar with the Audio Unit layer of the framework. Can someone point me to a complete example of how to let a user record, while writing the audio out to files at X-second intervals? For example, if the user records for 10 seconds, I want to write one file; at the 11th second the audio goes into the next file, the same again at the 21st second, and so on. So when I record 25 seconds of audio, it should produce 3 different files.

I already tried this with AVCapture, but it produced clicks and pops between the files. I have read that this is due to the milliseconds lost between the read and write operations. I also tried Audio Queue Services, but given the app I am working on, I need full control over the audio layer, so I decided to go with Audio Units.

I think I am getting closer… but I am still quite lost. I ended up using The Amazing Audio Engine (TAAE). I am now looking at AEAudioReceiver, and my callback code looks like the code below. I think the logic is correct, but I don't think it is implemented correctly.

The task at hand: record ~5-second segments in AAC format.

Attempt: use the AEAudioReceiver callback and store the AudioBufferList in a circular buffer. Track the number of seconds of audio received in the recorder class; once it passes the 5-second mark (it can run a little over, but not to 6 seconds), call an Obj-C method that writes the buffered audio to a file using AEAudioFileWriter.

Result: it did not work. The recording sounds slowed down, with a lot of constant noise, although I can hear some of the recorded audio in there. So I know some of the data is making it through, but it is as if I am losing large amounts of it. I don't even know how to debug this (I will keep trying, but right now I am quite lost).

The other issue is the conversion to AAC: should I first write the file in PCM format and then convert it to AAC, or can I convert each audio segment to AAC directly?

Thanks for your help!

—– Circular buffer initialization —–

    // Trying to hold 5 seconds of audio. How do I know what length to use
    // if I don't know the frame size yet? And is that even the right question to ask?
    TPCircularBufferInit(&_buffer, 1024 * 256);

—– AEAudioReceiver callback —–

    static void receiverCallback(__unsafe_unretained MyAudioRecorder *THIS,
                                 __unsafe_unretained AEAudioController *audioController,
                                 void *source,
                                 const AudioTimeStamp *time,
                                 UInt32 frames,
                                 AudioBufferList *audio) {
        // Store the incoming audio in the circular buffer
        TPCircularBufferCopyAudioBufferList(&THIS->_buffer, audio, time, kTPCircularBufferCopyAll, NULL);

        // Advance the running time by the duration of this batch of frames
        THIS.numberOfSecondInCurrentRecording += AEConvertFramesToSeconds(THIS.audioController, frames);

        // Once another 5-second interval has passed, write the last 5 seconds of the buffer to a file
        if (THIS.numberOfSecondInCurrentRecording > 5 * THIS->_currentSegment + 1) {
            NSLog(@"Segment %d is full, writing file", THIS->_currentSegment);
            [THIS writeBufferToFile];
            // Reset the segment-tracking variables
            THIS->_numberOfReceiverLoop = 0;
            THIS.lastTimeStamp = nil;
            THIS->_currentSegment += 1;
        } else {
            THIS->_numberOfReceiverLoop += 1;
        }

        // Remember the timestamp of the first buffer list in this segment
        if (!THIS.lastTimeStamp) {
            THIS.lastTimeStamp = (AudioTimeStamp *)time;
        }
    }

—– Writing to file (a method inside MyAudioRecorderClass) —–

    - (void)writeBufferToFile {
        NSString *documentsFolder = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0];
        NSString *filePath = [documentsFolder stringByAppendingPathComponent:
                              [NSString stringWithFormat:@"Segment_%d.aiff", _currentSegment]];
        NSError *error = nil;

        // Set up the file writer. Should the buffer be converted to AAC first,
        // or should I save the file and then convert it? And how do you do that?
        AEAudioFileWriter *writeFile = [[AEAudioFileWriter alloc] initWithAudioDescription:_audioController.inputAudioDescription];
        [writeFile beginWritingToFileAtPath:filePath fileType:kAudioFileAIFFType error:&error];
        if (error) {
            NSLog(@"Error initializing the file: %@", error);
            return;
        }

        int i = 1;
        // Loop over all the AudioBufferLists in the circular buffer, retrieving them
        // based on _lastTimeStamp (I also tried NULL here and it behaved the same way).
        while (1) {
            AudioBufferList *nextBuffer = TPCircularBufferNextBufferList(&_buffer, _lastTimeStamp);

            // If the buffer runs out, we are done writing; exit the loop and close the file
            if (!nextBuffer) {
                NSLog(@"Ran out of frames, there were [%d] AudioBufferList", i - 1);
                break;
            }
            i += 1;

            // Add the audio using AEAudioFileWriter. Is the length argument correct?
            OSStatus status = AEAudioFileWriterAddAudio(writeFile, nextBuffer, sizeof(nextBuffer->mBuffers[0].mDataByteSize));
            if (status) {
                NSLog(@"Writing error? %d", status);
            }

            // Consume (clear) the buffer list we just wrote
            TPCircularBufferConsumeNextBufferList(&_buffer);
        }

        // Close the file and hope it worked
        [writeFile finishWriting];
    }

—– Audio controller AudioStreamBasicDescription —–

    // interleaved16BitStereoAudioDescription
    AudioStreamBasicDescription audioDescription;
    memset(&audioDescription, 0, sizeof(audioDescription));
    audioDescription.mFormatID         = kAudioFormatLinearPCM;
    audioDescription.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked | kAudioFormatFlagsNativeEndian;
    audioDescription.mChannelsPerFrame = 2;
    audioDescription.mBytesPerPacket   = sizeof(SInt16) * audioDescription.mChannelsPerFrame;
    audioDescription.mFramesPerPacket  = 1;
    audioDescription.mBytesPerFrame    = sizeof(SInt16) * audioDescription.mChannelsPerFrame;
    audioDescription.mBitsPerChannel   = 8 * sizeof(SInt16);
    audioDescription.mSampleRate       = 44100.0;