Can I use AVAudioEngine to read from a file, process with audio units, and write to a file faster than real-time?

I'm working on an iOS app that uses AVAudioEngine for various things, including recording audio to a file, applying effects to that audio using audio units, and playing back the audio with the effect applied. I also use a tap to write the output to a file. When this is done, it writes to the file in real time as the audio plays back.
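For reference, the real-time tap looks roughly like this (a simplified sketch, not my exact code; the output URL, buffer size, and the self.outputFile property are placeholders):

    // Simplified sketch of the real-time tap described above.
    // outputURL and self.outputFile are placeholders.
    AVAudioMixerNode *mixer = [self.engine mainMixerNode];
    AVAudioFormat *format = [mixer outputFormatForBus:0];
    NSURL *outputURL = [NSURL fileURLWithPath:
        [NSTemporaryDirectory() stringByAppendingPathComponent:@"tap-output.caf"]];

    NSError *error = nil;
    self.outputFile = [[AVAudioFile alloc] initForWriting:outputURL
                                                 settings:format.settings
                                                    error:&error];

    [mixer installTapOnBus:0
                bufferSize:4096
                    format:format
                     block:^(AVAudioPCMBuffer *buffer, AVAudioTime *when) {
        // Called as audio plays back; each buffer is appended to the file,
        // which is why the file is only written in real time.
        NSError *writeError = nil;
        if (![self.outputFile writeFromBuffer:buffer error:&writeError]) {
            NSLog(@"Tap write error: %@", writeError);
        }
    }];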

Is it possible to set up an AVAudioEngine graph that reads from a file, processes the sound with an audio unit, and outputs to a file, but faster than real time (i.e., as fast as the hardware can process it)? The use case is to output several minutes of audio with effects applied, and I certainly don't want to wait several minutes for it to be processed.

Edit: here is the code I'm using to set up the AVAudioEngine graph and play a sound file:

    AVAudioEngine* engine = [[AVAudioEngine alloc] init];
    AVAudioPlayerNode* player = [[AVAudioPlayerNode alloc] init];
    [engine attachNode:player];

    self.player = player;
    self.engine = engine;

    if (!self.distortionEffect) {
        self.distortionEffect = [[AVAudioUnitDistortion alloc] init];
        [self.engine attachNode:self.distortionEffect];
        [self.engine connect:self.player to:self.distortionEffect format:[self.distortionEffect outputFormatForBus:0]];
        AVAudioMixerNode* mixer = [self.engine mainMixerNode];
        [self.engine connect:self.distortionEffect to:mixer format:[mixer outputFormatForBus:0]];
    }

    [self.distortionEffect loadFactoryPreset:AVAudioUnitDistortionPresetDrumsBitBrush];

    NSError* error;
    if (![self.engine startAndReturnError:&error]) {
        NSLog(@"error: %@", error);
    } else {
        NSURL* fileURL = [[NSBundle mainBundle] URLForResource:@"test2" withExtension:@"mp3"];
        AVAudioFile* file = [[AVAudioFile alloc] initForReading:fileURL error:&error];

        if (error) {
            NSLog(@"error: %@", error);
        } else {
            [self.player scheduleFile:file atTime:nil completionHandler:nil];
            [self.player play];
        }
    }

The code above plays the sound in the test2.mp3 file, with the AVAudioUnitDistortionPresetDrumsBitBrush distortion preset applied in real time.

I then modified the code above by adding these lines after [self.player play]:

    [self.engine stop];
    [self renderAudioAndWriteToFile];

I modified the renderAudioAndWriteToFile method that Vladimir provided so that, instead of allocating a new AVAudioEngine on its first line, it simply uses self.engine, which has already been set up.

However, in renderAudioAndWriteToFile, it logs "Can not render audio unit" because AudioUnitRender is returning a status of kAudioUnitErr_Uninitialized.
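(A likely explanation: stopping the engine releases its prepared resources and leaves the underlying output audio unit uninitialized, while pausing keeps it initialized. Note that the answer below calls [self.engine pause] rather than stop before rendering, along these lines:)

    // Pause instead of stop, so the output node's audio unit stays
    // initialized and AudioUnitRender can still be called on it.
    [self.player play];
    [self.engine pause];   // not [self.engine stop]
    [self renderAudioAndWriteToFile];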

Edit 2: I should mention that I'm perfectly happy to convert the AVAudioEngine code I posted to use the C APIs if that would make things easier. However, I would want the code to produce the same output as the AVAudioEngine code (including the use of the factory preset shown above).
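Aside: on iOS 11 and later, AVAudioEngine gained a built-in offline manual rendering mode that covers exactly this use case. A minimal sketch, assuming an engine and player node configured as above; totalFrames and outputFile are hypothetical stand-ins:

    // Offline manual rendering (iOS 11+): the engine pulls audio through the
    // graph as fast as possible instead of being driven by the hardware.
    NSError *error = nil;
    AVAudioFormat *format = [engine.outputNode outputFormatForBus:0];

    [engine stop];  // the engine must be stopped before changing modes
    [engine enableManualRenderingMode:AVAudioEngineManualRenderingModeOffline
                               format:format
                    maximumFrameCount:4096
                                error:&error];
    [engine startAndReturnError:&error];
    [player play];

    AVAudioPCMBuffer *buffer =
        [[AVAudioPCMBuffer alloc] initWithPCMFormat:engine.manualRenderingFormat
                                      frameCapacity:engine.manualRenderingMaximumFrameCount];

    // totalFrames: number of frames to render (e.g. the file length);
    // outputFile: an AVAudioFile opened for writing. Both are assumptions.
    while (engine.manualRenderingSampleTime < totalFrames) {
        int64_t remaining = totalFrames - engine.manualRenderingSampleTime;
        AVAudioFrameCount frameCount =
            (AVAudioFrameCount)MIN((int64_t)buffer.frameCapacity, remaining);
        AVAudioEngineManualRenderingStatus status =
            [engine renderOffline:frameCount toBuffer:buffer error:&error];
        if (status != AVAudioEngineManualRenderingStatusSuccess) break;
        [outputFile writeFromBuffer:buffer error:&error];
    }
    [engine stop];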

Answer (from Vladimir):

  1. Configure your engine and player node.
  2. Call the play method on your player node.
  3. Pause your engine.
  4. Get the audio unit from the AVAudioOutputNode (audioEngine.outputNode) via its audioUnit property.
  5. Render from the audio unit with AudioUnitRender in a loop, and write the audio buffer list to a file using Extended Audio File Services.

Example:

Audio engine configuration

    - (void)configureAudioEngine {
        self.engine = [[AVAudioEngine alloc] init];
        self.playerNode = [[AVAudioPlayerNode alloc] init];
        [self.engine attachNode:self.playerNode];
        AVAudioUnitDistortion *distortionEffect = [[AVAudioUnitDistortion alloc] init];
        [self.engine attachNode:distortionEffect];

        [self.engine connect:self.playerNode to:distortionEffect format:[distortionEffect outputFormatForBus:0]];
        self.mixer = [self.engine mainMixerNode];
        [self.engine connect:distortionEffect to:self.mixer format:[self.mixer outputFormatForBus:0]];
        [distortionEffect loadFactoryPreset:AVAudioUnitDistortionPresetDrumsBitBrush];
        NSError* error;
        if (![self.engine startAndReturnError:&error])
            NSLog(@"Can't start engine: %@", error);
        else
            [self scheduleFileToPlay];
    }

    - (void)scheduleFileToPlay {
        NSError* error;
        NSURL *fileURL = [[NSBundle mainBundle] URLForResource:@"filename" withExtension:@"m4a"];
        self.file = [[AVAudioFile alloc] initForReading:fileURL error:&error];

        if (self.file)
            [self.playerNode scheduleFile:self.file atTime:nil completionHandler:nil];
        else
            NSLog(@"Can't read file: %@", error);
    }

Rendering methods

    - (void)renderAudioAndWriteToFile {
        [self.playerNode play];
        [self.engine pause];
        AVAudioOutputNode *outputNode = self.engine.outputNode;
        AudioStreamBasicDescription const *audioDescription = [outputNode outputFormatForBus:0].streamDescription;
        NSString *path = [self filePath];
        ExtAudioFileRef audioFile = [self createAndSetupExtAudioFileWithASBD:audioDescription andFilePath:path];
        if (!audioFile)
            return;
        AVURLAsset *asset = [AVURLAsset assetWithURL:self.file.url];
        NSTimeInterval duration = CMTimeGetSeconds(asset.duration);
        NSUInteger lengthInFrames = duration * audioDescription->mSampleRate;
        const NSUInteger kBufferLength = 4096;
        AudioBufferList *bufferList = AEAllocateAndInitAudioBufferList(*audioDescription, kBufferLength);
        AudioTimeStamp timeStamp;
        memset(&timeStamp, 0, sizeof(timeStamp));
        timeStamp.mFlags = kAudioTimeStampSampleTimeValid;
        OSStatus status = noErr;
        for (NSUInteger i = kBufferLength; i < lengthInFrames; i += kBufferLength) {
            status = [self renderToBufferList:bufferList writeToFile:audioFile bufferLength:kBufferLength timeStamp:&timeStamp];
            if (status != noErr)
                break;
        }
        if (status == noErr && timeStamp.mSampleTime < lengthInFrames) {
            NSUInteger restBufferLength = (NSUInteger)(lengthInFrames - timeStamp.mSampleTime);
            AudioBufferList *restBufferList = AEAllocateAndInitAudioBufferList(*audioDescription, restBufferLength);
            status = [self renderToBufferList:restBufferList writeToFile:audioFile bufferLength:restBufferLength timeStamp:&timeStamp];
            AEFreeAudioBufferList(restBufferList);
        }
        AEFreeAudioBufferList(bufferList);
        ExtAudioFileDispose(audioFile);
        if (status != noErr)
            NSLog(@"An error has occurred");
        else
            NSLog(@"Finished writing to file at path: %@", path);
    }

    - (NSString *)filePath {
        NSArray *documentsFolders = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
        NSString *fileName = [NSString stringWithFormat:@"%@.m4a", [[NSUUID UUID] UUIDString]];
        NSString *path = [documentsFolders[0] stringByAppendingPathComponent:fileName];
        return path;
    }

    - (ExtAudioFileRef)createAndSetupExtAudioFileWithASBD:(AudioStreamBasicDescription const *)audioDescription
                                              andFilePath:(NSString *)path {
        AudioStreamBasicDescription destinationFormat;
        memset(&destinationFormat, 0, sizeof(destinationFormat));
        destinationFormat.mChannelsPerFrame = audioDescription->mChannelsPerFrame;
        destinationFormat.mSampleRate = audioDescription->mSampleRate;
        destinationFormat.mFormatID = kAudioFormatMPEG4AAC;
        ExtAudioFileRef audioFile;
        OSStatus status = ExtAudioFileCreateWithURL(
            (__bridge CFURLRef)[NSURL fileURLWithPath:path],
            kAudioFileM4AType,
            &destinationFormat,
            NULL,
            kAudioFileFlags_EraseFile,
            &audioFile
        );
        if (status != noErr) {
            NSLog(@"Can not create ext audio file");
            return nil;
        }
        UInt32 codecManufacturer = kAppleSoftwareAudioCodecManufacturer;
        status = ExtAudioFileSetProperty(
            audioFile, kExtAudioFileProperty_CodecManufacturer, sizeof(UInt32), &codecManufacturer
        );
        status = ExtAudioFileSetProperty(
            audioFile, kExtAudioFileProperty_ClientDataFormat, sizeof(AudioStreamBasicDescription), audioDescription
        );
        status = ExtAudioFileWriteAsync(audioFile, 0, NULL);
        if (status != noErr) {
            NSLog(@"Can not setup ext audio file");
            return nil;
        }
        return audioFile;
    }

    - (OSStatus)renderToBufferList:(AudioBufferList *)bufferList
                       writeToFile:(ExtAudioFileRef)audioFile
                      bufferLength:(NSUInteger)bufferLength
                         timeStamp:(AudioTimeStamp *)timeStamp {
        [self clearBufferList:bufferList];
        AudioUnit outputUnit = self.engine.outputNode.audioUnit;
        OSStatus status = AudioUnitRender(outputUnit, 0, timeStamp, 0, bufferLength, bufferList);
        if (status != noErr) {
            NSLog(@"Can not render audio unit");
            return status;
        }
        timeStamp->mSampleTime += bufferLength;
        status = ExtAudioFileWrite(audioFile, bufferLength, bufferList);
        if (status != noErr)
            NSLog(@"Can not write audio to file");
        return status;
    }

    - (void)clearBufferList:(AudioBufferList *)bufferList {
        for (int bufferIndex = 0; bufferIndex < bufferList->mNumberBuffers; bufferIndex++) {
            memset(bufferList->mBuffers[bufferIndex].mData, 0, bufferList->mBuffers[bufferIndex].mDataByteSize);
        }
    }
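Putting it together, a hypothetical caller of the two entry points above might look like this:

    // Hypothetical driver for the methods above (names from the answer's code):
    [self configureAudioEngine];       // builds the graph, starts the engine,
                                       // and schedules the file on the player
    [self renderAudioAndWriteToFile];  // pauses the engine and pulls audio
                                       // through the graph faster than real time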

I used some functions from this cool framework:

    AudioBufferList *AEAllocateAndInitAudioBufferList(AudioStreamBasicDescription audioFormat, int frameCount) {
        int numberOfBuffers = audioFormat.mFormatFlags & kAudioFormatFlagIsNonInterleaved ? audioFormat.mChannelsPerFrame : 1;
        int channelsPerBuffer = audioFormat.mFormatFlags & kAudioFormatFlagIsNonInterleaved ? 1 : audioFormat.mChannelsPerFrame;
        int bytesPerBuffer = audioFormat.mBytesPerFrame * frameCount;
        AudioBufferList *audio = malloc(sizeof(AudioBufferList) + (numberOfBuffers - 1) * sizeof(AudioBuffer));
        if ( !audio ) {
            return NULL;
        }
        audio->mNumberBuffers = numberOfBuffers;
        for ( int i = 0; i < numberOfBuffers; i++ ) {
            if ( bytesPerBuffer > 0 ) {
                audio->mBuffers[i].mData = calloc(bytesPerBuffer, 1);
                if ( !audio->mBuffers[i].mData ) {
                    for ( int j = 0; j < i; j++ ) free(audio->mBuffers[j].mData);
                    free(audio);
                    return NULL;
                }
            } else {
                audio->mBuffers[i].mData = NULL;
            }
            audio->mBuffers[i].mDataByteSize = bytesPerBuffer;
            audio->mBuffers[i].mNumberChannels = channelsPerBuffer;
        }
        return audio;
    }

    void AEFreeAudioBufferList(AudioBufferList *bufferList) {
        for ( int i = 0; i < bufferList->mNumberBuffers; i++ ) {
            if ( bufferList->mBuffers[i].mData ) free(bufferList->mBuffers[i].mData);
        }
        free(bufferList);
    }
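A note on the allocation logic: for non-interleaved formats (kAudioFormatFlagIsNonInterleaved set), the helper allocates one AudioBuffer per channel, each carrying a single channel; for interleaved formats it allocates a single buffer holding all channels. That mirrors how Core Audio lays out an AudioBufferList in the two cases, so the same helper works with whatever stream description the output node reports.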