Adjusting the volume of tracks within an asset using AVMutableAudioMix

I'm applying an AVMutableAudioMix to an asset I've created; the asset generally consists of 3-5 audio tracks (no video). The goal is to add several volume commands throughout the play time, i.e. I'd like to set the volume to 0.1 at 1 second, then 0.5 at 2 seconds, then back to 0.1 at 3 seconds. I'm just trying to do this with AVPlayer for now, but will later also use it when exporting the AVSession to a file. The problem is that the mix only seems to care about the first volume command and appears to ignore all later ones: if the first command sets the volume to 0.1, that becomes the permanent volume for this track for the rest of the asset. This is despite the fact that it looks like you should be able to add any number of these commands, given that the inputParameters member of AVMutableAudioMix is an NSArray holding a series of AVMutableAudioMixInputParameters. Has anyone figured this out?

EDIT: I've partly figured this out. I'm able to add several volume changes throughout a given track, but the timings seem way off and I don't know how to fix that. For example, setting the volume to 0.0 at 5 seconds, then 1.0 at 10 seconds, and then back to 0.0 at 15 seconds would lead you to assume the volume switches on and off abruptly at those points, but the results are always very unpredictable: sometimes sounds ramp up and down, and sometimes it works (with the sudden volume changes you would expect from setVolume). If anyone has gotten an AudioMix to work, please provide an example. A sketch of the setup I'm describing follows.
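For reference, here is that scenario (0.0 at 5 s, 1.0 at 10 s, back to 0.0 at 15 s) expressed as setVolume:atTime: commands in ascending time order. This is a minimal sketch, assuming asset is the multi-track AVAsset in question; it is not confirmed working code from the original post:

    // Build parameters for the first audio track of the asset.
    AVAssetTrack *track = [[asset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];
    AVMutableAudioMixInputParameters *params =
        [AVMutableAudioMixInputParameters audioMixInputParametersWithTrack:track];

    // Each command holds its level until the next command takes effect.
    [params setVolume:1.0 atTime:kCMTimeZero];
    [params setVolume:0.0 atTime:CMTimeMakeWithSeconds(5, 600)];
    [params setVolume:1.0 atTime:CMTimeMakeWithSeconds(10, 600)];
    [params setVolume:0.0 atTime:CMTimeMakeWithSeconds(15, 600)];

    AVMutableAudioMix *mix = [AVMutableAudioMix audioMix];
    mix.inputParameters = [NSArray arrayWithObject:params];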

The code I'm using to change the track volume is:

    AVURLAsset *soundTrackAsset = [[AVURLAsset alloc] initWithURL:trackUrl options:nil];

    AVMutableAudioMixInputParameters *audioInputParams = [AVMutableAudioMixInputParameters audioMixInputParameters];
    [audioInputParams setVolume:0.5 atTime:kCMTimeZero];
    [audioInputParams setTrackID:[[[soundTrackAsset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0] trackID]];

    audioMix = [AVMutableAudioMix audioMix];
    audioMix.inputParameters = [NSArray arrayWithObject:audioInputParams];
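Since the question also mentions previewing with AVPlayer: the same mix object can be attached to an AVPlayerItem, which applies it during playback. A minimal sketch, reusing soundTrackAsset and audioMix from above:

    AVPlayerItem *playerItem = [AVPlayerItem playerItemWithAsset:soundTrackAsset];
    playerItem.audioMix = audioMix;   // the player item honors the mix during playback
    AVPlayer *player = [AVPlayer playerWithPlayerItem:playerItem];
    [player play];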

Don't forget to add the audio mix to your AVAssetExportSession:

 exportSession.audioMix = audioMix; 
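For context, here is a sketch of the surrounding export setup. The preset, file type, and outputURL are illustrative placeholders, not from the original post:

    AVAssetExportSession *exportSession =
        [[AVAssetExportSession alloc] initWithAsset:soundTrackAsset
                                         presetName:AVAssetExportPresetAppleM4A];
    exportSession.outputURL = outputURL;               // hypothetical destination URL
    exportSession.outputFileType = AVFileTypeAppleM4A;
    exportSession.audioMix = audioMix;
    [exportSession exportAsynchronouslyWithCompletionHandler:^{
        if (exportSession.status == AVAssetExportSessionStatusCompleted) {
            NSLog(@"Export finished");
        } else {
            NSLog(@"Export failed: %@", exportSession.error);
        }
    }];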

However, I've noticed that it doesn't work with all formats, so if you keep having trouble with AVFoundation, you can use the following function to change the volume level of a stored file. Be aware, though, that this function can be quite slow:

    - (void)scaleAudioFileAmplitude:(NSURL *)theURL withAmpScale:(float)ampScale {
        OSStatus err = noErr;

        ExtAudioFileRef audiofile;
        ExtAudioFileOpenURL((CFURLRef)theURL, &audiofile);
        assert(audiofile);

        // get some info about the file's format.
        AudioStreamBasicDescription fileFormat;
        UInt32 size = sizeof(fileFormat);
        err = ExtAudioFileGetProperty(audiofile, kExtAudioFileProperty_FileDataFormat, &size, &fileFormat);

        // we'll need to know what type of file it is later when we write
        AudioFileID aFile;
        size = sizeof(aFile);
        err = ExtAudioFileGetProperty(audiofile, kExtAudioFileProperty_AudioFile, &size, &aFile);
        AudioFileTypeID fileType;
        size = sizeof(fileType);
        err = AudioFileGetProperty(aFile, kAudioFilePropertyFileFormat, &size, &fileType);

        // tell the ExtAudioFile API what format we want samples back in
        AudioStreamBasicDescription clientFormat;
        bzero(&clientFormat, sizeof(clientFormat));
        clientFormat.mChannelsPerFrame = fileFormat.mChannelsPerFrame;
        clientFormat.mBytesPerFrame = 4;
        clientFormat.mBytesPerPacket = clientFormat.mBytesPerFrame;
        clientFormat.mFramesPerPacket = 1;
        clientFormat.mBitsPerChannel = 32;
        clientFormat.mFormatID = kAudioFormatLinearPCM;
        clientFormat.mSampleRate = fileFormat.mSampleRate;
        clientFormat.mFormatFlags = kLinearPCMFormatFlagIsFloat | kAudioFormatFlagIsNonInterleaved;
        err = ExtAudioFileSetProperty(audiofile, kExtAudioFileProperty_ClientDataFormat, sizeof(clientFormat), &clientFormat);

        // find out how many frames we need to read
        SInt64 numFrames = 0;
        size = sizeof(numFrames);
        err = ExtAudioFileGetProperty(audiofile, kExtAudioFileProperty_FileLengthFrames, &size, &numFrames);

        // create the buffers for reading in data
        AudioBufferList *bufferList = malloc(sizeof(AudioBufferList) + sizeof(AudioBuffer) * (clientFormat.mChannelsPerFrame - 1));
        bufferList->mNumberBuffers = clientFormat.mChannelsPerFrame;
        for (int ii = 0; ii < bufferList->mNumberBuffers; ++ii) {
            bufferList->mBuffers[ii].mDataByteSize = sizeof(float) * numFrames;
            bufferList->mBuffers[ii].mNumberChannels = 1;
            bufferList->mBuffers[ii].mData = malloc(bufferList->mBuffers[ii].mDataByteSize);
        }

        // read in the data
        UInt32 rFrames = (UInt32)numFrames;
        err = ExtAudioFileRead(audiofile, &rFrames, bufferList);

        // close the file
        err = ExtAudioFileDispose(audiofile);

        // process the audio
        for (int ii = 0; ii < bufferList->mNumberBuffers; ++ii) {
            float *fBuf = (float *)bufferList->mBuffers[ii].mData;
            for (int jj = 0; jj < rFrames; ++jj) {
                *fBuf = *fBuf * ampScale;
                fBuf++;
            }
        }

        // open the file for writing
        err = ExtAudioFileCreateWithURL((CFURLRef)theURL, fileType, &fileFormat, NULL, kAudioFileFlags_EraseFile, &audiofile);

        // tell the ExtAudioFile API what format we'll be sending samples in
        err = ExtAudioFileSetProperty(audiofile, kExtAudioFileProperty_ClientDataFormat, sizeof(clientFormat), &clientFormat);

        // write the data
        err = ExtAudioFileWrite(audiofile, rFrames, bufferList);

        // close the file
        ExtAudioFileDispose(audiofile);

        // destroy the buffers
        for (int ii = 0; ii < bufferList->mNumberBuffers; ++ii) {
            free(bufferList->mBuffers[ii].mData);
        }
        free(bufferList);
        bufferList = NULL;
    }

Also note that you may need to adjust the ampScale you want depending on where your volume value is coming from. The system volume ranges from 0 to 1 and can be obtained by calling AudioSessionGetProperty:

    Float32 volume;
    UInt32 dataSize = sizeof(Float32);
    AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareOutputVolume,
                            &dataSize,
                            &volume);
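For example, if you wanted the stored file's level to match the current hardware volume, you could (hypothetically) pass that value straight into the scaling function above:

    // Scale the stored file by the current hardware output volume (0.0-1.0).
    [self scaleAudioFileAmplitude:theURL withAmpScale:volume];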

The Audio Extension Toolbox function above no longer works as-is, due to an API change: it now requires you to set up an audio session category. When setting the export properties, I was getting an error code of '?cat' (which NSError prints out in decimal).

Here is the code that works now, on iOS 5.1. It is also incredibly slow; just from watching it I would say several times slower. It is also memory intensive, since it appears to load the whole file into memory, which produces memory warnings for 10 MB mp3 files.

    - (void)scaleAudioFileAmplitude:(NSURL *)theURL withAmpScale:(float)ampScale {
        OSStatus err = noErr;

        ExtAudioFileRef audiofile;
        ExtAudioFileOpenURL((CFURLRef)theURL, &audiofile);
        assert(audiofile);

        // get some info about the file's format.
        AudioStreamBasicDescription fileFormat;
        UInt32 size = sizeof(fileFormat);
        err = ExtAudioFileGetProperty(audiofile, kExtAudioFileProperty_FileDataFormat, &size, &fileFormat);

        // we'll need to know what type of file it is later when we write
        AudioFileID aFile;
        size = sizeof(aFile);
        err = ExtAudioFileGetProperty(audiofile, kExtAudioFileProperty_AudioFile, &size, &aFile);
        AudioFileTypeID fileType;
        size = sizeof(fileType);
        err = AudioFileGetProperty(aFile, kAudioFilePropertyFileFormat, &size, &fileType);

        // tell the ExtAudioFile API what format we want samples back in
        AudioStreamBasicDescription clientFormat;
        bzero(&clientFormat, sizeof(clientFormat));
        clientFormat.mChannelsPerFrame = fileFormat.mChannelsPerFrame;
        clientFormat.mBytesPerFrame = 4;
        clientFormat.mBytesPerPacket = clientFormat.mBytesPerFrame;
        clientFormat.mFramesPerPacket = 1;
        clientFormat.mBitsPerChannel = 32;
        clientFormat.mFormatID = kAudioFormatLinearPCM;
        clientFormat.mSampleRate = fileFormat.mSampleRate;
        clientFormat.mFormatFlags = kLinearPCMFormatFlagIsFloat | kAudioFormatFlagIsNonInterleaved;
        err = ExtAudioFileSetProperty(audiofile, kExtAudioFileProperty_ClientDataFormat, sizeof(clientFormat), &clientFormat);

        // find out how many frames we need to read
        SInt64 numFrames = 0;
        size = sizeof(numFrames);
        err = ExtAudioFileGetProperty(audiofile, kExtAudioFileProperty_FileLengthFrames, &size, &numFrames);

        // create the buffers for reading in data
        AudioBufferList *bufferList = malloc(sizeof(AudioBufferList) + sizeof(AudioBuffer) * (clientFormat.mChannelsPerFrame - 1));
        bufferList->mNumberBuffers = clientFormat.mChannelsPerFrame;
        //printf("bufferList->mNumberBuffers = %lu \n\n", bufferList->mNumberBuffers);
        for (int ii = 0; ii < bufferList->mNumberBuffers; ++ii) {
            bufferList->mBuffers[ii].mDataByteSize = sizeof(float) * numFrames;
            bufferList->mBuffers[ii].mNumberChannels = 1;
            bufferList->mBuffers[ii].mData = malloc(bufferList->mBuffers[ii].mDataByteSize);
        }

        // read in the data
        UInt32 rFrames = (UInt32)numFrames;
        err = ExtAudioFileRead(audiofile, &rFrames, bufferList);

        // close the file
        err = ExtAudioFileDispose(audiofile);

        // process the audio
        for (int ii = 0; ii < bufferList->mNumberBuffers; ++ii) {
            float *fBuf = (float *)bufferList->mBuffers[ii].mData;
            for (int jj = 0; jj < rFrames; ++jj) {
                *fBuf = *fBuf * ampScale;
                fBuf++;
            }
        }

        // open the file for writing
        err = ExtAudioFileCreateWithURL((CFURLRef)theURL, fileType, &fileFormat, NULL, kAudioFileFlags_EraseFile, &audiofile);

        NSError *error = NULL;
        /*************************** You Need This Now ****************************/
        AVAudioSession *session = [AVAudioSession sharedInstance];
        [session setCategory:AVAudioSessionCategoryAudioProcessing error:&error];
        /************************* End You Need This Now **************************/

        // tell the ExtAudioFile API what format we'll be sending samples in
        err = ExtAudioFileSetProperty(audiofile, kExtAudioFileProperty_ClientDataFormat, sizeof(clientFormat), &clientFormat);
        error = [NSError errorWithDomain:NSOSStatusErrorDomain code:err userInfo:nil];
        NSLog(@"Error: %@", [error description]);

        // write the data
        err = ExtAudioFileWrite(audiofile, rFrames, bufferList);

        // close the file
        ExtAudioFileDispose(audiofile);

        // destroy the buffers
        for (int ii = 0; ii < bufferList->mNumberBuffers; ++ii) {
            free(bufferList->mBuffers[ii].mData);
        }
        free(bufferList);
        bufferList = NULL;
    }

Thanks for the help already given in this post. One thing I'd like to add: you should restore the AVAudioSession to its original category when you are done, or you will end up not playing anything afterwards.

    AVAudioSession *session = [AVAudioSession sharedInstance];
    NSString *originalSessionCategory = [session category];
    [session setCategory:AVAudioSessionCategoryAudioProcessing error:&error];
    ...
    ...
    // restore category
    [session setCategory:originalSessionCategory error:&error];
    if (error) NSLog(@"%@", [error localizedDescription]);

Cheers

For setting different volumes on the tracks of a mutable composition, you can use the code below:

    self.audioMix = [AVMutableAudioMix audioMix];

    AVMutableAudioMixInputParameters *audioInputParams = [AVMutableAudioMixInputParameters audioMixInputParameters];
    [audioInputParams setVolume:0.1 atTime:kCMTimeZero];
    audioInputParams.trackID = compositionAudioTrack2.trackID;

    AVMutableAudioMixInputParameters *audioInputParams1 = [AVMutableAudioMixInputParameters audioMixInputParameters];
    [audioInputParams1 setVolume:0.9 atTime:kCMTimeZero];
    audioInputParams1.trackID = compositionAudioTrack1.trackID;

    AVMutableAudioMixInputParameters *audioInputParams2 = [AVMutableAudioMixInputParameters audioMixInputParameters];
    [audioInputParams2 setVolume:0.3 atTime:kCMTimeZero];
    audioInputParams2.trackID = compositionAudioTrack.trackID;

    self.audioMix.inputParameters = [NSArray arrayWithObjects:audioInputParams, audioInputParams1, audioInputParams2, nil];
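If you need a level to change over time rather than stay fixed for the whole composition, AVMutableAudioMixInputParameters also offers setVolumeRampFromStartVolume:toEndVolume:timeRange:. A minimal sketch that fades compositionAudioTrack1 out over 3 seconds (the times here are purely illustrative):

    // Fade track 1 from its 0.9 level down to silence between t=12s and t=15s.
    CMTimeRange fadeRange = CMTimeRangeMake(CMTimeMakeWithSeconds(12, 600),
                                            CMTimeMakeWithSeconds(3, 600));
    [audioInputParams1 setVolumeRampFromStartVolume:0.9
                                        toEndVolume:0.0
                                          timeRange:fadeRange];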