How can I play audio backwards?

Someone suggested reading the audio data from start to end, creating a copy written from end to start, and then simply playing back that reversed audio data.
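Conceptually, that is just a reversed copy loop. As a sketch, for plain 16-bit mono PCM that is already decoded into memory (the names here are invented for illustration; getting the real audio data into such a buffer is the part I am unsure about):

    #include <stdlib.h>
    #include <stdint.h>

    // Sketch: make a reversed copy of a mono buffer of 16-bit PCM samples.
    // `samples` and `frameCount` are hypothetical stand-ins for whatever
    // buffer the decoding step produces. The caller frees the result.
    int16_t *reversedCopy(const int16_t *samples, size_t frameCount)
    {
        int16_t *reversed = malloc(frameCount * sizeof(int16_t));
        if (reversed == NULL) return NULL;

        for (size_t i = 0; i < frameCount; i++) {
            reversed[i] = samples[frameCount - 1 - i];
        }
        return reversed;
    }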

Are there any existing examples of how to do this on iOS?

I found a sample project called MixerHost, which at some point uses an AudioUnitSampleType to hold the audio data read from a file, and assigns it to a buffer.

It is defined as:

    typedef SInt32 AudioUnitSampleType;
    #define kAudioUnitSampleFractionBits 24

And according to Apple:

The canonical audio sample type for audio units and other audio processing in iPhone OS is noninterleaved linear PCM with 8.24-bit fixed-point samples.

In other words, it holds non-interleaved linear PCM audio data.
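As a side note on what 8.24 means here (my own sketch, not code from MixerHost): each 32-bit sample carries 24 fractional bits, so 1.0 in floating point corresponds to 1 << 24. For reversal purposes the fixed-point interpretation does not actually matter, since you swap whole 32-bit samples either way, but the conversion looks like this:

    #include <stdint.h>

    // Sketch: 8.24 fixed point, per the typedef and
    // kAudioUnitSampleFractionBits (= 24) quoted above.
    static int32_t floatTo824(float f)
    {
        return (int32_t)(f * (float)(1 << 24));   // 1.0f -> 0x01000000
    }

    static float fixed824ToFloat(int32_t s)
    {
        return (float)s / (float)(1 << 24);
    }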

But I can't figure out where this data is read in, and where it is stored. This is the code that loads the audio data and buffers it:

    - (void) readAudioFilesIntoMemory
    {
        for (int audioFile = 0; audioFile < NUM_FILES; ++audioFile) {

            NSLog (@"readAudioFilesIntoMemory - file %i", audioFile);

            // Instantiate an extended audio file object.
            ExtAudioFileRef audioFileObject = 0;

            // Open an audio file and associate it with the extended audio file object.
            OSStatus result = ExtAudioFileOpenURL (sourceURLArray[audioFile], &audioFileObject);

            if (noErr != result || NULL == audioFileObject) {
                [self printErrorMessage: @"ExtAudioFileOpenURL" withStatus: result];
                return;
            }

            // Get the audio file's length in frames.
            UInt64 totalFramesInFile = 0;
            UInt32 frameLengthPropertySize = sizeof (totalFramesInFile);

            result = ExtAudioFileGetProperty (
                         audioFileObject,
                         kExtAudioFileProperty_FileLengthFrames,
                         &frameLengthPropertySize,
                         &totalFramesInFile
                     );

            if (noErr != result) {
                [self printErrorMessage: @"ExtAudioFileGetProperty (audio file length in frames)" withStatus: result];
                return;
            }

            // Assign the frame count to the soundStructArray instance variable.
            soundStructArray[audioFile].frameCount = totalFramesInFile;

            // Get the audio file's number of channels.
            AudioStreamBasicDescription fileAudioFormat = {0};
            UInt32 formatPropertySize = sizeof (fileAudioFormat);

            result = ExtAudioFileGetProperty (
                         audioFileObject,
                         kExtAudioFileProperty_FileDataFormat,
                         &formatPropertySize,
                         &fileAudioFormat
                     );

            if (noErr != result) {
                [self printErrorMessage: @"ExtAudioFileGetProperty (file audio format)" withStatus: result];
                return;
            }

            UInt32 channelCount = fileAudioFormat.mChannelsPerFrame;

            // Allocate memory in the soundStructArray instance variable to hold the left channel,
            // or mono, audio data.
            soundStructArray[audioFile].audioDataLeft =
                (AudioUnitSampleType *) calloc (totalFramesInFile, sizeof (AudioUnitSampleType));

            AudioStreamBasicDescription importFormat = {0};
            if (2 == channelCount) {

                soundStructArray[audioFile].isStereo = YES;
                // Sound is stereo, so allocate memory in the soundStructArray instance variable to
                // hold the right channel audio data.
                soundStructArray[audioFile].audioDataRight =
                    (AudioUnitSampleType *) calloc (totalFramesInFile, sizeof (AudioUnitSampleType));
                importFormat = stereoStreamFormat;

            } else if (1 == channelCount) {

                soundStructArray[audioFile].isStereo = NO;
                importFormat = monoStreamFormat;

            } else {

                NSLog (@"*** WARNING: File format not supported - wrong number of channels");
                ExtAudioFileDispose (audioFileObject);
                return;
            }

            // Assign the appropriate mixer input bus stream data format to the extended audio
            // file object. This is the format used for the audio data placed into the audio
            // buffer in the SoundStruct data structure, which is in turn used in the
            // inputRenderCallback callback function.
            result = ExtAudioFileSetProperty (
                         audioFileObject,
                         kExtAudioFileProperty_ClientDataFormat,
                         sizeof (importFormat),
                         &importFormat
                     );

            if (noErr != result) {
                [self printErrorMessage: @"ExtAudioFileSetProperty (client data format)" withStatus: result];
                return;
            }

            // Set up an AudioBufferList struct, which has two roles:
            //
            //   1. It gives the ExtAudioFileRead function the configuration it
            //      needs to correctly provide the data to the buffer.
            //
            //   2. It points to the soundStructArray[audioFile].audioDataLeft buffer, so
            //      that audio data obtained from disk using the ExtAudioFileRead function
            //      goes to that buffer.

            // Allocate memory for the buffer list struct according to the number of
            // channels it represents.
            AudioBufferList *bufferList;

            bufferList = (AudioBufferList *) malloc (
                             sizeof (AudioBufferList) + sizeof (AudioBuffer) * (channelCount - 1)
                         );

            if (NULL == bufferList) {
                NSLog (@"*** malloc failure for allocating bufferList memory");
                return;
            }

            // Initialize the mNumberBuffers member.
            bufferList->mNumberBuffers = channelCount;

            // Initialize the mBuffers member to 0.
            AudioBuffer emptyBuffer = {0};
            size_t arrayIndex;
            for (arrayIndex = 0; arrayIndex < channelCount; arrayIndex++) {
                bufferList->mBuffers[arrayIndex] = emptyBuffer;
            }

            // Set up the AudioBuffer structs in the buffer list.
            bufferList->mBuffers[0].mNumberChannels = 1;
            bufferList->mBuffers[0].mDataByteSize   = totalFramesInFile * sizeof (AudioUnitSampleType);
            bufferList->mBuffers[0].mData           = soundStructArray[audioFile].audioDataLeft;

            if (2 == channelCount) {
                bufferList->mBuffers[1].mNumberChannels = 1;
                bufferList->mBuffers[1].mDataByteSize   = totalFramesInFile * sizeof (AudioUnitSampleType);
                bufferList->mBuffers[1].mData           = soundStructArray[audioFile].audioDataRight;
            }

            // Perform a synchronous, sequential read of the audio data out of the file and
            // into the soundStructArray[audioFile].audioDataLeft and (if stereo)
            // .audioDataRight members.
            UInt32 numberOfPacketsToRead = (UInt32) totalFramesInFile;

            result = ExtAudioFileRead (
                         audioFileObject,
                         &numberOfPacketsToRead,
                         bufferList
                     );

            free (bufferList);

            if (noErr != result) {

                [self printErrorMessage: @"ExtAudioFileRead failure - " withStatus: result];

                // If reading from the file failed, then free the memory for the sound buffer.
                free (soundStructArray[audioFile].audioDataLeft);
                soundStructArray[audioFile].audioDataLeft = 0;

                if (2 == channelCount) {
                    free (soundStructArray[audioFile].audioDataRight);
                    soundStructArray[audioFile].audioDataRight = 0;
                }

                ExtAudioFileDispose (audioFileObject);
                return;
            }

            NSLog (@"Finished reading file %i into memory", audioFile);

            // Set the sample index to zero, so that playback starts at the
            // beginning of the sound.
            soundStructArray[audioFile].sampleNumber = 0;

            // Dispose of the extended audio file object, which also
            // closes the associated file.
            ExtAudioFileDispose (audioFileObject);
        }
    }

Which part holds the array of audio samples that has to be reversed? Is it this?

    bufferList->mBuffers[0].mData = soundStructArray[audioFile].audioDataLeft;

Note: audioDataLeft is declared as an AudioUnitSampleType * — a pointer to SInt32 samples — rather than as a fixed array; the calloc call above sizes it to totalFramesInFile samples.
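So that pointer is effectively the sample array in question. For reference, the surrounding struct, reconstructed from the loader code above (not copied verbatim from MixerHost, so field order may differ), looks roughly like this:

    // Rough reconstruction of MixerHost's soundStruct, inferred from
    // readAudioFilesIntoMemory above.
    typedef struct {
        BOOL                 isStereo;        // does the file have two channels?
        UInt64               frameCount;      // total frames read from the file
        UInt32               sampleNumber;    // current playback position
        AudioUnitSampleType *audioDataLeft;   // left (or mono) channel samples
        AudioUnitSampleType *audioDataRight;  // right channel samples, if stereo
    } soundStruct;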

I found a clue on the Core Audio mailing list:

Well, nothing to do with the iPh*n* as far as I know, except that some of the audio APIs have been omitted — I'm not a member of that program. AFAIR, AudioFile.h and ExtendedAudioFile.h should provide you with what you need to read or write a caf and access its streams/channels. Basically, you want to read each channel/stream backwards, so, if you don't need the properties of the audio file, it is pretty straightforward once you have a handle on that channel's data, assuming it is not in a compressed format. Considering the number of formats a caf can represent, this could take a few more lines of code than you're thinking. Once you have a handle on the uncompressed data, it should be about as simple as reversing a string. Then you would of course replace the file's data with the reversed data, or you could just feed the audio output (or wherever you're sending the reversed signal) whatever stream you're reading backwards.

This is my attempt, but when I assign the reversed buffer to the mData of both channels, I hear nothing:

    AudioUnitSampleType *leftData = soundStructArray[audioFile].audioDataLeft;

    AudioUnitSampleType *reversedData =
        (AudioUnitSampleType *) calloc (totalFramesInFile, sizeof (AudioUnitSampleType));

    UInt64 j = 0;
    for (UInt64 i = (totalFramesInFile - 1); i > -1; i--) {
        reversedData[j] = leftData[i];
        j++;
    }
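One concrete problem with this loop: since i is a UInt64, the -1 in i > -1 is converted to a huge unsigned value, so the condition is false on the very first test and the body never executes — reversedData stays all zeros from calloc, which by itself would produce silence. A signed index fixes the termination test:

    // Same reversal, but with a signed index so `i >= 0` terminates correctly.
    UInt64 j = 0;
    for (SInt64 i = (SInt64)totalFramesInFile - 1; i >= 0; i--) {
        reversedData[j] = leftData[i];
        j++;
    }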

Typically, when an ASBD is being used, the fields describe the complete layout of the sample data in the buffers that are represented by this description — where typically those buffers are represented by an AudioBuffer that is contained in an AudioBufferList.

However, when an ASBD has the kAudioFormatFlagIsNonInterleaved flag, the AudioBufferList has a different structure and semantics. In this case, the ASBD fields describe the format of ONE of the AudioBuffers contained in the list, and each AudioBuffer in the list holds a single (mono) channel of audio data. Then, the ASBD's mChannelsPerFrame indicates the total number of AudioBuffers contained in the AudioBufferList — where each buffer contains one channel. This is used primarily with the AudioUnit (and AudioConverter) representations of this list — and won't be found in the AudioHardware usage of this structure.
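To make the non-interleaved case concrete, here is a sketch (mine, not from the docs) of building such an AudioBufferList. It mirrors what readAudioFilesIntoMemory above does: one mono AudioBuffer per channel, with the ASBD's mChannelsPerFrame giving the buffer count:

    #include <AudioToolbox/AudioToolbox.h>
    #include <stdlib.h>

    // Sketch: allocate a non-interleaved AudioBufferList holding
    // `channelCount` mono channels of `frames` 8.24 samples each.
    AudioBufferList *makeNonInterleavedList(UInt32 channelCount, UInt32 frames)
    {
        AudioBufferList *list = (AudioBufferList *) malloc (
            sizeof (AudioBufferList) + sizeof (AudioBuffer) * (channelCount - 1));
        if (list == NULL) return NULL;

        list->mNumberBuffers = channelCount;   // == the ASBD's mChannelsPerFrame
        for (UInt32 ch = 0; ch < channelCount; ch++) {
            list->mBuffers[ch].mNumberChannels = 1;   // each buffer is mono
            list->mBuffers[ch].mDataByteSize   = frames * sizeof (AudioUnitSampleType);
            list->mBuffers[ch].mData           = calloc (frames, sizeof (AudioUnitSampleType));
        }
        return list;
    }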

I once worked on a sample app that recorded what the user said and played it back in reverse. I used Core Audio to accomplish this. Link to the app code.

    /* Since every sample is 16 bits (2 bytes) (mono channel), you can load each
       sample one at a time by starting at the end of the recording and reading
       backwards, copying each sample into a different buffer. When you get to
       the start of the data, you have reversed the data and playback will be
       reversed. */

    // set up output file
    AudioFileID outputAudioFile;

    AudioStreamBasicDescription myPCMFormat;
    myPCMFormat.mSampleRate = 16000.00;
    myPCMFormat.mFormatID = kAudioFormatLinearPCM;
    myPCMFormat.mFormatFlags = kAudioFormatFlagsCanonical;
    myPCMFormat.mChannelsPerFrame = 1;
    myPCMFormat.mFramesPerPacket = 1;
    myPCMFormat.mBitsPerChannel = 16;
    myPCMFormat.mBytesPerPacket = 2;
    myPCMFormat.mBytesPerFrame = 2;

    AudioFileCreateWithURL((__bridge CFURLRef)self.flippedAudioUrl,
                           kAudioFileCAFType,
                           &myPCMFormat,
                           kAudioFileFlags_EraseFile,
                           &outputAudioFile);

    // set up input file
    AudioFileID inputAudioFile;
    OSStatus theErr = noErr;
    UInt64 fileDataSize = 0;

    AudioStreamBasicDescription theFileFormat;
    UInt32 thePropertySize = sizeof(theFileFormat);

    theErr = AudioFileOpenURL((__bridge CFURLRef)self.recordedAudioUrl,
                              kAudioFileReadPermission, 0, &inputAudioFile);

    thePropertySize = sizeof(fileDataSize);
    theErr = AudioFileGetProperty(inputAudioFile, kAudioFilePropertyAudioDataByteCount,
                                  &thePropertySize, &fileDataSize);

    UInt32 dataSize = fileDataSize;
    void *theData = malloc(dataSize);

    // Read the samples back to front, writing them out front to back.
    // Start at the last 16-bit sample (offset dataSize - 2) and walk
    // backwards to offset 0.
    SInt64 readPoint = (SInt64)dataSize - 2;
    UInt32 writePoint = 0;
    while (readPoint >= 0)
    {
        UInt32 bytesToRead = 2;

        AudioFileReadBytes(inputAudioFile, false, readPoint, &bytesToRead, theData);
        AudioFileWriteBytes(outputAudioFile, false, writePoint, &bytesToRead, theData);

        writePoint += 2;
        readPoint -= 2;
    }

    free(theData);
    AudioFileClose(inputAudioFile);
    AudioFileClose(outputAudioFile);
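One caveat worth adding: stepping two bytes at a time is only frame-aligned because this recording is 16-bit mono. For interleaved 16-bit stereo you would step by mBytesPerFrame (4 bytes) so each left/right pair stays intact. A hypothetical variant, reusing the variables from the snippet above:

    // Hypothetical stereo variant: reverse whole 4-byte frames so the
    // L/R samples within each frame keep their order.
    UInt32 bytesPerFrame = 4;                       // = mBytesPerFrame for 16-bit stereo
    SInt64 readPoint  = (SInt64)dataSize - bytesPerFrame;
    UInt32 writePoint = 0;
    while (readPoint >= 0)
    {
        UInt32 ioBytes = bytesPerFrame;
        AudioFileReadBytes(inputAudioFile, false, readPoint, &ioBytes, theData);
        AudioFileWriteBytes(outputAudioFile, false, writePoint, &ioBytes, theData);
        writePoint += bytesPerFrame;
        readPoint  -= bytesPerFrame;
    }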

Hope this helps.

You don't have to allocate a separate buffer to hold the reversed data — that can cost a fair amount of CPU time, depending on the length of the sound. To play a sound backwards, just make the sampleNumber counter start at totalFramesInFile - 1.

You can modify MixerHost as follows to achieve the desired effect.

Replace

    soundStructArray[audioFile].sampleNumber = 0;

with

    soundStructArray[audioFile].sampleNumber = totalFramesInFile - 1;

Make sampleNumber an SInt32 instead of a UInt32.

Replace the loop that writes out the samples with this one:

    for (UInt32 frameNumber = 0; frameNumber < inNumberFrames; ++frameNumber) {

        outSamplesChannelLeft[frameNumber] = dataInLeft[sampleNumber];
        if (isStereo) outSamplesChannelRight[frameNumber] = dataInRight[sampleNumber];

        if (--sampleNumber < 0) sampleNumber = frameTotalForSound - 1;
    }

This effectively makes it play backwards. Mmm. It's been a while since I last listened to the MixerHost music. I must admit I find it quite pleasing.