Reverse an audio file Swift / Objective-C

Is there a way to reverse and export a .m4a audio file? I found a solution here that reverses an audio track, but it only seems to work with the .caf file format. If using a .caf is the only way, is there a way to convert the .m4a file to .caf first?

Update: In another post I found that AVAssetReader can be used to read audio samples out of an audio file, but I have no idea how to write the samples back in reverse order. The code snippet below is taken straight from that post. Any help would be appreciated. Thanks.

    + (void)reverseAudioTrack:(AVAsset *)audioAsset outputURL:(NSURL *)outputURL {
        NSError *error;

        AVAssetReader *reader = [[AVAssetReader alloc] initWithAsset:audioAsset error:&error];
        if (error) { NSLog(@"%@", error.localizedDescription); }

        AVAssetTrack *track = [[audioAsset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];

        NSMutableDictionary *audioReadSettings = [NSMutableDictionary dictionary];
        [audioReadSettings setValue:[NSNumber numberWithInt:kAudioFormatLinearPCM]
                             forKey:AVFormatIDKey];

        AVAssetReaderTrackOutput *readerOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:track
                                                                                            outputSettings:audioReadSettings];
        [reader addOutput:readerOutput];
        [reader startReading];

        CMSampleBufferRef sample; //= [readerOutput copyNextSampleBuffer];
        NSMutableArray *samples = [[NSMutableArray alloc] init];

        // Get all samples
        while ((sample = [readerOutput copyNextSampleBuffer])) {
            [samples addObject:(__bridge id)sample];
            CFRelease(sample);
        }

        // Process samples in reverse
        AudioChannelLayout acl;
        bzero(&acl, sizeof(acl));
        acl.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo;

        AVAssetWriter *writer = [[AVAssetWriter alloc] initWithURL:outputURL
                                                          fileType:AVFileTypeAppleM4A
                                                             error:&error];
        if (error) { NSLog(@"%@", error.localizedDescription); }

        NSDictionary *writerOutputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                              [NSNumber numberWithInt:kAudioFormatAppleLossless], AVFormatIDKey,
                                              [NSNumber numberWithInt:16], AVEncoderBitDepthHintKey,
                                              [NSNumber numberWithFloat:44100.0], AVSampleRateKey,
                                              [NSNumber numberWithInt:1], AVNumberOfChannelsKey,
                                              [NSData dataWithBytes:&acl length:sizeof(acl)], AVChannelLayoutKey,
                                              nil];

        AVAssetWriterInput *audioWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio
                                                                                  outputSettings:writerOutputSettings];
        [writer addInput:audioWriterInput];
        [writer startWriting];
        [writer startSessionAtSourceTime:CMSampleBufferGetPresentationTimeStamp((__bridge CMSampleBufferRef)samples[0])];

        // (1) Would it work if I loop in reverse here?
        for (NSInteger i = 0; i < samples.count; i++) {
            CMBlockBufferRef buffer = CMSampleBufferGetDataBuffer((__bridge CMSampleBufferRef)samples[i]);
            CMItemCount numSamplesInBuffer = CMSampleBufferGetNumSamples((__bridge CMSampleBufferRef)samples[i]);

            AudioBufferList audioBufferList;
            CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer((__bridge CMSampleBufferRef)samples[i],
                                                                    NULL,
                                                                    &audioBufferList,
                                                                    sizeof(audioBufferList),
                                                                    NULL,
                                                                    NULL,
                                                                    kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment,
                                                                    &buffer);

            for (int bufferCount = 0; bufferCount < audioBufferList.mNumberBuffers; bufferCount++) {
                SInt16 *samples = (SInt16 *)audioBufferList.mBuffers[bufferCount].mData;
                for (int i = 0; i < numSamplesInBuffer; i++) {
                    // amplitude for the sample is samples[i], assuming you have linear pcm to start with
                    // (2) What should I be doing to write the samples into an audio file?
                }
            }
            CFRelease(buffer);
        }
    }

Yes, there is a way to process, and then export, any of the audio file formats that iOS supports.

However, most of these formats (mp3, to name one) are lossy and compressed. You must first decompress the data, apply the transformation, and then re-compress it. Most transformations you would apply to the audio data should be done at the raw PCM level.

Combining those two statements, you can do this in a few passes:

  1. Convert the original file into an audio file that uses kAudioFormatLinearPCM, such as AIFF (see the sketch after this list)
  2. Process that temporary file (reverse its contents)
  3. Convert the temporary file back to the original format
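
Here is a rough sketch of the first pass, using AVAssetReader and AVAssetWriter to decode the source (an .m4a works) into 16-bit big-endian Linear PCM inside an AIFF container. The function name and the inputURL/outputURL parameters are placeholders, and error handling is kept to the bare minimum:

    import AVFoundation

    // Sketch only: decode whatever AVFoundation can read (.m4a included) into a
    // 16-bit big-endian Linear PCM AIFF file, keeping the source's sample rate
    // and channel count. `inputURL` and `outputURL` are placeholder parameters;
    // outputURL is expected to end in .aif/.aiff.
    func exportAsLinearPCM(inputURL: URL, outputURL: URL) throws {
        let asset = AVAsset(url: inputURL)
        guard let track = asset.tracks(withMediaType: .audio).first,
              let formatDescription = track.formatDescriptions.first,
              let asbd = CMAudioFormatDescriptionGetStreamBasicDescription(
                  formatDescription as! CMAudioFormatDescription)?.pointee else { return }

        // The reader hands back decoded (uncompressed) samples.
        let reader = try AVAssetReader(asset: asset)
        let readerOutput = AVAssetReaderTrackOutput(track: track, outputSettings: [
            AVFormatIDKey: kAudioFormatLinearPCM
        ])
        reader.add(readerOutput)

        // The writer re-packs them as 16-bit big-endian integer PCM, which is
        // what AIFF stores. Sources with more than two channels would also
        // need AVChannelLayoutKey.
        let writer = try AVAssetWriter(outputURL: outputURL, fileType: .aiff)
        let writerInput = AVAssetWriterInput(mediaType: .audio, outputSettings: [
            AVFormatIDKey: kAudioFormatLinearPCM,
            AVSampleRateKey: asbd.mSampleRate,
            AVNumberOfChannelsKey: asbd.mChannelsPerFrame,
            AVLinearPCMBitDepthKey: 16,
            AVLinearPCMIsFloatKey: false,
            AVLinearPCMIsBigEndianKey: true,
            AVLinearPCMIsNonInterleaved: false
        ])
        writer.add(writerInput)

        reader.startReading()
        writer.startWriting()
        writer.startSession(atSourceTime: .zero)

        // Simple blocking copy loop; production code would use
        // requestMediaDataWhenReady(on:using:) instead.
        while let buffer = readerOutput.copyNextSampleBuffer() {
            while !writerInput.isReadyForMoreMediaData {
                Thread.sleep(forTimeInterval: 0.05)
            }
            if !writerInput.append(buffer) { break }
        }

        writerInput.markAsFinished()
        let done = DispatchSemaphore(value: 0)
        writer.finishWriting { done.signal() }
        done.wait()
    }

Keeping the source's sample rate and channel count means the writer only has to change the sample format, not resample or downmix.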

Just as when you apply a transformation to a compressed JPEG image, this process will introduce some degradation. The final audio will have suffered, at most, one extra compression cycle.

So the true mathematical answer to this approach is in fact NO.


FYI, here is some starter code in Swift 3. It would need further refinement in order to skip over the file headers.

    import AudioToolbox

    // sourceUrl and destUrl are assumed to be URL values pointing at the input
    // file and the output (reversed) file.
    var outAudioFile: AudioFileID?
    var pcm = AudioStreamBasicDescription(mSampleRate: 44100.0,
                                          mFormatID: kAudioFormatLinearPCM,
                                          mFormatFlags: kAudioFormatFlagIsBigEndian | kAudioFormatFlagIsSignedInteger,
                                          mBytesPerPacket: 2,
                                          mFramesPerPacket: 1,
                                          mBytesPerFrame: 2,
                                          mChannelsPerFrame: 1,
                                          mBitsPerChannel: 16,
                                          mReserved: 0)

    var theErr = AudioFileCreateWithURL(destUrl as CFURL, kAudioFileAIFFType, &pcm, .eraseFile, &outAudioFile)
    if noErr == theErr, let outAudioFile = outAudioFile {
        var inAudioFile: AudioFileID?
        theErr = AudioFileOpenURL(sourceUrl as CFURL, .readPermission, 0, &inAudioFile)
        if noErr == theErr, let inAudioFile = inAudioFile {
            var fileDataSize: UInt64 = 0
            var thePropertySize = UInt32(MemoryLayout<UInt64>.stride)
            theErr = AudioFileGetProperty(inAudioFile, kAudioFilePropertyAudioDataByteCount, &thePropertySize, &fileDataSize)
            if noErr == theErr {
                let dataSize = Int64(fileDataSize)
                let theData = UnsafeMutableRawPointer.allocate(bytes: Int(dataSize),
                                                               alignedTo: MemoryLayout<Int16>.alignment)

                // Walk the source backwards one 16-bit mono sample (2 bytes) at a
                // time, starting at the last sample (dataSize - 2), and write the
                // samples forwards into the destination.
                var readPoint: Int64 = dataSize - 2
                var writePoint: Int64 = 0

                while readPoint >= 0 {
                    var bytesToRead = UInt32(2)
                    AudioFileReadBytes(inAudioFile, false, readPoint, &bytesToRead, theData)
                    AudioFileWriteBytes(outAudioFile, false, writePoint, &bytesToRead, theData)

                    writePoint += 2
                    readPoint -= 2
                }

                theData.deallocate(bytes: Int(dataSize), alignedTo: MemoryLayout<Int16>.alignment)
                AudioFileClose(inAudioFile)
                AudioFileClose(outAudioFile)
            }
        }
    }
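
For the third pass, converting the reversed PCM file back to .m4a can be handed off to AVAssetExportSession with the AVAssetExportPresetAppleM4A preset (AAC in an .m4a container). Again a sketch; the function name and URL parameters are placeholders:

    import AVFoundation

    // Sketch only: re-encode the reversed PCM file (AIFF/CAF) as an .m4a.
    // `pcmURL` and `m4aURL` are placeholder parameters; m4aURL should end in .m4a.
    func exportAsM4A(pcmURL: URL, m4aURL: URL) {
        let asset = AVAsset(url: pcmURL)
        guard let session = AVAssetExportSession(asset: asset,
                                                 presetName: AVAssetExportPresetAppleM4A) else { return }

        session.outputURL = m4aURL
        session.outputFileType = .m4a

        session.exportAsynchronously {
            // status is .completed on success; session.error describes any failure.
            print("Export finished with status \(session.status.rawValue), error: \(String(describing: session.error))")
        }
    }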