How to apply audio effects to a file and write the result to the file system – iOS

I'm building an app that lets users apply audio filters to recorded audio, such as reverb and boost.

I haven't been able to find any viable source of information on how to apply a filter to the file itself, which I need because the processed file will later be uploaded to a server.

I'm currently using AudioKit for visualization, and I'm aware it can do audio processing, but only for playback. Please give me suggestions for further research.

AudioKit has an offline render node that doesn't require iOS 11. Here is an example; the player.schedule(…) and player.play(at:) bits are required because AKAudioPlayer's underlying AVAudioPlayerNode will block the calling thread waiting for the next render if you start it with player.play().

    import UIKit
    import AudioKit
    import AVFoundation

    class ViewController: UIViewController {

        var player: AKAudioPlayer?
        var reverb = AKReverb()
        var boost = AKBooster()
        var offlineRender = AKOfflineRenderNode()

        override func viewDidLoad() {
            super.viewDidLoad()

            guard let url = Bundle.main.url(forResource: "theFunkiestFunkingFunk", withExtension: "mp3") else {
                return
            }
            var audioFile: AKAudioFile?
            do {
                audioFile = try AKAudioFile.init(forReading: url)
                player = try AKAudioPlayer.init(file: audioFile!)
            } catch {
                print(error)
                return
            }
            guard let player = player else {
                return
            }

            // Build the signal chain: player -> reverb -> boost -> offline render node
            player >>> reverb >>> boost >>> offlineRender

            AudioKit.output = offlineRender
            AudioKit.start()

            let docs = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first!
            let dstURL = docs.appendingPathComponent("rendered.caf")

            // Switch the node to offline rendering, schedule the whole file,
            // and start the player at sample time zero so it doesn't block the calling thread
            offlineRender.internalRenderEnabled = false
            player.schedule(from: 0, to: player.duration, avTime: nil)
            let sampleTimeZero = AVAudioTime(sampleTime: 0, atRate: AudioKit.format.sampleRate)
            player.play(at: sampleTimeZero)
            do {
                try offlineRender.renderToURL(dstURL, seconds: player.duration)
            } catch {
                print(error)
                return
            }
            offlineRender.internalRenderEnabled = true
            print("Done! Rendered to " + dstURL.path)
        }
    }

You can use the newly introduced "manual rendering" feature of Audio Unit plugins (see the example below).

If you have to support older macOS / iOS versions, I would be surprised if you couldn't achieve the same thing with AudioKit (even though I haven't tried it myself). For example, use an AKSamplePlayer as your first node (it will read your audio file), then build and connect your effects, and use an AKNodeRecorder as your last node; a rough sketch of that chain follows below.
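Here is a minimal sketch of that chain, assuming AudioKit 4's AKAudioPlayer, AKReverb, AKBooster and AKNodeRecorder APIs (I'm substituting AKAudioPlayer for the suggested AKSamplePlayer as the source node; the function name processedRecorder(for:) and the effect settings are illustrative only). Because AKNodeRecorder captures in real time, the file has to play through to the end before recorder.audioFile holds the processed result:

    import AudioKit
    import AVFoundation

    // Sketch: play a file through reverb + boost and tap the end of the chain
    // with an AKNodeRecorder that writes the processed signal to a temp file.
    func processedRecorder(for sourceURL: URL) throws -> AKNodeRecorder {
        let file = try AKAudioFile(forReading: sourceURL)
        let player = try AKAudioPlayer(file: file)
        let reverb = AKReverb(player, dryWetMix: 0.5)
        let boost = AKBooster(reverb, gain: 1.2)

        // With no file argument the recorder creates its own temporary AKAudioFile,
        // retrievable later through recorder.audioFile.
        let recorder = try AKNodeRecorder(node: boost)

        AudioKit.output = boost
        AudioKit.start()

        try recorder.record()
        player.completionHandler = {
            // Playback finished: stop capturing; the processed audio is in recorder.audioFile.
            recorder.stop()
            AudioKit.stop()
        }
        player.play()

        return recorder
    }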

Example of manual rendering using the new Audio Unit features

    import AVFoundation

    //: ## Source File
    //: Open the audio file to process
    let sourceFile: AVAudioFile
    let format: AVAudioFormat
    do {
        let sourceFileURL = Bundle.main.url(forResource: "mixLoop", withExtension: "caf")!
        sourceFile = try AVAudioFile(forReading: sourceFileURL)
        format = sourceFile.processingFormat
    } catch {
        fatalError("could not open source audio file, \(error)")
    }

    //: ## Engine Setup
    //: player -> reverb -> mainMixer -> output
    //: ### Create and configure the engine and its nodes
    let engine = AVAudioEngine()
    let player = AVAudioPlayerNode()
    let reverb = AVAudioUnitReverb()

    engine.attach(player)
    engine.attach(reverb)

    // set desired reverb parameters
    reverb.loadFactoryPreset(.mediumHall)
    reverb.wetDryMix = 50

    // make connections
    engine.connect(player, to: reverb, format: format)
    engine.connect(reverb, to: engine.mainMixerNode, format: format)

    // schedule source file
    player.scheduleFile(sourceFile, at: nil)

    //: ### Enable offline manual rendering mode
    do {
        // maximum number of frames the engine will be asked to render in any single render call
        let maxNumberOfFrames: AVAudioFrameCount = 4096
        try engine.enableManualRenderingMode(.offline, format: format, maximumFrameCount: maxNumberOfFrames)
    } catch {
        fatalError("could not enable manual rendering mode, \(error)")
    }

    //: ### Start the engine and player
    do {
        try engine.start()
        player.play()
    } catch {
        fatalError("could not start engine, \(error)")
    }

    //: ## Offline Render
    //: ### Create an output buffer and an output file
    //: Output buffer format must be same as engine's manual rendering output format
    let outputFile: AVAudioFile
    do {
        let documentsPath = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)[0]
        let outputURL = URL(fileURLWithPath: documentsPath + "/mixLoopProcessed.caf")
        outputFile = try AVAudioFile(forWriting: outputURL, settings: sourceFile.fileFormat.settings)
    } catch {
        fatalError("could not open output audio file, \(error)")
    }

    // buffer to which the engine will render the processed data
    let buffer: AVAudioPCMBuffer = AVAudioPCMBuffer(pcmFormat: engine.manualRenderingFormat,
                                                    frameCapacity: engine.manualRenderingMaximumFrameCount)!

    //: ### Render loop
    //: Pull the engine for desired number of frames, write the output to the destination file
    while engine.manualRenderingSampleTime < sourceFile.length {
        do {
            let framesToRender = min(buffer.frameCapacity, AVAudioFrameCount(sourceFile.length - engine.manualRenderingSampleTime))
            let status = try engine.renderOffline(framesToRender, to: buffer)
            switch status {
            case .success:
                // data rendered successfully
                try outputFile.write(from: buffer)

            case .insufficientDataFromInputNode:
                // applicable only if using the input node as one of the sources
                break

            case .cannotDoInCurrentContext:
                // engine could not render in the current render call, retry in next iteration
                break

            case .error:
                // error occurred while rendering
                fatalError("render failed")
            }
        } catch {
            fatalError("render failed, \(error)")
        }
    }

    player.stop()
    engine.stop()

    print("Output \(outputFile.url)")
    print("AVAudioEngine offline rendering completed")

You can find more documentation and examples about the AudioUnit format updates here.