How do I know when an AVSpeechUtterance has finished, so I can continue app activity?

While an AVSpeechUtterance is being spoken, I want to wait until it has finished before doing anything else.

There is a property on AVSpeechSynthesizer that seems to indicate when speech is in progress:

isSpeaking

At the risk of this sounding silly and simple: how do I use/check this property to wait until the speech has finished before continuing?
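(For reference: one way to check isSpeaking without blocking the main thread would be to poll it on a short Timer and fire a callback once it flips to false. The helper below is only a sketch with made-up names, not anything from AVFoundation, and the delegate approach discussed further down is generally the cleaner answer.)

    import Foundation
    import AVFoundation

    let synth = AVSpeechSynthesizer()

    // Poll isSpeaking a few times per second and call `onFinished` once speech stops.
    func waitUntilDoneSpeaking(_ onFinished: @escaping () -> Void) {
        Timer.scheduledTimer(withTimeInterval: 0.2, repeats: true) { timer in
            if !synth.isSpeaking {
                timer.invalidate()
                onFinished()
            }
        }
    }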

Or:

There is also a delegate, which I have no clue how to use either, that is able to do something when an utterance finishes:

AVSpeechSynthesizerDelegate

There is an answer here that says to use it, but that doesn't help me, because I don't know how to use delegates.

Update:

This is how I've set up my speaking class:

    import AVFoundation

    class CanSpeak: NSObject, AVSpeechSynthesizerDelegate {

        let voices = AVSpeechSynthesisVoice.speechVoices()
        let voiceSynth = AVSpeechSynthesizer()
        var voiceToUse: AVSpeechSynthesisVoice?

        override init() {
            voiceToUse = AVSpeechSynthesisVoice.speechVoices().filter({ $0.name == "Karen" }).first
        }

        func sayThis(_ phrase: String) {
            let utterance = AVSpeechUtterance(string: phrase)
            utterance.voice = voiceToUse
            utterance.rate = 0.5
            voiceSynth.speak(utterance)
        }
    }

Update 2: A wrong way to solve this…

Using the isSpeaking property mentioned above, in the gameScene:

    voice.sayThis(targetsToSay)

    let initialPause = SKAction.wait(forDuration: 1.0)
    let holdWhileSpeaking = SKAction.run {
        while self.voice.voiceSynth.isSpeaking { print("STILL SPEAKING!") }
    }
    let pauseAfterSpeaking = SKAction.wait(forDuration: 0.5)
    let doneSpeaking = SKAction.run { print("TIME TO GET ON WITH IT!!!") }

    run(SKAction.sequence([
        initialPause,
        holdWhileSpeaking,
        pauseAfterSpeaking,
        doneSpeaking
    ]))

The delegation pattern is one of the most commonly used design patterns in object-oriented programming, and it is not as hard as it looks. In your case, you can simply make your class (the game scene) the delegate of the CanSpeak class.

    protocol CanSpeakDelegate {
        func speechDidFinish()
    }

Next, make the CanSpeak class the AVSpeechSynthesizerDelegate, declare a CanSpeakDelegate property, and then implement the AVSpeechSynthesizerDelegate callback:

    class CanSpeak: NSObject, AVSpeechSynthesizerDelegate {

        let voices = AVSpeechSynthesisVoice.speechVoices()
        let voiceSynth = AVSpeechSynthesizer()
        var voiceToUse: AVSpeechSynthesisVoice?
        var delegate: CanSpeakDelegate!

        override init() {
            voiceToUse = AVSpeechSynthesisVoice.speechVoices().filter({ $0.name == "Karen" }).first
            super.init()
            self.voiceSynth.delegate = self
        }

        func sayThis(_ phrase: String) {
            let utterance = AVSpeechUtterance(string: phrase)
            utterance.voice = voiceToUse
            utterance.rate = 0.5
            voiceSynth.speak(utterance)
        }

        func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer, didFinish utterance: AVSpeechUtterance) {
            self.delegate.speechDidFinish()
        }
    }
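One caveat worth noting with this setup: CanSpeak holds its delegate strongly, and the game scene will typically hold CanSpeak strongly, so the two objects retain each other. If that matters for your object lifetimes, the protocol can be constrained to class types and the delegate made weak. A minimal sketch of that variant:

    import AVFoundation

    protocol CanSpeakDelegate: AnyObject {
        func speechDidFinish()
    }

    class CanSpeak: NSObject, AVSpeechSynthesizerDelegate {

        let voiceSynth = AVSpeechSynthesizer()
        // `weak` requires a class-constrained (AnyObject) protocol and breaks the
        // retain cycle between CanSpeak and the scene that owns it.
        weak var delegate: CanSpeakDelegate?

        override init() {
            super.init()
            voiceSynth.delegate = self
        }

        func sayThis(_ phrase: String) {
            let utterance = AVSpeechUtterance(string: phrase)
            utterance.rate = 0.5
            voiceSynth.speak(utterance)
        }

        func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer, didFinish utterance: AVSpeechUtterance) {
            delegate?.speechDidFinish()
        }
    }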

Finally, in your game scene class, simply conform to CanSpeakDelegate and set it as the CanSpeak class's delegate:

    class GameScene: NSObject, CanSpeakDelegate {

        let canSpeak = CanSpeak()

        override init() {
            super.init()
            self.canSpeak.delegate = self
        }

        // This function will be called every time a speech finishes
        func speechDidFinish() {
            // Do something
        }
    }
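In the question's project the scene is an SKScene rather than a plain NSObject, but the wiring is exactly the same: conform to CanSpeakDelegate, set yourself as the delegate, and start the follow-up actions from the callback instead of busy-waiting on isSpeaking. A rough sketch along those lines (the spoken phrase and the action names are just placeholders echoing the question's Update 2):

    import SpriteKit

    class GameScene: SKScene, CanSpeakDelegate {

        let voice = CanSpeak()

        override func didMove(to view: SKView) {
            voice.delegate = self
            voice.sayThis("Targets to say")
        }

        // Called via CanSpeak's delegate once the utterance has finished.
        func speechDidFinish() {
            let pauseAfterSpeaking = SKAction.wait(forDuration: 0.5)
            let doneSpeaking = SKAction.run { print("TIME TO GET ON WITH IT!!!") }
            run(SKAction.sequence([pauseAfterSpeaking, doneSpeaking]))
        }
    }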

Set the delegate on your AVSpeechSynthesizer instance:

voiceSynth.delegate = self

Then implement the didFinish delegate method as follows:

    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer, didFinish utterance: AVSpeechUtterance) {
        // Implement here.
    }
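If exposing a delegate of your own feels like too much ceremony, the same didFinish callback can also be hidden behind a completion closure. The wrapper below is only a sketch; the SpeechCompletionWrapper name and its speak(_:completion:) method are mine, not part of AVFoundation:

    import AVFoundation

    class SpeechCompletionWrapper: NSObject, AVSpeechSynthesizerDelegate {

        private let synth = AVSpeechSynthesizer()
        private var completion: (() -> Void)?

        override init() {
            super.init()
            synth.delegate = self
        }

        // Speak a phrase and run `completion` when the utterance finishes.
        func speak(_ phrase: String, completion: @escaping () -> Void) {
            self.completion = completion
            let utterance = AVSpeechUtterance(string: phrase)
            utterance.rate = 0.5
            synth.speak(utterance)
        }

        func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer, didFinish utterance: AVSpeechUtterance) {
            completion?()
            completion = nil
        }
    }

At the call site this reads as `speaker.speak("Hello") { /* continue here */ }`.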

Try the AVSpeechSynthesizerDelegate method:

 - (void)speechSynthesizer:(AVSpeechSynthesizer *)synthesizer didFinishSpeechUtterance:(AVSpeechUtterance *)utterance;