How to know when an AVSpeechUtterance has finished, so as to continue app activity?

While an AVSpeechUtterance is being spoken, I want to wait until it has finished before doing anything else.

There is a property on AVSpeechSynthesizer that seems to indicate when speech is happening:

isSpeaking

As silly and simple as this sounds, how do I use/check this property to wait until the speech has finished before continuing?

Alternatively:

There is also a delegate, which I don't know how to use either, that can do something when an utterance finishes:

AVSpeechSynthesizerDelegate

There is an answer that says to use it, but that doesn't help me, because I don't know how to use delegates.

Update:

This is how I have set up my speaking class:

import AVFoundation

class CanSpeak: NSObject, AVSpeechSynthesizerDelegate {

    let voices = AVSpeechSynthesisVoice.speechVoices()
    let voiceSynth = AVSpeechSynthesizer()
    var voiceToUse: AVSpeechSynthesisVoice?

    override init(){
        voiceToUse = AVSpeechSynthesisVoice.speechVoices().filter({ $0.name == "Karen" }).first
    }

    func sayThis(_ phrase: String){
        let utterance = AVSpeechUtterance(string: phrase)
        utterance.voice = voiceToUse
        utterance.rate = 0.5
        voiceSynth.speak(utterance)
    }
}

Update 2: A wrong way to do it...

Using the isSpeaking property mentioned above, in the GameScene:

voice.sayThis(targetsToSay)

let initialPause = SKAction.wait(forDuration: 1.0)
let holdWhileSpeaking = SKAction.run {
    while self.voice.voiceSynth.isSpeaking {print("STILL SPEAKING!")}
}
let pauseAfterSpeaking = SKAction.wait(forDuration: 0.5)
let doneSpeaking = SKAction.run {print("TIME TO GET ON WITH IT!!!")}

run(SKAction.sequence(
    [   initialPause,
        holdWhileSpeaking,
        pauseAfterSpeaking,
        doneSpeaking
    ]))

Try the AVSpeechSynthesizerDelegate method:

- (void)speechSynthesizer:(AVSpeechSynthesizer *)synthesizer didFinishSpeechUtterance:(AVSpeechUtterance *)utterance;

Set the delegate for your AVSpeechSynthesizer instance:

voiceSynth.delegate = self

Then implement the didFinish method as follows:

func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer, 
                  didFinish utterance: AVSpeechUtterance) {
    // Handle the finished utterance here.
}
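
If it helps to see the pieces in one place, here is a minimal, self-contained sketch of that wiring. The Speaker class name and its onFinish closure are illustrative, not from the question; the point is simply that didFinish only fires once the synthesizer's delegate has been set.

import AVFoundation

class Speaker: NSObject, AVSpeechSynthesizerDelegate {

    private let synthesizer = AVSpeechSynthesizer()

    /// Called after the current utterance finishes speaking.
    var onFinish: (() -> Void)?

    override init() {
        super.init()
        synthesizer.delegate = self   // without this, didFinish never fires
    }

    func say(_ phrase: String) {
        let utterance = AVSpeechUtterance(string: phrase)
        utterance.rate = 0.5
        synthesizer.speak(utterance)
    }

    // MARK: AVSpeechSynthesizerDelegate
    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer,
                           didFinish utterance: AVSpeechUtterance) {
        onFinish?()
    }
}

// Usage:
// let speaker = Speaker()
// speaker.onFinish = { print("Done speaking, carry on with the app.") }
// speaker.say("Hello")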

The delegate pattern is one of the most commonly used design patterns in object-oriented programming, and it's not as difficult as it might look. In your case, you can simply make your game scene class the delegate of the CanSpeak class.

protocol CanSpeakDelegate: AnyObject {
   func speechDidFinish()
}

Next, set the AVSpeechSynthesizerDelegate on your CanSpeak class, declare a CanSpeakDelegate property, and implement the AVSpeechSynthesizerDelegate delegate function.

class CanSpeak: NSObject, AVSpeechSynthesizerDelegate {

   let voices = AVSpeechSynthesisVoice.speechVoices()
   let voiceSynth = AVSpeechSynthesizer()
   var voiceToUse: AVSpeechSynthesisVoice?

   weak var delegate: CanSpeakDelegate?

   override init(){
      voiceToUse = AVSpeechSynthesisVoice.speechVoices().filter({ $0.name == "Karen" }).first
      super.init()   // self can only be used as a delegate after super.init()
      self.voiceSynth.delegate = self
   }

   func sayThis(_ phrase: String){
      let utterance = AVSpeechUtterance(string: phrase)
      utterance.voice = voiceToUse
      utterance.rate = 0.5
      voiceSynth.speak(utterance)
   }

   func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer, didFinish utterance: AVSpeechUtterance) {
      self.delegate?.speechDidFinish()
   }
}

Finally, in your game scene class, simply conform to CanSpeakDelegate and set it as the delegate of your CanSpeak instance.

class GameScene: NSObject, CanSpeakDelegate {

   let canSpeak = CanSpeak()

   override init() {
      super.init()   // required before assigning self as the delegate
      self.canSpeak.delegate = self
   }

   // This function will be called every time a speech finishes
   func speechDidFinish() {
      // Do something
   }
}
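
And in the SpriteKit setup from the question, speechDidFinish is where the paused activity resumes. The following is only a sketch under assumptions (an SKScene subclass named SpeechGameScene and a placeholder phrase, neither taken from the question); it replaces the busy-wait from Update 2 with the delegate callback:

import SpriteKit

class SpeechGameScene: SKScene, CanSpeakDelegate {

    let canSpeak = CanSpeak()

    override func didMove(to view: SKView) {
        canSpeak.delegate = self
        canSpeak.sayThis("Targets to say")   // placeholder phrase
    }

    // Fired by CanSpeak once the utterance ends; continue the game from here.
    func speechDidFinish() {
        run(SKAction.sequence([
            SKAction.wait(forDuration: 0.5),                      // pauseAfterSpeaking
            SKAction.run { print("TIME TO GET ON WITH IT!!!") }   // doneSpeaking
        ]))
    }
}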