Swift SFSpeechRecognizer appending existing UITextView content

I'm using SFSpeechRecognizer in my app. Thanks to a dedicated button (start speech recognition), it nicely simplifies things for end users who need to enter comments in a UITextView.

But if the user first types some text manually and then starts speech recognition, the previously typed text is deleted. The same thing happens if the user runs speech recognition twice on the same UITextView (the user dictates the first part of the text, stops recording, then starts recording again): the earlier text is erased.

So I would like to know how to append the text recognized by SFSpeechRecognizer to the existing text.

Here is my code:

func recordAndRecognizeSpeech(){

    if recognitionTask != nil {
        recognitionTask?.cancel()
        recognitionTask = nil
    }
    let audioSession = AVAudioSession.sharedInstance()
    do {
        try audioSession.setCategory(AVAudioSessionCategoryRecord)
        try audioSession.setMode(AVAudioSessionModeMeasurement)
        try audioSession.setActive(true, with: .notifyOthersOnDeactivation)
    } catch {
        print("audioSession properties weren't set because of an error.")
    }
    self.recognitionRequest = SFSpeechAudioBufferRecognitionRequest()
    guard let inputNode = audioEngine.inputNode else {
        fatalError("Audio engine has no input node")
    }
    let recognitionRequest = self.recognitionRequest
    recognitionRequest.shouldReportPartialResults = true

    recognitionTask = speechRecognizer?.recognitionTask(with: recognitionRequest, resultHandler: { (result, error) in
        var isFinal = false
        self.decaration.text = (result?.bestTranscription.formattedString)!

        isFinal = (result?.isFinal)!
        let bottom = NSMakeRange(self.decaration.text.characters.count - 1, 1)
        self.decaration.scrollRangeToVisible(bottom)

        if error != nil || isFinal {
            self.audioEngine.stop()
            inputNode.removeTap(onBus: 0)
            self.recognitionTask = nil
            self.recognitionRequest.endAudio()
            self.oBtSpeech.isEnabled = true
        }
    })
    let recordingFormat = inputNode.outputFormat(forBus: 0)
    inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (buffer, when) in
        self.recognitionRequest.append(buffer)
    }
    audioEngine.prepare()

    do {
        try audioEngine.start()
    } catch {
        print("audioEngine couldn't start because of an error.")
    }

}

I tried changing

self.decaration.text = (result?.bestTranscription.formattedString)!

to

self.decaration.text += (result?.bestTranscription.formattedString)!

but it duplicates each recognized sentence.

Any idea how I can do this?

Try saving the text before starting the recognition system.

func recordAndRecognizeSpeech(){
    // one change here
    let defaultText = self.decaration.text

    if recognitionTask != nil {
        recognitionTask?.cancel()
        recognitionTask = nil
    }
    let audioSession = AVAudioSession.sharedInstance()
    do {
        try audioSession.setCategory(AVAudioSessionCategoryRecord)
        try audioSession.setMode(AVAudioSessionModeMeasurement)
        try audioSession.setActive(true, with: .notifyOthersOnDeactivation)
    } catch {
        print("audioSession properties weren't set because of an error.")
    }
    self.recognitionRequest = SFSpeechAudioBufferRecognitionRequest()
    guard let inputNode = audioEngine.inputNode else {
        fatalError("Audio engine has no input node")
    }
    let recognitionRequest = self.recognitionRequest
    recognitionRequest.shouldReportPartialResults = true

    recognitionTask = speechRecognizer?.recognitionTask(with: recognitionRequest, resultHandler: { (result, error) in
        var isFinal = false
        // one change here
        self.decaration.text = defaultText + " " + (result?.bestTranscription.formattedString)!

        isFinal = (result?.isFinal)!
        let bottom = NSMakeRange(self.decaration.text.characters.count - 1, 1)
        self.decaration.scrollRangeToVisible(bottom)

        if error != nil || isFinal {
            self.audioEngine.stop()
            inputNode.removeTap(onBus: 0)
            self.recognitionTask = nil
            self.recognitionRequest.endAudio()
            self.oBtSpeech.isEnabled = true
        }
    })
    let recordingFormat = inputNode.outputFormat(forBus: 0)
    inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (buffer, when) in
        self.recognitionRequest.append(buffer)
    }
    audioEngine.prepare()

    do {
        try audioEngine.start()
    } catch {
        print("audioEngine couldn't start because of an error.")
    }
}

result?.bestTranscription.formattedString returns the whole recognized phrase so far, which is why you have to reset self.decaration.text each time you get a response from the SFSpeechRecognizer instead of appending to it.
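
To make that behaviour concrete, here is a minimal, standalone sketch (plain Swift, no audio engine involved) showing why += duplicates words while prefixing the saved text does not. The partialResults strings are made-up placeholder data standing in for the successive partial results the recognizer reports:

let defaultText = "Typed by hand."   // text already present in the UITextView
// Each partial result contains the ENTIRE transcription so far, not just the new words.
let partialResults = ["Hello", "Hello world", "Hello world again"]

for partial in partialResults {
    // Wrong: appending every partial result repeats the earlier words
    // textView.text += partial   // -> "...HelloHello worldHello world again"

    // Right: keep the manually typed prefix and replace only the recognized part
    let updatedText = defaultText + " " + partial
    print(updatedText)
}
// Prints:
// Typed by hand. Hello
// Typed by hand. Hello world
// Typed by hand. Hello world again

If defaultText can be empty you may want to skip the separating space, but that is only a cosmetic detail.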