AVSpeechSynthesizer iOS text to speech

I'm new to Swift and iOS app development. I'm creating a text-to-speech app using AVSpeechSynthesizer. I want to set a string that is spoken in English, but I would also like that particular string spoken in a different language, such as Arabic. Can I do this with AVSpeechSynthesizer alone, or do I need a translation API for it?

Thanks

I put together an AVSpeechSynthesizer class that handles switching from one language to another. The AVSpeechSynthesizer tutorial on NSHipster is a good starting point for learning this. I didn't tackle the translation side, but you can work that part out; note that AVSpeechSynthesizer only synthesizes speech, it does not translate, so you will need a translation step of your own. I also created a basic translator class that will translate "hello" into "مرحبا". You can see the project here:

TranslateDemo
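The translator in the demo boils down to a dictionary lookup. A minimal pure-Swift sketch of that idea (the `Translator` type and `translate(word:)` method here are assumptions for illustration, not necessarily the demo project's actual API):

```swift
// A toy dictionary-backed translator: it only knows the words in its table
// and falls back to returning the input unchanged for anything else.
struct Translator {
    private let englishToArabic = [
        "hello": "مرحبا"
    ]

    /// Returns the Arabic translation if known, otherwise the input word unchanged.
    func translate(word: String) -> String {
        return englishToArabic[word.lowercased()] ?? word
    }
}

let translator = Translator()
print(translator.translate(word: "Hello")) // مرحبا
print(translator.translate(word: "cat"))   // cat (unknown word passes through)
```

A real app would replace the dictionary with a proper translation API, but the call site in the button actions below stays the same.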

To use the translator, you might wire an action to a button like this:

@IBAction func translateToArabicAction(_ sender: UIButton) {
    // Make sure the text field actually contains text
    if let text = textToTranslateTextField.text, !text.isEmpty {
        let translatedText = translator.translate(word: text.lowercased())
        speechSynthesizer.speak(translatedText, in: Language.arabic.rawValue)
    }
}

@IBAction func translateToEnglishAction(_ sender: UIButton) {
    // Make sure the text field actually contains text
    if let text = textToTranslateTextField.text, !text.isEmpty {
        let translatedText = translator.translate(word: text.lowercased())
        speechSynthesizer.speak(translatedText, in: Language.english.rawValue)
    }
}

The speech synthesizer looks like this:

import AVFoundation

// Using an enum means you don't have to retype language code strings by hand:
// look the codes up once, store them here, and set the language with the
// enum's raw value instead of a string literal.
enum Language: String {
    case english = "en-US"
    case arabic = "ar-SA"
}

class Speaker: NSObject {

    let synth = AVSpeechSynthesizer()

    override init() {
        super.init()
        synth.delegate = self
    }

    func speak(_ announcement: String, in language: String) {
        print("speak announcement in language \(language) called")
        prepareAudioSession()
        let utterance = AVSpeechUtterance(string: announcement.lowercased())
        utterance.voice = AVSpeechSynthesisVoice(language: language)
        synth.speak(utterance)
    }

    private func prepareAudioSession() {
        do {
            try AVAudioSession.sharedInstance().setCategory(.ambient, options: .mixWithOthers)
        } catch {
            print(error)
        }

        do {
            try AVAudioSession.sharedInstance().setActive(true)
        } catch {
            print(error)
        }
    }

    func stop() {
        if synth.isSpeaking {
            synth.stopSpeaking(at: .immediate)
        }
    }
}

extension Speaker: AVSpeechSynthesizerDelegate {
    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer, didStart utterance: AVSpeechUtterance) {
        print("Speaker class started")
    }

    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer, didFinish utterance: AVSpeechUtterance) {
        print("Speaker class finished")
    }
}
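The raw values in the `Language` enum are BCP 47 language codes, which is what `AVSpeechSynthesisVoice(language:)` expects. A small pure-Swift sketch showing how the enum keeps those codes in one place (the `CaseIterable` conformance is added here for the sketch; it isn't in the class above):

```swift
// Mirror of the Language enum above, with CaseIterable added so the
// supported codes can be enumerated in one go.
enum Language: String, CaseIterable {
    case english = "en-US"
    case arabic = "ar-SA"
}

// These raw values are the strings passed to AVSpeechSynthesisVoice(language:).
let supportedCodes = Language.allCases.map { $0.rawValue }
print(supportedCodes) // ["en-US", "ar-SA"]
```

On a device, you can check which codes actually have an installed voice by inspecting `AVSpeechSynthesisVoice.speechVoices()`; if no voice exists for a code, `AVSpeechSynthesisVoice(language:)` returns nil and the utterance falls back to the default voice.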