speechSynthesis.getVoices() is empty array in Chromium Fedora

Does Chromium support the Speech Synthesis API? Do I need to install voices? If so, how can I do that? I'm using Fedora. Are voices like video codecs, where I need to install an extra package for them to work?

I tried this code:

var msg = new SpeechSynthesisUtterance('I see dead people!');
msg.voice = speechSynthesis.getVoices().filter(function(voice) {
    return voice.name == 'Whisper';
})[0];
speechSynthesis.speak(msg);

from the article Web apps that talk - Introduction to the Speech Synthesis API,

but the function speechSynthesis.getVoices() returns an empty array.

I also tried:

window.speechSynthesis.onvoiceschanged = function() {
    console.log(window.speechSynthesis.getVoices())
};

The function executes, but the array is still empty.

The page https://fedoraproject.org/wiki/Chromium mentions using the --enable-speech-dispatcher flag, but when I use it I get a warning that the flag is unsupported.

Is Speech Synthesis API supported by Chromium?

Yes, the Web Speech API has basic support in the Chromium browser, though there are several issues with both the Chromium and Firefox implementations of the specification; see the open issues under Blink>Speech, Internals>SpeechSynthesis, and Web Speech.

Do I need to install voices? If so, how can I do that? I'm using Fedora. Are voices like video codecs, where I need to install an extra package for them to work?

Yes, you need to install voices. By default, Chromium does not ship with voices to set as the SpeechSynthesisUtterance voice property; see How to capture generated audio from window.speechSynthesis.speak() call?.

You can install speech-dispatcher as the system speech synthesis server and espeak as the speech synthesizer:

$ yum install speech-dispatcher espeak

You can also set up a configuration file for speech-dispatcher in your home folder, to set specific options for speech-dispatcher and the output module you use, for example espeak:

$ spd-conf -u

Launching Chromium with the --enable-speech-dispatcher flag automatically spawns a connection to speech-dispatcher, where you can set the LogLevel between 0 and 5 to review the SSIP communication between the Chromium code and speech-dispatcher.
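For reference, the log level lives in the speechd.conf that spd-conf generates; a minimal sketch, assuming the default user config path and espeak as the output module (both may differ on your system):

```
# ~/.config/speech-dispatcher/speechd.conf (path is an assumption)
# LogLevel ranges from 0 (errors only) to 5 (log everything, including SSIP traffic)
LogLevel 5
DefaultModule espeak
```

After editing the file, restart speech-dispatcher so the new log level takes effect.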

.getVoices() returns its result asynchronously and needs to be called twice

See this electron issue on GitHub: Speech Synthesis: No Voices #586.

window.speechSynthesis.onvoiceschanged = e => {
  const voices = window.speechSynthesis.getVoices();
  // do speech synthesis stuff
  console.log(voices);
}
window.speechSynthesis.getVoices();

Or, composed as an async function, which returns a Promise that resolves with the array of voices:

(async() => {

  const getVoices = (voiceName = "") => {
    return new Promise(resolve => {
      window.speechSynthesis.onvoiceschanged = e => {
        // optionally filter returned voice by `voiceName`
        // resolve(
        //  window.speechSynthesis.getVoices()
        //  .filter(({name}) => /^en.+whisper/.test(name))
        // );
        resolve(window.speechSynthesis.getVoices());
      }
      window.speechSynthesis.getVoices();
    })
  }

  const voices = await getVoices();
  console.log(voices);

})();
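Once the voices array resolves, picking a specific voice is a plain array lookup. A minimal sketch, where `matchVoice` is an illustrative helper name, not part of the Web Speech API; it works on any array of objects with a `name` property:

```javascript
// Hypothetical helper: return the first voice whose name matches a pattern,
// or null when nothing matches.
const matchVoice = (voices, pattern) =>
  voices.find(({ name }) => pattern.test(name)) || null;

// Usage with the resolved voices (browser only):
//   const msg = new SpeechSynthesisUtterance('I see dead people!');
//   msg.voice = matchVoice(voices, /whisper/i);
//   speechSynthesis.speak(msg);
```

Note that the exact voice names depend on the installed synthesizer, so matching with a case-insensitive pattern is more robust than comparing against a literal string like 'Whisper'.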