Flutter speech recognition app - _platformCallHandler call speech.onError 2

I am new to Flutter and I am trying to build a speech-to-text app. I have gone through the documentation and tutorials and done some research on this problem, but I cannot solve it. It would be great if someone could help me with this!

Below is the log output:

C:\abc\app\speachtotext>flutter clean
Deleting build...                                                4,266ms (!)
Deleting .dart_tool...                                              36ms
Deleting Generated.xcconfig...                                       6ms
Deleting flutter_export_environment.sh...                           11ms

C:\abc\app\speachtotext>flutter run
Running "flutter pub get" in speachtotext...                        1.8s
Using hardware rendering with device AOSP on IA Emulator. If you notice graphics artifacts, consider enabling software
rendering with "--enable-software-rendering".
Launching lib\main.dart on AOSP on IA Emulator in debug mode...
Note: C:\Users\abc\AppData\Local\Pub\Cache\hosted\pub.dartlang.org\speech_recognition-0.3.0+1\android\src\main\java\bz\rxla\flutter\speechrecognition\SpeechRecognitionPlugin.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
Running Gradle task 'assembleDebug'...
Running Gradle task 'assembleDebug'... Done                        67.4s
√ Built build\app\outputs\flutter-apk\app-debug.apk.
Installing build\app\outputs\flutter-apk\app.apk...                 2.7s
Waiting for AOSP on IA Emulator to report its views...              17ms
D/EGL_emulation( 8963): eglMakeCurrent: 0xdfa70ac0: ver 3 0 (tinfo 0xe1576e70)
D/eglCodecCommon( 8963): setVertexArrayObject: set vao to 0 (0) 1 0
I/flutter ( 8963): _MyAppState.activateSpeechRecognizer...
Syncing files to device AOSP on IA Emulator...                     681ms
D/SpeechRecognitionPlugin( 8963): Current Locale : en_US

Flutter run key commands.
r Hot reload.
R Hot restart.
h Repeat this help message.
d Detach (terminate "flutter run" but leave application running).
c Clear the screen
q Quit (terminate the application on the device).
An Observatory debugger and profiler on AOSP on IA Emulator is available at: http://127.0.0.1:64049/-_rQJ6XA0Ms=/
I/flutter ( 8963): _platformCallHandler call speech.onCurrentLocale en_US
I/flutter ( 8963): _MyAppState.onCurrentLocale... en_US
I/flutter ( 8963): _MyAppState.start => result true
D/SpeechRecognitionPlugin( 8963): onRmsChanged : -2.12
D/SpeechRecognitionPlugin( 8963): onRmsChanged : -2.12
D/SpeechRecognitionPlugin( 8963): onReadyForSpeech
I/flutter ( 8963): _platformCallHandler call speech.onSpeechAvailability true
D/SpeechRecognitionPlugin( 8963): onRmsChanged : -2.12
D/SpeechRecognitionPlugin( 8963): onRmsChanged : -2.0
D/SpeechRecognitionPlugin( 8963): onRmsChanged : -2.12
D/SpeechRecognitionPlugin( 8963): onError : 2
I/flutter ( 8963): _platformCallHandler call speech.onSpeechAvailability false
I/flutter ( 8963): _platformCallHandler call speech.onError 2
I/flutter ( 8963): Unknowm method speech.onError

I am using the example code shipped with the speech_recognition package, unmodified.

import 'package:flutter/material.dart';
import 'package:speech_recognition/speech_recognition.dart';

void main() {
  runApp(new MyApp());
}

const languages = const [
  const Language('Francais', 'fr_FR'),
  const Language('English', 'en_US'),
  const Language('Русский', 'ru_RU'),
  const Language('Italiano', 'it_IT'),
  const Language('Español', 'es_ES'),
];

class Language {
  final String name;
  final String code;

  const Language(this.name, this.code);
}

class MyApp extends StatefulWidget {
  @override
  _MyAppState createState() => new _MyAppState();
}

class _MyAppState extends State<MyApp> {
  SpeechRecognition _speech;

  bool _speechRecognitionAvailable = false;
  bool _isListening = false;

  String transcription = '';

  //String _currentLocale = 'en_US';
  Language selectedLang = languages.first;

  @override
  initState() {
    super.initState();
    activateSpeechRecognizer();
  }

  // Platform messages are asynchronous, so we initialize in an async method.
  void activateSpeechRecognizer() {
    print('_MyAppState.activateSpeechRecognizer... ');
    _speech = new SpeechRecognition();
    _speech.setAvailabilityHandler(onSpeechAvailability);
    _speech.setCurrentLocaleHandler(onCurrentLocale);
    _speech.setRecognitionStartedHandler(onRecognitionStarted);
    _speech.setRecognitionResultHandler(onRecognitionResult);
    _speech.setRecognitionCompleteHandler(onRecognitionComplete);
    _speech
        .activate()
        .then((res) => setState(() => _speechRecognitionAvailable = res));
  }

  @override
  Widget build(BuildContext context) {
    return new MaterialApp(
      home: new Scaffold(
        appBar: new AppBar(
          title: new Text('SpeechRecognition'),
          actions: [
            new PopupMenuButton<Language>(
              onSelected: _selectLangHandler,
              itemBuilder: (BuildContext context) => _buildLanguagesWidgets,
            )
          ],
        ),
        body: new Padding(
            padding: new EdgeInsets.all(8.0),
            child: new Center(
              child: new Column(
                mainAxisSize: MainAxisSize.min,
                crossAxisAlignment: CrossAxisAlignment.stretch,
                children: [
                  new Expanded(
                      child: new Container(
                          padding: const EdgeInsets.all(8.0),
                          color: Colors.grey.shade200,
                          child: new Text(transcription))),
                  _buildButton(
                    onPressed: _speechRecognitionAvailable && !_isListening
                        ? () => start()
                        : null,
                    label: _isListening
                        ? 'Listening...'
                        : 'Listen (${selectedLang.code})',
                  ),
                  _buildButton(
                    onPressed: _isListening ? () => cancel() : null,
                    label: 'Cancel',
                  ),
                  _buildButton(
                    onPressed: _isListening ? () => stop() : null,
                    label: 'Stop',
                  ),
                ],
              ),
            )),
      ),
    );
  }

  List<CheckedPopupMenuItem<Language>> get _buildLanguagesWidgets => languages
      .map((l) => new CheckedPopupMenuItem<Language>(
            value: l,
            checked: selectedLang == l,
            child: new Text(l.name),
          ))
      .toList();

  void _selectLangHandler(Language lang) {
    setState(() => selectedLang = lang);
  }

  Widget _buildButton({String label, VoidCallback onPressed}) => new Padding(
      padding: new EdgeInsets.all(12.0),
      child: new RaisedButton(
        color: Colors.cyan.shade600,
        onPressed: onPressed,
        child: new Text(
          label,
          style: const TextStyle(color: Colors.white),
        ),
      ));

  void start() => _speech
      .listen(locale: selectedLang.code)
      .then((result) => print('_MyAppState.start => result ${result}'));

  void cancel() =>
      _speech.cancel().then((result) => setState(() => _isListening = result));

  void stop() =>
      _speech.stop().then((result) => setState(() => _isListening = result));

  void onSpeechAvailability(bool result) =>
      setState(() => _speechRecognitionAvailable = result);

  void onCurrentLocale(String locale) {
    print('_MyAppState.onCurrentLocale... $locale');
    setState(
        () => selectedLang = languages.firstWhere((l) => l.code == locale));
  }

  void onRecognitionStarted() => setState(() => _isListening = true);

  void onRecognitionResult(String text) => setState(() => transcription = text);

  void onRecognitionComplete() => setState(() => _isListening = false);
}

Here is my manifest file. It is all the default setup; I only added the permission on top of it.

<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.speachtotext">
    <!-- io.flutter.app.FlutterApplication is an android.app.Application that
         calls FlutterMain.startInitialization(this); in its onCreate method.
         In most cases you can leave this as-is, but if you want to provide
         additional functionality it is fine to subclass or reimplement
         FlutterApplication and put your custom class here. -->

    <uses-permission android:name="android.permission.RECORD_AUDIO" />
    <application
        android:name="io.flutter.app.FlutterApplication"
        android:label="speachtotext"
        android:icon="@mipmap/ic_launcher">
        <activity
            android:name=".MainActivity"
            android:launchMode="singleTop"
            android:theme="@style/LaunchTheme"
            android:configChanges="orientation|keyboardHidden|keyboard|screenSize|smallestScreenSize|locale|layoutDirection|fontScale|screenLayout|density|uiMode"
            android:hardwareAccelerated="true"
            android:windowSoftInputMode="adjustResize">
            <!-- Specifies an Android theme to apply to this Activity as soon as
                 the Android process has started. This theme is visible to the user
                 while the Flutter UI initializes. After that, this theme continues
                 to determine the Window background behind the Flutter UI. -->
            <meta-data
              android:name="io.flutter.embedding.android.NormalTheme"
              android:resource="@style/NormalTheme"
              />
            <!-- Displays an Android View that continues showing the launch screen
                 Drawable until Flutter paints its first frame, then this splash
                 screen fades out. A splash screen is useful to avoid any visual
                 gap between the end of Android's launch screen and the painting of
                 Flutter's first frame. -->
            <meta-data
              android:name="io.flutter.embedding.android.SplashScreenDrawable"
              android:resource="@drawable/launch_background"
              />
            <intent-filter>
                <action android:name="android.intent.action.MAIN"/>
                <category android:name="android.intent.category.LAUNCHER"/>
            </intent-filter>
        </activity>
        <!-- Don't delete the meta-data below.
             This is used by the Flutter tool to generate GeneratedPluginRegistrant.java -->
        <meta-data
            android:name="flutterEmbedding"
            android:value="2" />
    </application>
</manifest>

Here is the pubspec.yaml:

name: speachtotext
description: speach to text app

version: 1.0.0+1

environment:
  sdk: ">=2.7.0 <3.0.0"

dependencies:
  flutter:
    sdk: flutter


  # The following adds the Cupertino Icons font to your application.
  # Use with the CupertinoIcons class for iOS style icons.
  cupertino_icons: ^0.1.3

dev_dependencies:
  flutter_test:
    sdk: flutter
  speech_recognition: ^0.3.0+1

# For information on the generic Dart part of this file, see the
# following page: https://dart.dev/tools/pub/pubspec

# The following section is specific to Flutter.
flutter:

  # The following line ensures that the Material Icons font is
  # included with your application, so that you can use the icons in
  # the material Icons class.
  uses-material-design: true

This is just the stock example code for the speech recognition app; I have not added anything to it. I have also granted the necessary permission in the emulator. When I tap the microphone button I can hear the listening sound, but it immediately throws this error and does not listen to or transcribe anything.
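For reference (my addition, not from the plugin's docs): the numeric code in `speech.onError` comes from Android's `android.speech.SpeechRecognizer` error constants, where 2 is `ERROR_NETWORK`, a common failure on emulators whose Google speech service has no working network access. A small standalone Dart lookup table for decoding these codes (the map name is my own, not part of any plugin API):

```dart
// Error constants from android.speech.SpeechRecognizer.
// The map name is illustrative; it is not part of the plugin API.
const Map<int, String> androidSpeechErrors = {
  1: 'ERROR_NETWORK_TIMEOUT',
  2: 'ERROR_NETWORK', // the "speech.onError 2" seen in the log above
  3: 'ERROR_AUDIO',
  4: 'ERROR_SERVER',
  5: 'ERROR_CLIENT',
  6: 'ERROR_SPEECH_TIMEOUT',
  7: 'ERROR_NO_MATCH',
  8: 'ERROR_RECOGNIZER_BUSY',
  9: 'ERROR_INSUFFICIENT_PERMISSIONS',
};

void main() {
  // Decode the code from the log.
  print(androidSpeechErrors[2]); // ERROR_NETWORK
}
```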

Below is the detailed output of flutter doctor -v. Because VS Code is installed on my machine without the Flutter extension, doctor reports that observation.

C:\abc\app\speachtotext>flutter doctor -v
[√] Flutter (Channel master, 1.20.0-1.0.pre.207, on Microsoft Windows [Version 10.0.17763.1217], locale en-US)
    • Flutter version 1.20.0-1.0.pre.207 at C:\src\flutter
    • Framework revision 91bdf15858 (11 hours ago), 2020-06-24 23:38:01 -0400
    • Engine revision 0c14126211
    • Dart version 2.9.0 (build 2.9.0-18.0.dev d8eb844e5d)


[√] Android toolchain - develop for Android devices (Android SDK version 29.0.3)
    • Android SDK at C:\Users\af81193\AppData\Local\Android\Sdk
    • Platform android-29, build-tools 29.0.3
    • ANDROID_HOME = C:\Users\af81193\AppData\Local\Android\Sdk
    • ANDROID_SDK_ROOT = C:\Users\af81193\AppData\Local\Android\Sdk
    • Java binary at: C:\Android\jre\bin\java
    • Java version OpenJDK Runtime Environment (build 1.8.0_242-release-1644-b01)
    • All Android licenses accepted.

[√] Android Studio (version 4.0)
    • Android Studio at C:\Android
    • Flutter plugin version 46.0.2
    • Dart plugin version 193.7361
    • Java version OpenJDK Runtime Environment (build 1.8.0_242-release-1644-b01)

[!] VS Code, 64-bit edition (version 1.27.1)
    • VS Code at C:\Program Files\Microsoft VS Code
    X Flutter extension not installed; install from
      https://marketplace.visualstudio.com/items?itemName=Dart-Code.flutter

[√] Connected device (1 available)
    • AOSP on IA Emulator • emulator-5554 • android-x86 • Android 9 (API 28) (emulator)

! Doctor found issues in 1 category.

Appreciate any responses!... Thanks!

Update: here is the error message after switching to the speech_to_text plugin, as Sagar suggested.

I/flutter (20582): Received listener status: listening, listening: true  
I/flutter (20582): Received error status: SpeechRecognitionError 
msg: error_network, permanent: true, listening: true

The plugin itself may be the problem, or you could simply test on a real device. I would recommend this alternative plugin: https://pub.dev/packages/speech_to_text
Since the plugin you are using is discontinued, it is likely to have issues. You can look at the example code of the plugin above; it works well.
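A minimal sketch of wiring up speech_to_text, based on that plugin's documented API (`initialize` with `onStatus`/`onError` callbacks, then `listen` with `onResult`); the handler bodies and locale are placeholders, so treat this as a sketch rather than a drop-in replacement for the app above:

```dart
import 'package:speech_to_text/speech_to_text.dart' as stt;

final stt.SpeechToText _speech = stt.SpeechToText();

Future<void> startListening() async {
  // initialize() must complete successfully before listen() is called.
  // The onError callback receives the SpeechRecognitionError objects
  // shown in the update above (e.g. error_network).
  final bool available = await _speech.initialize(
    onStatus: (String status) => print('status: $status'),
    onError: (error) => print('error: ${error.errorMsg}'),
  );
  if (available) {
    await _speech.listen(
      onResult: (result) => print('heard: ${result.recognizedWords}'),
      localeId: 'en_US',
    );
  } else {
    print('Speech recognition unavailable on this device/emulator');
  }
}
```

Note that `error_network` usually means the emulator's Google speech service cannot reach the network, so trying a real device is the quickest way to rule the environment out.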