Is it possible to make SpeechRecognizer faster?

I'm developing an application that uses the Android SpeechRecognizer. I use it for something simple: I click a button, my SpeechRecognizer starts listening, and I get some results from whatever I said.

Simple, right? Well, my problem is that I need the SpeechRecognizer to be fast. I mean, I click my button, say "hello", and the SpeechRecognizer takes around 3-4 seconds to return an array with the possible results. My questions are:

Is it possible to make the SpeechRecognizer return results faster? Or to make it spend less time closing the listening intent and start processing what it heard sooner? Maybe there's another way to do this that would perform better?

While looking through the library I saw these 3 parameters:

EXTRA_SPEECH_INPUT_MINIMUM_LENGTH_MILLIS:

The minimum length of an utterance.

EXTRA_SPEECH_INPUT_COMPLETE_SILENCE_LENGTH_MILLIS:

The amount of time that it should take after we stop hearing speech to consider the input complete.

EXTRA_SPEECH_INPUT_POSSIBLY_COMPLETE_SILENCE_LENGTH_MILLIS:

The amount of time that it should take after we stop hearing speech to consider the input possibly complete.

http://developer.android.com/intl/es/reference/android/speech/RecognizerIntent.html

I tried all of them, but they didn't work, or maybe I'm not using them correctly. Here is my code:

import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;
import android.os.CountDownTimer;
import android.speech.RecognitionListener;
import android.speech.RecognizerIntent;
import android.speech.SpeechRecognizer;
import android.util.Log;
import android.view.Menu;
import android.view.View;
import android.widget.TextView;

import java.util.ArrayList;

// StartTimerButton and CircleProgressBar are custom views defined elsewhere in this project.
public class MainActivity extends Activity {
private static final String TIME_FORMAT = "%02d:%02d:%02d";
private final String TAG = "MainActivity";

private StartTimerButton mSpeakButton;
private CircleProgressBar mCountdownProgressBar;
private CountDownTimer mCountDownTimer;
private TextView mTimer;
private int mRunSeconds = 0;
private SpeechRecognizer mSpeechRecognizer;
private Intent mSpeechRecognizerIntent;
private boolean mIsListening = false;

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);
    mRunSeconds = 0;
    mTimer = (TextView) findViewById(R.id.timerText);
    mCountdownProgressBar = (CircleProgressBar) findViewById(R.id.progressBar);
    mSpeechRecognizer = SpeechRecognizer.createSpeechRecognizer(this);
    mSpeechRecognizerIntent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
    mSpeechRecognizerIntent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
            RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
    mSpeechRecognizerIntent.putExtra(RecognizerIntent.EXTRA_CALLING_PACKAGE,
            this.getPackageName());

//        mSpeechRecognizerIntent.putExtra(RecognizerIntent.EXTRA_SPEECH_INPUT_MINIMUM_LENGTH_MILLIS,
//                1000);
//        mSpeechRecognizerIntent.putExtra(RecognizerIntent.EXTRA_SPEECH_INPUT_COMPLETE_SILENCE_LENGTH_MILLIS,
//                1000);
//        mSpeechRecognizerIntent.putExtra(RecognizerIntent.EXTRA_SPEECH_INPUT_POSSIBLY_COMPLETE_SILENCE_LENGTH_MILLIS,
//                1000);

    SpeechRecognitionListener listener = new SpeechRecognitionListener();
    mSpeechRecognizer.setRecognitionListener(listener);
    mSpeakButton = (StartTimerButton) findViewById(R.id.btnSpeak);
    mSpeakButton.setReadyState(false);
    mSpeakButton.setOnClickListener(new View.OnClickListener() {

        @Override
        public void onClick(View v) {
            if (mSpeakButton.isReady()) {
                if (!mIsListening)
                    mSpeechRecognizer.startListening(mSpeechRecognizerIntent);
            } else
                mSpeakButton.setReadyState(true);
        }
    });

}     

@Override
public boolean onCreateOptionsMenu(Menu menu) {
    // Inflate the menu; this adds items to the action bar if it is present.
    return true;
}

public void onSpeechResults(ArrayList<String> matches) {
    for (String match : matches) {

        match = match.toLowerCase();
        Log.d(TAG, "Got speech: " + match);

        if (match.contains("go")) {
            //Do Something
            mSpeechRecognizer.stopListening();
        }
        if (match.contains("stop")) {
            //Do Something
            mSpeechRecognizer.stopListening();
        }
    }
}

protected class SpeechRecognitionListener implements RecognitionListener
{

    @Override
    public void onBeginningOfSpeech()
    {
        //Log.d(TAG, "onBeginingOfSpeech");
    }

    @Override
    public void onBufferReceived(byte[] buffer)
    {

    }

    @Override
    public void onEndOfSpeech()
    {
        //Log.d(TAG, "onEndOfSpeech");
    }

    @Override
    public void onError(int error)
    {
        mSpeechRecognizer.startListening(mSpeechRecognizerIntent);

        //Log.d(TAG, "error = " + error);
    }

    @Override
    public void onEvent(int eventType, Bundle params)
    {

    }

    @Override
    public void onPartialResults(Bundle partialResults)
    {
        ArrayList<String> matches = partialResults.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION);
        for (String match : matches) {
            match = match.toLowerCase();
            Log.d(TAG, "onPartialResults : " + match);
        }
    }

    @Override
    public void onReadyForSpeech(Bundle params)
    {
        Log.d(TAG, "onReadyForSpeech"); //$NON-NLS-1$
    }

    @Override
    public void onResults(Bundle results)
    {
        //Log.d(TAG, "onResults"); //$NON-NLS-1$
        ArrayList<String> matches = results.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION);
        onSpeechResults(matches);
        // matches are the return values of speech recognition engine
        // Use these values for whatever you wish to do
    }

    @Override
    public void onRmsChanged(float rmsdB)
    {
    }
}
}

Yes, it is possible to reduce the delay before the recognizer shuts down....

You can't change the amount of time that Google considers to be silence at the end of the user speaking. The EXTRA_SPEECH_* parameters used to work; now they appear to work only sporadically at best, or not at all.

What you can do, however, is use the partial results to detect the word or phrase you want, and then shut down the recognition service manually.

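For this to work, onPartialResults() has to fire in the first place; the code in the question never requests partial results, which are off by default. A small sketch, using the mSpeechRecognizerIntent from the question:

// Partial results are disabled by default, so request them on the recognizer intent.
mSpeechRecognizerIntent.putExtra(RecognizerIntent.EXTRA_PARTIAL_RESULTS, true);
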
Here's an example of how to do this:

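/** Returns true if any stable or unstable partial result starts with the localized "hello" string. */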
public boolean isHelloDetected(@NonNull final Context ctx, @NonNull final Locale loc, @NonNull final Bundle results) {

        boolean helloDetected = false;

        if (!results.isEmpty()) {

            final String hello = ctx.getString(R.string.hello);

            final ArrayList<String> partialData = results.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION);

            /* handles empty string bug */
            if (partialData != null && !partialData.isEmpty()) {
                partialData.removeAll(Collections.singleton(""));

                if (!partialData.isEmpty()) {
                    final ListIterator<String> itr = partialData.listIterator();

                    String vd;
                    while (itr.hasNext()) {
                        vd = itr.next().toLowerCase(loc).trim();

                        if (vd.startsWith(hello)) {
                            helloDetected = true;
                            break;
                        }
                    }
                }
            }

            if (!helloDetected) {
                final ArrayList<String> unstableData = results.getStringArrayList("android.speech.extra.UNSTABLE_TEXT");

                /* handles empty string bug */
                if (unstableData != null && !unstableData.isEmpty()) {
                    unstableData.removeAll(Collections.singleton(""));

                    if (!unstableData.isEmpty()) {
                        final ListIterator<String> itr = unstableData.listIterator();

                        String vd;
                        while (itr.hasNext()) {
                            vd = itr.next().toLowerCase(loc).trim();

                            if (vd.startsWith(hello)) {
                                helloDetected = true;
                                break;
                            }
                        }
                    }
                }
            }
        }

        return helloDetected;
}

You run this method each time you receive something from onPartialResults().

If it returns true, you need to call stopListening() on the main thread (probably via new Handler(Looper.getMainLooper()).post(...).

Be aware, though, that once you've shut down the recognizer, the subsequent and final results you receive in onResults() may not contain "hello", since that word may only have been classified as unstable.

You'll need additional logic to prevent isHelloDetected() from running once hello has been detected (otherwise you'll call stopListening() repeatedly) - a simple boolean flag takes care of this.
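
Putting those pieces together, here is a rough sketch of what the listener side could look like; the mainHandler and helloHandled fields, and passing MainActivity.this / Locale.getDefault() into isHelloDetected(), are illustrative assumptions rather than part of the answer above:

// Sketch only: extra fields assumed to live in MainActivity alongside mSpeechRecognizer.
private final Handler mainHandler = new Handler(Looper.getMainLooper()); // android.os.Handler / android.os.Looper
private boolean helloHandled = false; // prevents calling stopListening() repeatedly

// Inside SpeechRecognitionListener:
@Override
public void onPartialResults(Bundle partialResults) {
    if (!helloHandled && isHelloDetected(MainActivity.this, Locale.getDefault(), partialResults)) {
        helloHandled = true;
        // As noted above, post stopListening() to the main thread.
        mainHandler.post(new Runnable() {
            @Override
            public void run() {
                mSpeechRecognizer.stopListening();
            }
        });
    }
}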

Finally, removing the empty strings with Collections.singleton("") works around an internal bug report, details to replicate here, and using a ListIterator is probably overkill for your example; a simple for loop would suffice.
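
For instance, the stable-results pass could be reduced to something along these lines, with the same behaviour as the ListIterator version:

// Simpler loop over the stable partial results from the example above.
for (String result : partialData) {
    if (result.toLowerCase(loc).trim().startsWith(hello)) {
        helloDetected = true;
        break;
    }
}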

Good luck.