Send the parsed intent and slots back to the client from Amazon Lex

The Amazon Lex FAQ mentions that we can send the parsed intent and slots back to the client, so that we can keep the business logic in the client. However, I cannot find anything explicit about this in the Lex documentation.

My use case: send text/voice data to Amazon Lex; Lex then parses the intent and slots and sends a JSON with the intent, slots, and context data back to the client that requested it, instead of passing it to a Lambda or another backend API endpoint.

Can anyone point me to the right way/configuration to do this?

Regards

If I understand correctly, you want your client to receive the LexResponse and handle it inside the client, rather than going through Lambda or a backend API. If that is correct, you can try the Lex-Audio implementation shown below for the voice side of things.
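Before that, a note on the text side: the client can call the Lex runtime directly and receives the parsed intent and slots in the response, with no Lambda attached (in the Lex V1 console this corresponds to setting the intent's fulfillment to "Return parameters to client"). A minimal sketch, assuming the AWS SDK for JavaScript v2 in the browser and reusing the same Settings values as the code below; the bot alias, user id, and input text are placeholders:

// Text path: send input text straight to the Lex runtime and read the
// parsed intent and slots off the response.
AWS.config.region = Settings.AWSRegion;
AWS.config.credentials = new AWS.CognitoIdentityCredentials({
    IdentityPoolId: Settings.AWSIdentityPool
});

var lexruntime = new AWS.LexRuntime();

lexruntime.postText({
    botName: Settings.BotName,
    botAlias: '$LATEST',          // placeholder alias
    userId: 'demo-user',          // any stable per-user identifier
    inputText: 'I would like to order flowers'
}, function (err, data) {
    if (err) { console.error(err); return; }
    // Lex hands the parsed result straight back to the caller.
    console.log(data.intentName, data.slots, data.dialogState);
});

The voice path goes through Lex-Audio: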

// This handles the event when the mic button is clicked in your UI.
scope.audioClick = function () {

    // Cognito credentials for the Lex Runtime service.
    AWS.config.credentials = new AWS.CognitoIdentityCredentials(
        { IdentityPoolId: Settings.AWSIdentityPool },
        { region: Settings.AWSRegion }
    );
    AWS.config.region = Settings.AWSRegion;

    config = {
        lexConfig: { botName: Settings.BotName }
    };

    conversation = new LexAudio.conversation(config, function (state) {
        // State-change callback: update the UI placeholder as the
        // conversation moves between states (e.g. Passive, Listening).
        scope.$apply(function () {
            if (state === "Passive") {
                scope.placeholder = Settings.PlaceholderWithMic;
            } else {
                scope.placeholder = state + "...";
            }
        });
    }, chatbotSuccess, function (error) {
        // Error callback: surface the error text in the UI.
        audTextContent = error;
    }, function (timeDomain, bufferLength) {
        // Audio-visualization callback; not used here.
    });

    // Kick off the conversation (moves the helper out of its passive state).
    conversation.advanceConversation();
};

The success function that is invoked once Lex responds looks like this:

chatbotSuccess = function (data) {
    // PostContent returns the parsed result on the response itself.
    var intent = data.intentName;
    var slots = data.slots;

    // Do what you need with this data.
};
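For reference, assuming Lex-Audio passes the raw PostContent response through, the data delivered to chatbotSuccess looks roughly like this (the field names come from the Lex V1 PostContent API; the values here are invented):

// Illustrative response only; your intents and slots will differ.
var exampleData = {
    intentName: "OrderFlowers",
    slots: { FlowerType: "roses", PickupDate: null },
    sessionAttributes: {},
    inputTranscript: "I would like to order some roses",
    dialogState: "ElicitSlot",
    slotToElicit: "PickupDate",
    message: "What day do you want the roses picked up?"
};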

Hopefully this gives you an idea of what needs to be done. If you want a reference for Lex-Audio, there is a great post on the AWS blog that you should check out: https://aws.amazon.com/blogs/machine-learning/capturing-voice-input-in-a-browser/
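Since the goal is to keep the business logic in the client, the success handler can then simply dispatch on the parsed intent. A hypothetical sketch (the intent name, slot names, and placeOrder helper are all made up):

function handleLexResult(data) {
    switch (data.intentName) {
        case "OrderFlowers":
            // Slot values arrive pre-parsed; hand them to your own logic.
            placeOrder(data.slots.FlowerType, data.slots.PickupDate);
            break;
        default:
            console.log("Unhandled intent:", data.intentName);
    }
}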