What is the best way to provide the LuisRecognizer with predetermined textual context, other than the ITurnContext derived from user input that RecognizeAsync receives?

My first thought for accomplishing the LUIS call was to set the turnContext, but most of its properties are read-only. Nor do I know how to construct from scratch the exact context that would be created from the user's input (mainly the text they typed) in order to provide the context the LuisRecognizer needs.

My second thought, from the waterfall step that calls LuisHelper(stepContext.Context), was to also set it manually. You can't, because stepContext.Result is read-only too...

So my question is: is there a way to give the LuisRecognizer a text phrase that can be appended to the user's answer?

Example: if I ask the user what color of car they are looking for, I already know my intent is CarColor. So if the user says "blue", I want to append to that statement something like "the customer wants the car color blue". That way I can extract the entity "blue" and know that the CarColor intent is meant. Just to be clear about why I want to do this.

Is there any way to take the user's response, append text to it, and then send that as the phrase in the LuisRecognizer call?

Here is some code for reference:

private async Task<DialogTurnResult> ActStepAsync(WaterfallStepContext stepContext, CancellationToken cancellationToken)
{
    stepContext.Values["tester"] = "Travel to Chicago";

    stepContext.Result = "christian";

    // Call LUIS and gather any potential booking details. (Note the TurnContext has the response to the prompt.)
    var bookingDetails = stepContext.Result != null
        ? await LuisHelper.ExecuteLuisQuery(Configuration, Logger, stepContext.Context, cancellationToken)
        : new BookingDetails();

    // In this sample we only have a single Intent we are concerned with. However, typically a scenario
    // will have multiple different Intents each corresponding to starting a different child Dialog.

    // Run the BookingDialog giving it whatever details we have from the LUIS call, it will fill out the remainder.
    return await stepContext.BeginDialogAsync(nameof(BookingDialog), bookingDetails, cancellationToken);
}

How

You should be able to achieve this by setting the Text property of the Activity, which lives under the Context property:

stepContext.Context.Activity.Text = "The phrase that you want to pass through here";

Make this assignment before you call LuisHelper.ExecuteLuisQuery; otherwise your updated Text value will not be sent.
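Applied to the waterfall step from the question, it could look something like the sketch below. This is not runnable on its own (it assumes the Bot Framework sample's LuisHelper, BookingDetails, Configuration, and Logger from the question); the prepended phrase is just the illustrative CarColor example:

```csharp
private async Task<DialogTurnResult> ActStepAsync(WaterfallStepContext stepContext, CancellationToken cancellationToken)
{
    // Prepend the predetermined context to whatever the user actually typed,
    // e.g. "blue" becomes "the customer wants the car color blue".
    stepContext.Context.Activity.Text =
        $"the customer wants the car color {stepContext.Context.Activity.Text}";

    // The recognizer reads the utterance from stepContext.Context.Activity.Text,
    // so the LUIS query now sees the augmented phrase.
    var bookingDetails = stepContext.Result != null
        ? await LuisHelper.ExecuteLuisQuery(Configuration, Logger, stepContext.Context, cancellationToken)
        : new BookingDetails();

    return await stepContext.BeginDialogAsync(nameof(BookingDialog), bookingDetails, cancellationToken);
}
```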


Why this should work

Since LuisHelper.ExecuteLuisQuery(Configuration, Logger, stepContext.Context, cancellationToken) passes stepContext.Context through, behind the scenes this context is passed into the RecognizeAsync call inside of the ExecuteLuisQuery method. Furthermore, the recognizer variable is of type LuisRecognizer; the source code for this class is available here. The line that you are interested in is this one, which shows that the turnContext's Text property is used as the utterance that gets passed through.


Source code explanation / extra information

For future reference (in case the code or links change), a simplified version of the source code is:

public virtual async Task<RecognizerResult> RecognizeAsync(ITurnContext turnContext, CancellationToken cancellationToken)
    => await RecognizeInternalAsync(turnContext, null, null, null, cancellationToken).ConfigureAwait(false);

where RecognizeInternalAsync looks like:

private async Task<RecognizerResult> RecognizeInternalAsync(ITurnContext turnContext, LuisPredictionOptions predictionOptions, Dictionary<string, string> telemetryProperties, Dictionary<string, double> telemetryMetrics, CancellationToken cancellationToken)
{
    var luisPredictionOptions = predictionOptions == null ? _options : MergeDefaultOptionsWithProvidedOptions(_options, predictionOptions);

    BotAssert.ContextNotNull(turnContext);

    if (turnContext.Activity.Type != ActivityTypes.Message)
    {
        return null;
    }

    // !! THIS IS THE IMPORTANT LINE !!
    var utterance = turnContext.Activity?.AsMessageActivity()?.Text;
    RecognizerResult recognizerResult;
    LuisResult luisResult = null;

    if (string.IsNullOrWhiteSpace(utterance))
    {
        recognizerResult = new RecognizerResult
        {
            Text = utterance,
            Intents = new Dictionary<string, IntentScore>() { { string.Empty, new IntentScore() { Score = 1.0 } } },
            Entities = new JObject(),
        };
    }
    else
    {
        luisResult = await _runtime.Prediction.ResolveAsync(
            _application.ApplicationId,
            utterance,
            timezoneOffset: luisPredictionOptions.TimezoneOffset,
            verbose: luisPredictionOptions.IncludeAllIntents,
            staging: luisPredictionOptions.Staging,
            spellCheck: luisPredictionOptions.SpellCheck,
            bingSpellCheckSubscriptionKey: luisPredictionOptions.BingSpellCheckSubscriptionKey,
            log: luisPredictionOptions.Log ?? true,
            cancellationToken: cancellationToken).ConfigureAwait(false);

        recognizerResult = new RecognizerResult
        {
            Text = utterance,
            AlteredText = luisResult.AlteredQuery,
            Intents = LuisUtil.GetIntents(luisResult),
            Entities = LuisUtil.ExtractEntitiesAndMetadata(luisResult.Entities, luisResult.CompositeEntities, luisPredictionOptions.IncludeInstanceData ?? true),
        };
        LuisUtil.AddProperties(luisResult, recognizerResult);
        if (_includeApiResults)
        {
            recognizerResult.Properties.Add("luisResult", luisResult);
        }
    }

    // Log telemetry code
    await OnRecognizerResultAsync(recognizerResult, turnContext, telemetryProperties, telemetryMetrics, cancellationToken).ConfigureAwait(false);

    var traceInfo = JObject.FromObject(
        new
        {
            recognizerResult,
            luisModel = new
            {
                ModelID = _application.ApplicationId,
            },
            luisOptions = luisPredictionOptions,
            luisResult,
        });

    await turnContext.TraceActivityAsync("LuisRecognizer", traceInfo, LuisTraceType, LuisTraceLabel, cancellationToken).ConfigureAwait(false);
    return recognizerResult;
}