Dialogflow agent will not speak inside a function for unknown reason
I have a Dialogflow agent that speaks fine. However, when it is inside a function (that calls the Spotify API), it will not say anything I write in "agent.add()".
Stranger still, in my Firebase console the output of the Spotify API call is actually logged via "console.log". This means the Spotify API call works correctly, but the Dialogflow agent cannot read out the result of the Spotify API call - and I have no idea why (relevant code below).
/**
* ---------------------------Google Assistant Fulfillment----------------------------------------------------------------------------------------
* Below is the dialogflow firebase fulfillment code which controls what happens when various intents happen:
*/
exports.dialogflowFirebaseFulfillment = functions.https.onRequest((request, response) => {
const agent = new WebhookClient({request, response});
/**
* Function controls when the user replies 'yes' to 'Would you like to hear an angry song?'.
* Uses the random number within the bounds of the angry songs to select and recommend a song
* for the user.
* @param agent The dialogflow agent
* @returns {Promise<admin.database.DataSnapshot | never>} The song of the desired emotion.
*/
//4
async function playAngrySong(agent) {
return admin.database().ref(`${randomNumber}`).once('value').then((snapshot) => {
// Get the song, artist and spotify uri (with and without the preceding characters) from the Firebase Realtime Database
const song = snapshot.child('song').val();
const artist = snapshot.child('artist').val();
const spotify_uri = snapshot.child('spotifyCode').val();
const just_uri = snapshot.child('spotifyCode').val();
// Agent vocalises the retrieved song to the user
agent.add(`I recommend ${song} by ${artist}`);
var tempo = '';
agent.add(`Here is the tempo for the song (before getAudioAnalysisForTrack call): ${tempo}`);
/**
* Callout to the Spotify api using the spotify-web-api node package. LINK PACKAGE.
* Agent vocalises the analysis extracted on the track.
*/
Spotify.getAudioAnalysisForTrack('4AKUOaCRcoKTFnVI9LtsrN').then(
function (data) {
var analysis = console.log('Analyser Version', data.body.meta.analyzer_version);
var temp = console.log('Track tempo', data.body.track.tempo);
tempo = data.body.track.tempo;
agent.add(
`The track's tempo is, ${tempo}, does this sound good or would you prefer something else?`
);
var textResponse = `The track's tempo is, ${tempo}, does this sound good or would you prefer something else?`;
agent.add(textResponse);
agent.add(`Here is the song's tempo: ${tempo}`);
return;
},
function (err) {
console.error(err);
}
);
// agent.add(`${agentSays}`);
agent.add(`Here is the tempo for the song: ${tempo}`);
});
});
}
So in the code above, Google asks the user whether they would like an angry song recommended. They say 'yes', which runs this function, 'playAngrySong'. A song is selected from the database and the user is told the recommendation, e.g. "I recommend Suck My Kiss by Red Hot Chili Peppers". From this point in the code onward (where it says var tempo), the agent no longer speaks (via text to speech).
These console.log lines are written to the function logs, however:
var analysis = console.log('Analyser Version', data.body.meta.analyzer_version);
var temp = console.log('Track tempo', data.body.track.tempo);
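(This behaviour is consistent with the promise from Spotify.getAudioAnalysisForTrack never being returned from the outer .then() callback: the promise that playAngrySong returns resolves as soon as the database read finishes, so the webhook response is sent before the inner agent.add() calls run, even though the console.log statements still execute afterwards. A minimal, untested sketch of the difference:)
// Untested sketch: returning the inner promise makes the fulfillment library
// wait for the Spotify call before sending the response.
function playAngrySong(agent) {
  return admin.database().ref(`${randomNumber}`).once('value').then((snapshot) => {
    const song = snapshot.child('song').val();
    const artist = snapshot.child('artist').val();
    agent.add(`I recommend ${song} by ${artist}`);
    // The `return` chains the Spotify promise into the promise that
    // playAngrySong returns, so the agent.add() below lands before the reply is sent.
    return Spotify.getAudioAnalysisForTrack('4AKUOaCRcoKTFnVI9LtsrN').then((data) => {
      agent.add(`The track's tempo is ${data.body.track.tempo}.`);
    });
  });
}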
Finally, Google support sent this email in reply to my question (and have not emailed me back since) - does anyone know what I should do based on their advice? I am new to JavaScript, so I tried adding the 'async' keyword in front of the function (as shown in the code here), but I suspect I am wrong that this is the correct way to use it.
Your function returns a Promise<void> while you need a Promise<DatabaseSnapshot>. The admin.database().ref(`${randomNumber}`).once('value') is already resolved within your playAngrySong function. I would refactor your code to something similar to the example below. Note that the code is untested.
/**
* ---------------------------Google Assistant Fulfillment----------------------------------------------------------------------------------------
* Below is the dialogflow firebase fulfillment code which controls what happens when various intents happen:
*/
exports.dialogflowFirebaseFulfillment = functions.https.onRequest((request, response) => {
const agent = new WebhookClient({request, response});
/**
* Function controls when the user replies 'yes' to 'Would you like to hear an angry song?'.
* Uses the random number within the bounds of the angry songs to select and recommend a song
* for the user.
* @returns {Promise<admin.database.DataSnapshot | never>} The song of the desired emotion.
*/
//4
async function playAngrySong(agent) {
let tempo = '';
try{
const snapshot = await admin.database().ref(`${randomNumber}`).once('value');
// Get the song, artist and spotify uri (with and without the preceding characters) from the Firebase Realtime Database
const song = snapshot.child('song').val();
const artist = snapshot.child('artist').val();
const spotify_uri = snapshot.child('spotifyCode').val();
const just_uri = snapshot.child('spotifyCode').val();
agent.add(`I recommend ${song} by ${artist}`);
agent.add(`Here is the tempo for the song (before getAudioAnalysisForTrack call): ${tempo}`);
}catch(exception){
throw {
message:'Failed to read song info',
innerException:exception
};
}
/**
* Callout to the Spotify api using the spotify-web-api node package. LINK PACKAGE.
* Agent vocalises the analysis extracted on the track.
*/
try{
const audioAnalysis = await Spotify.getAudioAnalysisForTrack('4AKUOaCRcoKTFnVI9LtsrN');
console.log('Analyser Version', audioAnalysis.body.meta.analyzer_version);
console.log('Track tempo', audioAnalysis.body.track.tempo);
tempo = audioAnalysis.body.track.tempo;
agent.add(`The track's tempo is, ${tempo}, does this sound good or would you prefer something else?`);
agent.add(`Here is the song's tempo: ${tempo}`);
}catch(exception){
throw {
message:'Failed to connect to spotify',
innerException:exception
};
}
}
playAngrySong(agent)
.then(x=>{
//add your logic
response.status(200).send();
})
.catch(x=>{
//add error handling
response.status(400).send(x.message);
});
});
I think it would be better to break this up into smaller functions (e.g. databaseAccess, SpotifyConnect), but that is beyond the scope of this question.
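(For illustration, a rough, untested sketch of that split, reusing the names from the question; readSongInfo and fetchTempo are illustrative helper names, not from the original code:)
async function readSongInfo() {
  // Reads the recommended song and artist from the Realtime Database.
  const snapshot = await admin.database().ref(`${randomNumber}`).once('value');
  return {
    song: snapshot.child('song').val(),
    artist: snapshot.child('artist').val(),
  };
}

async function fetchTempo(trackId) {
  // Asks the Spotify API for the track's audio analysis and returns its tempo.
  const analysis = await Spotify.getAudioAnalysisForTrack(trackId);
  return analysis.body.track.tempo;
}

async function playAngrySong(agent) {
  const { song, artist } = await readSongInfo();
  agent.add(`I recommend ${song} by ${artist}`);
  const tempo = await fetchTempo('4AKUOaCRcoKTFnVI9LtsrN');
  agent.add(`The track's tempo is ${tempo}, does this sound good or would you prefer something else?`);
}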
In addition to the issues above, we found that the agent never 'reached' the speaking part of the code, because the function has to execute completely before the agent can speak the response to the call. I learned that the API calls must complete within the 5-second response window that Dialogflow gives you, otherwise the program crashes or the agent stays silent. So make sure the intents are well planned out: perhaps make the necessary API calls for future intents in earlier intents, and store the results in class variables for later use - that is what I did and everything works now!
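(A minimal, untested sketch of that pre-fetching pattern, assuming the dialogflow-fulfillment handlers from the question; cachedTempo and suggestSong are illustrative names, not from the original code, and a module-level cache only holds while the same Cloud Functions instance serves the conversation:)
// Untested sketch: start the slow Spotify call in an earlier intent and cache
// the result, so the follow-up intent can reply inside the 5-second window.
let cachedTempo = null;

function suggestSong(agent) {
  agent.add('Would you like to hear an angry song?');
  // Kick off the API call now; the result is cached for the next intent.
  return Spotify.getAudioAnalysisForTrack('4AKUOaCRcoKTFnVI9LtsrN')
    .then((data) => { cachedTempo = data.body.track.tempo; })
    .catch((err) => console.error(err));
}

function playAngrySong(agent) {
  // No slow call needed here; the tempo was cached by the previous intent.
  agent.add(`The track's tempo is ${cachedTempo}, does this sound good or would you prefer something else?`);
}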