Bare bones Actions on Google sample?

I need an Actions on Google sample that shows how to use the main Actions on Google JavaScript client library found here:

https://github.com/actions-on-google/actions-on-google-nodejs

I need a sample that shows me how to do that and nothing more.

There is an Actions on Google samples page here:

https://github.com/actions-on-google

I have gone through many of them; the problem is that they use modules and services I don't need. Here is the list of services they use that I don't want and that just get in the way (a sketch of what I am after follows the list):

- Firebase Cloud Functions (I will be hosting my own backend server to manage the conversation with Google)

- Api.ai (or any similar service).  We have our own natural language processing and conversation flow management code

- Console.  Same as above
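
To be concrete, this is roughly what I am after: a minimal sketch, assuming the v1 `ActionsSdkApp` API from the client library linked above and a plain Express server. The route, port, and prompt strings are placeholders of mine, not code from any of the official samples:

```js
'use strict';

const express = require('express');
const bodyParser = require('body-parser');
const ActionsSdkApp = require('actions-on-google').ActionsSdkApp;

const server = express();
server.use(bodyParser.json());

server.post('/', (request, response) => {
  const app = new ActionsSdkApp({ request, response });

  // Invocation ("talk to my app"): ask the opening question.
  function mainIntent(app) {
    app.ask('Hi, say something and I will repeat it.');
  }

  // Every later turn: echo whatever the recognizer heard.
  // This is where our own NLP / conversation flow code would plug in.
  function textIntent(app) {
    app.ask('You said ' + app.getRawInput());
  }

  const actionMap = new Map();
  actionMap.set(app.StandardIntents.MAIN, mainIntent);
  actionMap.set(app.StandardIntents.TEXT, textIntent);
  app.handleRequest(actionMap);
});

server.listen(8080, () => console.log('Fulfillment server listening on 8080'));
```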

Thanks to William DePalo for putting this simple sample up on GitHub for those of us who want to host our own external Node.js server to handle fulfillment requests from Actions on Google:

https://github.com/unclewill/parrot/blob/master/app.js

Here is the post on Google+ where he basically explained how to use it:

https://plus.google.com/u/0/101564662004489946938/posts/BgWMEovmfyC

Here are his general instructions from that post on how to use the code:

"I put this TOY up on Github whose only trick is that it is an assistant app, built using plain vanilla Node and Express in less than 50 lines.It doesn't use Firebase or Google Cloud Functions or API.AI and it doesn't do anything except repeat what it hears. It was intended for a SHORT presentation at a user group meeting which didn't happen.But it should get you started.

Its action package is really overkill for a sample. It defines a custom intent (SCHEDULE_QUERY) which is a no-op in the sample but which I was going to use to bloviate about at the meeting.

At the risk of stating the obvious, it is in the function textIntent() where you should start thinking about how you integrate your NLP. In my app I have a hearAndReply() function in its own module which takes the text the recognizer heard and a session object and which returns text and updated state in the session. If you do that you should be able to target that other assistant with the less capable but somewhat more stable software fairly easily."
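
To make that last paragraph concrete for myself, this is how I read the integration point he describes. hearAndReply() is his name for it, but the module layout and exact signature here are my guesses, not his actual code:

```js
// nlp.js -- hypothetical shape of the hearAndReply() module he describes.
// It takes the text the recognizer heard plus a per-conversation session
// object, updates state on the session, and returns the text to speak back.
module.exports.hearAndReply = function hearAndReply(heardText, session) {
  session.turnCount = (session.turnCount || 0) + 1;

  // Our own NLP / conversation flow management would go here instead of
  // API.AI. For now, just echo like the parrot sample does.
  return 'Turn ' + session.turnCount + ': you said ' + heardText;
};

// Inside textIntent() in the fulfillment server it would be called roughly as:
//   const reply = hearAndReply(app.getRawInput(), session);
//   app.ask(reply);
```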