
Why is Spring cloud data flow tutorial not showing expected result

I am trying to complete the first Spring Cloud Data Flow tutorial, but I do not get the result the tutorial shows.

https://dataflow.spring.io/docs/stream-developer-guides/streams/

The tutorial has me use curl against an http source and view the result in the log sink by tailing its stdout file.

I do not see the result. All I see in the log is the startup output.

I tail the log with: docker exec -it skipper tail -f /path/from/stdout/textbox/in/dashboard

I enter: curl http://localhost:20100 -H "Content-type: text/plain" -d "Happy streaming"

All I see is:

2020-10-05 16:30:03.315  INFO 110 --- [           main] o.a.kafka.common.utils.AppInfoParser     : Kafka version : 2.0.1
2020-10-05 16:30:03.316  INFO 110 --- [           main] o.a.kafka.common.utils.AppInfoParser     : Kafka commitId : fa14705e51bd2ce5
2020-10-05 16:30:03.322  INFO 110 --- [           main] o.s.s.c.ThreadPoolTaskScheduler          : Initializing ExecutorService
2020-10-05 16:30:03.338  INFO 110 --- [           main] s.i.k.i.KafkaMessageDrivenChannelAdapter : started org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter@106faf11
2020-10-05 16:30:03.364  INFO 110 --- [container-0-C-1] org.apache.kafka.clients.Metadata        : Cluster ID: 2J0QTxzQQmm2bLxFKgRwmA
2020-10-05 16:30:03.574  INFO 110 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port(s): 20041 (http) with context path ''
2020-10-05 16:30:03.584  INFO 110 --- [           main] o.s.c.s.a.l.s.k.LogSinkKafkaApplication  : Started LogSinkKafkaApplication in 38.086 seconds (JVM running for 40.251)
2020-10-05 16:30:05.852  INFO 110 --- [container-0-C-1] o.a.k.c.c.internals.AbstractCoordinator  : [Consumer clientId=consumer-3, groupId=http-ingest] Discovered group coordinator kafka-broker:9092 (id: 2147482646 rack: null)
2020-10-05 16:30:05.857  INFO 110 --- [container-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator  : [Consumer clientId=consumer-3, groupId=http-ingest] Revoking previously assigned partitions []
2020-10-05 16:30:05.858  INFO 110 --- [container-0-C-1] o.s.c.s.b.k.KafkaMessageChannelBinder  : partitions revoked: []
2020-10-05 16:30:05.858  INFO 110 --- [container-0-C-1] o.a.k.c.c.internals.AbstractCoordinator  : [Consumer clientId=consumer-3, groupId=http-ingest] (Re-)joining group
2020-10-05 16:30:08.943  INFO 110 --- [container-0-C-1] o.a.k.c.c.internals.AbstractCoordinator  : [Consumer clientId=consumer-3, groupId=http-ingest] Successfully joined group with generation 1
2020-10-05 16:30:08.945  INFO 110 --- [container-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator  : [Consumer clientId=consumer-3, groupId=http-ingest] Setting newly assigned partitions [http-ingest.http-0]
2020-10-05 16:30:08.964  INFO 110 --- [container-0-C-1] o.a.k.c.consumer.internals.Fetcher       : [Consumer clientId=consumer-3, groupId=http-ingest] Resetting offset for partition http-ingest.http-0 to offset 0.
2020-10-05 16:30:08.981  INFO 110 --- [container-0-C-1] o.s.c.s.b.k.KafkaMessageChannelBinder  : partitions assigned: [http-ingest.http-0]

No "Happy streaming".

Any suggestions?

Thanks for trying out the developer guide!

From what I can tell, the http | log stream definition was submitted to SCDF without an explicit port. In that case, Spring Boot assigns a random port when the http-source and log-sink applications start.

If you navigate to the http-source application's log, you will see the port listed there, and that is the port to use in your curl command.
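For example, the port appears in the "Tomcat started on port(s)" startup line. As a sketch, you can locate that line in the http-source's log (the path below is the placeholder from the question; substitute the stdout path the dashboard shows for the http application) and extract the number from it:

```shell
# Locate the startup line in the http source's log (path is the question's placeholder):
#   docker exec -it skipper grep "Tomcat started on port" /path/from/stdout/textbox/in/dashboard
#
# Extracting the port number from such a line, using the log sink's startup
# line from the question above as sample input:
line='2020-10-05 16:30:03.574  INFO 110 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port(s): 20041 (http) with context path'
port=$(printf '%s\n' "$line" | sed -n 's/.*Tomcat started on port(s): \([0-9][0-9]*\).*/\1/p')
echo "$port"   # prints 20041
```

The http source prints an equivalent line with its own (different) port; that is the value to put in the curl URL.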

For reference, the guide notes the following about this:

If you use the local Data Flow Server, add the following deployment property to set the port to avoid a port collision.
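The property itself is not quoted above. As a hedged sketch, a deployment property of this shape pins the http source's port; the key follows SCDF's app.&lt;app-name&gt;.&lt;property&gt; convention, and the stream name "http-ingest" is assumed from the consumer group in the log above (check the guide for the exact key and your actual stream name):

```shell
# Hypothetical sketch, entered in the Spring Cloud Data Flow shell:
dataflow:> stream deploy http-ingest --properties "app.http.server.port=9004"
```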

Alternatively, you can deploy the stream with an explicit port in the definition, for example: http --server.port=9004 | log. With that, your curl command would be:

curl http://localhost:9004 -H "Content-type: text/plain" -d "Happy streaming"
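Putting that alternative together end to end, the Data Flow shell session might look like this (the stream name "http-ingest" is assumed; adjust to your setup, and note that with the Docker installation the chosen port must also be reachable from the host):

```shell
# Hypothetical sketch: create and deploy the stream with an explicit port,
# then post a message from the host and watch it arrive in the log sink.
dataflow:> stream create http-ingest --definition "http --server.port=9004 | log" --deploy

# From the host:
curl http://localhost:9004 -H "Content-type: text/plain" -d "Happy streaming"
```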