Confluent kafka-rest ERROR Server died unexpectedly: At least one of bootstrap.servers or zookeeper.connect needs to be configured

I am running Kafka via the Confluent Platform. I followed the steps documented here: https://docs.confluent.io/2.0.0/quickstart.html#quickstart

Start ZooKeeper:

$ sudo ./bin/zookeeper-server-start ./etc/kafka/zookeeper.properties

Start Kafka:

$ sudo ./bin/kafka-server-start ./etc/kafka/server.properties

Start the Schema Registry:

$ sudo ./bin/schema-registry-start ./etc/schema-registry/schema-registry.properties

All of these run fine.

Next I want to run the REST Proxy, as documented here: https://docs.confluent.io/2.0.0/kafka-rest/docs/intro.html#quickstart

$ sudo bin/kafka-rest-start

But this command fails with the following error: (ERROR Server died unexpectedly: (io.confluent.kafkarest.KafkaRestMain:63) java.lang.RuntimeException: Atleast one of bootstrap.servers or zookeeper.connect needs to be configured).

Everything else is running fine, and I don't understand why this error occurs. Can anyone help?

ESDGH-C02K648W:confluent-4.0.0 user$ sudo bin/kafka-rest-start
[2018-01-09 14:44:06,922] INFO KafkaRestConfig values: 
    metric.reporters = []
    client.security.protocol = PLAINTEXT
    bootstrap.servers = 
    response.mediatype.default = application/vnd.kafka.v1+json
    authentication.realm = 
    ssl.keystore.type = JKS
    metrics.jmx.prefix = kafka.rest
    ssl.truststore.password = [hidden]
    id = 
    host.name = 
    consumer.request.max.bytes = 67108864
    client.ssl.truststore.location = 
    ssl.endpoint.identification.algorithm = 
    compression.enable = false
    client.zk.session.timeout.ms = 30000
    client.ssl.keystore.type = JKS
    client.ssl.cipher.suites = 
    client.ssl.keymanager.algorithm = SunX509
    client.ssl.protocol = TLS
    response.mediatype.preferred = [application/vnd.kafka.v1+json, application/vnd.kafka+json, application/json]
    client.sasl.kerberos.ticket.renew.window.factor = 0.8
    ssl.truststore.type = JKS
    consumer.iterator.backoff.ms = 50
    access.control.allow.origin = 
    ssl.truststore.location = 
    ssl.keystore.password = [hidden]
    zookeeper.connect = 
    port = 8082
    client.ssl.keystore.password = [hidden]
    client.ssl.provider = 
    client.init.timeout.ms = 60000
    simpleconsumer.pool.size.max = 25
    simpleconsumer.pool.timeout.ms = 1000
    ssl.client.auth = false
    consumer.iterator.timeout.ms = 1
    client.sasl.kerberos.service.name = 
    ssl.trustmanager.algorithm = 
    authentication.method = NONE
    schema.registry.url = http://localhost:8081
    client.ssl.truststore.type = JKS
    request.logger.name = io.confluent.rest-utils.requests
    ssl.key.password = [hidden]
    client.sasl.kerberos.ticket.renew.jitter = 0.05
    client.ssl.endpoint.identification.algorithm = 
    authentication.roles = [*]
    client.ssl.trustmanager.algorithm = PKIX
    metrics.num.samples = 2
    consumer.threads = 1
    ssl.protocol = TLS
    client.ssl.keystore.location = 
    debug = false
    listeners = []
    ssl.provider = 
    ssl.enabled.protocols = []
    client.sasl.kerberos.min.time.before.relogin = 60000
    producer.threads = 5
    shutdown.graceful.ms = 1000
    ssl.keystore.location = 
    consumer.request.timeout.ms = 1000
    ssl.cipher.suites = []
    client.timeout.ms = 500
    consumer.instance.timeout.ms = 300000
    client.sasl.kerberos.kinit.cmd = /usr/bin/kinit
    client.ssl.key.password = [hidden]
    access.control.allow.methods = 
    ssl.keymanager.algorithm = 
    metrics.sample.window.ms = 30000
    client.ssl.truststore.password = [hidden]
    client.ssl.enabled.protocols = TLSv1.2,TLSv1.1,TLSv1
    kafka.rest.resource.extension.class = 
    client.sasl.mechanism = GSSAPI
 (io.confluent.kafkarest.KafkaRestConfig:175)
[2018-01-09 14:44:06,954] INFO Logging initialized @402ms (org.eclipse.jetty.util.log:186)
[2018-01-09 14:44:07,154] ERROR Server died unexpectedly:  (io.confluent.kafkarest.KafkaRestMain:63)
java.lang.RuntimeException: Atleast one of bootstrap.servers or zookeeper.connect needs to be configured
    at io.confluent.kafkarest.KafkaRestApplication.setupInjectedResources(KafkaRestApplication.java:104)
    at io.confluent.kafkarest.KafkaRestApplication.setupResources(KafkaRestApplication.java:83)
    at io.confluent.kafkarest.KafkaRestApplication.setupResources(KafkaRestApplication.java:45)
    at io.confluent.rest.Application.createServer(Application.java:157)
    at io.confluent.rest.Application.start(Application.java:495)
    at io.confluent.kafkarest.KafkaRestMain.main(KafkaRestMain.java:56)
ESDGH-C02K648W:confluent-4.0.0 user$ 

The kafka-rest-start script takes a properties file as an argument; this is documented further in the quickstart you linked. You must pass ./etc/kafka-rest/kafka-rest.properties on the command line:

bin/kafka-rest-start ./etc/kafka-rest/kafka-rest.properties
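For reference, a minimal sketch of what that properties file needs to contain so the REST Proxy can find the cluster. The host/port values below assume the single-node quickstart defaults from your setup; adjust them to your environment:

```properties
# Sketch of etc/kafka-rest/kafka-rest.properties (quickstart defaults assumed)

# Preferred: point the REST Proxy directly at the Kafka broker(s)
bootstrap.servers=PLAINTEXT://localhost:9092

# Schema Registry, needed for Avro-encoded produce/consume requests
schema.registry.url=http://localhost:8081

# Legacy alternative to bootstrap.servers: broker discovery via ZooKeeper
# zookeeper.connect=localhost:2181
```

Setting either bootstrap.servers or zookeeper.connect satisfies the check that raised the RuntimeException; without the properties file argument, both stay empty (as visible in the KafkaRestConfig dump above) and the server dies on startup.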