Error reading field 'topic_metadata': Error reading array of size 873589, only 41 bytes available

I installed Logstash version 5.2.2 by downloading the zip file onto a virtual machine with a fresh Ubuntu install.

I created a sample config file, logstash-sample.conf, with the following contents:

input{
        stdin{ }
}
output{
        stdout{ }
}

and ran the command $ bin/logstash -f logstash-sample.conf, which worked absolutely fine.

Now, on the same Ubuntu machine, I installed Kafka by following this guide exactly: https://www.digitalocean.com/community/tutorials/how-to-install-apache-kafka-on-ubuntu-14-04, going up through step 7.
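Before wiring Logstash in, it can help to confirm the broker itself works using the console producer and consumer shipped with Kafka. A sketch, assuming Kafka was unpacked to ~/kafka as in that tutorial and that ZooKeeper and the broker are already running:

```shell
# Publish a test message to the topic
echo "Hello, TutorialTopic" | ~/kafka/bin/kafka-console-producer.sh \
    --broker-list localhost:9092 --topic TutorialTopic

# Read it back (old-consumer syntax, as used by Kafka 0.8.x)
~/kafka/bin/kafka-console-consumer.sh \
    --zookeeper localhost:2181 --topic TutorialTopic --from-beginning
```

If these two commands round-trip a message, the broker side is healthy and the problem is between Logstash and Kafka.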

Then I modified the logstash-sample.conf file to contain the following:

input {
        kafka{
                bootstrap_servers => "localhost:9092"
                topics => ["TutorialTopic"]
        }
}
output {
        stdout { codec => rubydebug }
}

This time I get the following error:

sample@sample-VirtualBox:~/Downloads/logstash-5.2.2$ bin/logstash -f logstash-sample.conf

Sending Logstash's logs to /home/rs-switch/Downloads/logstash-5.2.2/logs which is now configured via log4j2.properties
[2017-03-07T00:26:25,629][INFO ][logstash.pipeline        ] Starting pipeline {"id"=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>125}
[2017-03-07T00:26:25,650][INFO ][logstash.pipeline        ] Pipeline main started
[2017-03-07T00:26:26,039][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
log4j:WARN No appenders could be found for logger (org.apache.kafka.clients.consumer.ConsumerConfig).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Exception in thread "Ruby-0-Thread-14: /home/rs-switch/Downloads/logstash-5.2.2/vendor/bundle/jruby/1.9/gems/logstash-input-kafka-5.1.6/lib/logstash/inputs/kafka.rb:229" org.apache.kafka.common.protocol.types.SchemaException: Error reading field 'topic_metadata': Error reading array of size 873589, only 41 bytes available
        at org.apache.kafka.common.protocol.types.Schema.read(org/apache/kafka/common/protocol/types/Schema.java:73)
        at org.apache.kafka.clients.NetworkClient.parseResponse(org/apache/kafka/clients/NetworkClient.java:380)
        at org.apache.kafka.clients.NetworkClient.handleCompletedReceives(org/apache/kafka/clients/NetworkClient.java:449)
        at org.apache.kafka.clients.NetworkClient.poll(org/apache/kafka/clients/NetworkClient.java:269)
        at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(org/apache/kafka/clients/consumer/internals/ConsumerNetworkClient.java:360)
        at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(org/apache/kafka/clients/consumer/internals/ConsumerNetworkClient.java:224)
        at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(org/apache/kafka/clients/consumer/internals/ConsumerNetworkClient.java:192)
        at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(org/apache/kafka/clients/consumer/internals/ConsumerNetworkClient.java:163)
        at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(org/apache/kafka/clients/consumer/internals/AbstractCoordinator.java:179)
        at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(org/apache/kafka/clients/consumer/KafkaConsumer.java:974)
        at org.apache.kafka.clients.consumer.KafkaConsumer.poll(org/apache/kafka/clients/consumer/KafkaConsumer.java:938)
        at java.lang.reflect.Method.invoke(java/lang/reflect/Method.java:498)
        at RUBY.thread_runner(/home/rs-switch/Downloads/logstash-5.2.2/vendor/bundle/jruby/1.9/gems/logstash-input-kafka-5.1.6/lib/logstash/inputs/kafka.rb:239)
        at java.lang.Thread.run(java/lang/Thread.java:745)
[2017-03-07T00:26:28,742][WARN ][logstash.agent           ] stopping pipeline {:id=>"main"}

Can anyone help me fix this issue? I have been trying to set up ELK for the past few weeks without success.

You are most likely hitting a version conflict, which causes this error. See the compatibility matrix in the Logstash Kafka input plugin documentation.
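To see which plugin version (and therefore which Kafka client) your Logstash ships with, you can list it from the install directory. A sketch; the exact output format varies by Logstash version:

```shell
# Prints the installed plugin and its version, e.g. logstash-input-kafka (5.1.6),
# which bundles the Kafka 0.10 client library.
bin/logstash-plugin list --verbose logstash-input-kafka
```

Comparing that client version against your broker version is the quickest way to confirm the mismatch.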

The link you followed to install Kafka has you install version 0.8.2.1, which does not work with a Kafka 0.10 client. Kafka does version checking and maintains backward compatibility, but only when the broker is newer than the client, which is not the case here. I would suggest installing a current version of Kafka: there have been huge improvements since 0.8, and you would miss out on them if you tried to downgrade Logstash instead.
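Upgrading the broker can look roughly like the following. A sketch only: the version (0.10.2.0, current at the time) and the Scala build (2.11) are assumptions, and the archive URL is illustrative rather than canonical:

```shell
# Download and unpack a 0.10.x broker (version/URL are illustrative)
wget https://archive.apache.org/dist/kafka/0.10.2.0/kafka_2.11-0.10.2.0.tgz
tar -xzf kafka_2.11-0.10.2.0.tgz
cd kafka_2.11-0.10.2.0

# Start ZooKeeper and the broker in the background
bin/zookeeper-server-start.sh -daemon config/zookeeper.properties
bin/kafka-server-start.sh -daemon config/server.properties
```

With a 0.10 broker running, the existing kafka input config from the question should connect without the topic_metadata schema error.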

Have a look at the Confluent Platform Quickstart for an easy way to get started.