org.apache.kafka.common.config.ConfigException: Unknown configuration 'errors.deadletterqueue.topic.name'
I am using the Kafka S3 sink connector to push JSON files into an S3 bucket, but I cannot get the connector to run. I am on the Confluent 5.0 beta30 release.
Here is my connector configuration:
{
  "name": "custdb-s3-connector",
  "config": {
    "connector.class": "io.confluent.connect.s3.S3SinkConnector",
    "key.converter": "org.apache.kafka.connect.json.JsonConverter",
    "tasks.max": "1",
    "topics": "CUST_ORDERS_ENRICHED",
    "s3.region": "us-west-2",
    "s3.bucket.name": "asif-datapipeline-demo",
    "s3.part.size": "5242880",
    "flush.size": "3",
    "storage.class": "io.confluent.connect.s3.storage.S3Storage",
    "format.class": "io.confluent.connect.s3.format.json.JsonFormat",
    "key.converter.schemas.enable": "false",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "value.converter.schemas.enable": "false",
    "partition.field.name": "CUSTOMER_NUM",
    "schema.generator.class": "io.confluent.connect.storage.hive.schema.DefaultSchemaGenerator",
    "partitioner.class": "io.confluent.connect.storage.partitioner.DefaultPartitioner",
    "schema.compatibility": "NONE"
  }
}
In the Connect log we see the following error:
connect | (org.apache.kafka.connect.runtime.errors.LogReporter$LogReporterConfig)
connect | [2018-08-02 18:49:18,307] ERROR Failed to start task custdb-s3-connector-0 (org.apache.kafka.connect.runtime.Worker)
connect | org.apache.kafka.common.config.ConfigException: Unknown configuration 'errors.deadletterqueue.topic.name'
connect | at org.apache.kafka.common.config.AbstractConfig.get(AbstractConfig.java:91)
connect | at org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig.get(ConnectorConfig.java:117)
connect | at org.apache.kafka.connect.runtime.ConnectorConfig.get(ConnectorConfig.java:162)
connect | at org.apache.kafka.common.config.AbstractConfig.getString(AbstractConfig.java:126)
connect | at org.apache.kafka.connect.runtime.Worker.sinkTaskReporters(Worker.java:531)
connect | at org.apache.kafka.connect.runtime.Worker.buildWorkerTask(Worker.java:508)
connect | at org.apache.kafka.connect.runtime.Worker.startTask(Worker.java:451)
connect | at org.apache.kafka.connect.runtime.distributed.DistributedHerder.startTask(DistributedHerder.java:873)
connect | at org.apache.kafka.connect.runtime.distributed.DistributedHerder.access00(DistributedHerder.java:111)
connect | at org.apache.kafka.connect.runtime.distributed.DistributedHerder.call(DistributedHerder.java:888)
connect | at org.apache.kafka.connect.runtime.distributed.DistributedHerder.call(DistributedHerder.java:884)
connect | at java.util.concurrent.FutureTask.run(FutureTask.java:266)
connect | at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
connect | at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
connect | at java.lang.Thread.run(Thread.java:748)
connect | [2018-08-02 18:49:18,293] INFO Instantiated connector mongodb-custdb-connector with version 0.9.0-SNAPSHOT o
You should use the GA release of Confluent 5.0 rather than the beta; this issue is fixed there.
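For context, the `errors.deadletterqueue.*` properties are worker-level error-handling settings introduced by the Connect framework in Apache Kafka 2.0 / Confluent Platform 5.0 (KIP-298); they are not part of the S3 sink's own configuration. A minimal sketch (Python, with the config abridged) illustrating that the failing key is absent from the submitted config, i.e. the beta worker is looking up a default that its own `ConnectorConfig` does not yet register:

```python
import json

# Abridged version of the submitted connector config. The full config from
# the question adds converters, partitioner, and S3 settings, none of which
# set any errors.* property.
connector = json.loads("""
{
  "name": "custdb-s3-connector",
  "config": {
    "connector.class": "io.confluent.connect.s3.S3SinkConnector",
    "topics": "CUST_ORDERS_ENRICHED",
    "s3.bucket.name": "asif-datapipeline-demo",
    "flush.size": "3"
  }
}
""")

# The failing key is a framework-level dead-letter-queue setting (KIP-298).
# It never appears in the submitted config, so the lookup that throws the
# ConfigException happens inside the beta worker itself.
dlq_key = "errors.deadletterqueue.topic.name"
print(dlq_key in connector["config"])  # prints False
```

Since the key comes from the worker and not from your JSON, there is nothing to remove from the connector config; upgrading the Connect worker to the GA release is the fix.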