Debezium MongoDB Connector Error: org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error handler
I am trying to deploy a new Debezium connector for MongoDB with transforms. The configuration looks like this:
{
  "name": "mongo_source_connector_autostate",
  "config": {
    "connector.class": "io.debezium.connector.mongodb.MongoDbConnector",
    "tasks.max": 1,
    "initial.sync.max.threads": 4,
    "mongodb.hosts": "rs0/FE0VMC1980:27017",
    "mongodb.name": "mongo",
    "collection.whitelist": "DASMongoDB.*_AutoState",
    "transforms": "unwrap",
    "transforms.unwrap.type": "io.debezium.connector.mongodb.transforms.UnwrapFromMongoDbEnvelope",
    "transforms.sanitize.field.names": true
  }
}
But the connector fails with the following error:
org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error handler
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:178)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:104)
at org.apache.kafka.connect.runtime.WorkerSourceTask.convertTransformedRecord(WorkerSourceTask.java:290)
at org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:316)
at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:240)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:177)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:227)
at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
at java.util.concurrent.FutureTask.run(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Caused by: org.apache.avro.SchemaParseException: Illegal initial character: 10019_AutoState
at org.apache.avro.Schema.validateName(Schema.java:1528)
at org.apache.avro.Schema.access$000(Schema.java:87)
at org.apache.avro.Schema$Name.<init>(Schema.java:675)
at org.apache.avro.Schema.createRecord(Schema.java:212)
at io.confluent.connect.avro.AvroData.fromConnectSchema(AvroData.java:893)
at io.confluent.connect.avro.AvroData.fromConnectSchema(AvroData.java:732)
at io.confluent.connect.avro.AvroData.fromConnectSchema(AvroData.java:726)
at io.confluent.connect.avro.AvroData.fromConnectData(AvroData.java:365)
at io.confluent.connect.avro.AvroConverter.fromConnectData(AvroConverter.java:80)
at org.apache.kafka.connect.storage.Converter.fromConnectData(Converter.java:62)
at org.apache.kafka.connect.runtime.WorkerSourceTask.lambda$convertTransformedRecord(WorkerSourceTask.java:290)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:128)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:162)
... 11 more
I have started the connector in distributed mode with the following worker configuration:
...
key.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=http://localhost:8081
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://localhost:8081
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false
...
Note: I have another connector without any transforms, and it runs just fine.
I would appreciate some help with this. Thanks in advance.
Which Debezium version? If this happens with 1.1/1.2, please raise a Jira issue. Schema names need to be sanitized, and in my view the error in this case comes from the collection name 10019_AutoState: its schema name is not being sanitized, even though a schema name cannot start with a number.
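To illustrate the kind of sanitization meant here, the sketch below (an illustration, not Debezium's actual implementation) replaces characters that are invalid in an Avro name with underscores and prefixes an underscore when the name starts with a digit:

```python
import re

def sanitize_schema_name(name: str) -> str:
    """Sketch of Avro-style schema-name sanitization: replace characters
    outside [A-Za-z0-9_] with '_' and prefix '_' if the first character
    is a digit, so the result satisfies Avro's name grammar."""
    sanitized = re.sub(r"[^A-Za-z0-9_]", "_", name)
    if sanitized and sanitized[0].isdigit():
        sanitized = "_" + sanitized
    return sanitized

print(sanitize_schema_name("10019_AutoState"))  # _10019_AutoState
```

A name sanitized this way would no longer trip Avro's `validateName` check.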
One of your names appears to violate the Avro naming rules. In your case it seems to be this rule:

The name portion of a fullname, record field names, and enum symbols must:
- start with [A-Za-z_]

But 10019_AutoState violates it because it starts with a numeric character. You could change it to something like AutoState10019.
You can find the full list of naming restrictions for records and fields in the Avro specification's section on names.
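The rule above can be checked with a small regex that mirrors the Avro name grammar (a sketch for illustration):

```python
import re

# Avro name grammar: first character [A-Za-z_], then zero or more [A-Za-z0-9_]
AVRO_NAME = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$")

def is_valid_avro_name(name: str) -> bool:
    """Return True if `name` is a legal Avro name portion."""
    return bool(AVRO_NAME.fullmatch(name))

print(is_valid_avro_name("10019_AutoState"))  # False: starts with a digit
print(is_valid_avro_name("AutoState10019"))   # True
```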