MongoDB as sink connector not capturing data as expected - kafka?

I am currently using a MySQL database as a source connector with the configuration below. I want to monitor changes to the database and send them to MongoDB.

Here is my source connector configuration:

curl -i -X POST -H "Accept:application/json" -H "Content-Type:application/json" localhost:8083/connectors/ -d '''{
  "name": "source_mysql_connector",  
  "config": {  
    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "tasks.max": "1",  
    "database.hostname": "host.docker.internal",  
    "database.port": "3306",
    "database.user": "test",
    "database.password": "$apr1$o7RbW.GvrPIY1",
    "database.server.id": "8111999",  
    "database.server.name": "db_source",  
    "database.include.list": "example",  
    "database.history.kafka.bootstrap.servers": "broker:29092",  
    "database.history.kafka.topic": "schema-changes.example",
    "database.allowPublicKeyRetrieval":"true",
    "include.schema.changes": "true"
  }
}'''
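After POSTing the config, it can help to confirm that the connector and its task actually reached the RUNNING state via the Kafka Connect REST API (the connector name matches the one registered above):

```shell
# Query the state of the registered source connector and its tasks
curl -s localhost:8083/connectors/source_mysql_connector/status
```

If the task is FAILED, the response includes the stack trace, which usually points at the misconfiguration directly.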

Here is my sink connector (MongoDB) configuration:

curl -i -X POST -H "Accept:application/json" -H "Content-Type:application/json" localhost:8083/connectors/ -d '''{
  "name": "sink_mongodb_connector",  
  "config": {  
      "connector.class": "com.mongodb.kafka.connect.MongoSinkConnector",
      "tasks.max":"1",
      "topics":"db_source.example.employees",
      "connection.uri":"mongodb://172.17.0.1:27017/example?w=1&journal=true",
      "database":"example",
      "collection":"employees",
      "value.converter": "io.confluent.connect.avro.AvroConverter",
      "value.converter.schema.registry.url": "http://schema-registry:8081"
  }
}'''
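Before suspecting the sink, it can also help to confirm that Debezium is actually producing Avro records on the topic. Assuming the Confluent CLI tools are available on a host that can reach the broker and schema registry, something like:

```shell
# Consume the change-event topic from the beginning, decoding Avro via the schema registry
kafka-avro-console-consumer \
  --bootstrap-server broker:29092 \
  --topic db_source.example.employees \
  --from-beginning \
  --property schema.registry.url=http://schema-registry:8081
```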

With this I am able to establish the connection, capture data changes, and store them in a MongoDB collection for a table called employees.

But the problem here is that when I check the collection in MongoDB, the documents are saved like this:

{ "_id" : ObjectId("60d0e6939e00e22f274ccac1"), "before" : null, "after" : { "id" : NumberLong(11), "name" : "Steve Shining", "team" : "DevOps", "birthday" : 11477 }, "source" : { "version" : "1.5.0.Final", "connector" : "mysql", "name" : "db_source", "ts_ms" : NumberLong("1624303251000"), "snapshot" : "false", "db" : "example", "sequence" : null, "table" : "employees", "server_id" : NumberLong(6030811), "gtid" : null, "file" : "mysql-bin.000003", "pos" : NumberLong(5445), "row" : 2, "thread" : null, "query" : null }, "op" : "c", "ts_ms" : NumberLong("1624303251190"), "transaction" : null }

{ "_id" : ObjectId("60d0e6939e00e22f274ccac2"), "before" : null, "after" : { "id" : NumberLong(12), "name" : "John", "team" : "Support", "birthday" : 6270 }, "source" : { "version" : "1.5.0.Final", "connector" : "mysql", "name" : "db_source", "ts_ms" : NumberLong("1624303251000"), "snapshot" : "false", "db" : "example", "sequence" : null, "table" : "employees", "server_id" : NumberLong(6030811), "gtid" : null, "file" : "mysql-bin.000003", "pos" : NumberLong(5445), "row" : 3, "thread" : null, "query" : null }, "op" : "c", "ts_ms" : NumberLong("1624303251190"), "transaction" : null }

But my MySQL table looks like this:

mysql> select * from employees;
+----+---------------+---------+------------+
| id | name          | team    | birthday   |
+----+---------------+---------+------------+
|  1 | Peter Smith   | DevOps  | 2003-07-21 |
| 11 | Steve Shining | DevOps  | 2001-06-04 |
| 12 | John          | Support | 1987-03-03 |
+----+---------------+---------+------------+

I want my collection to look like this:

{ "_id" : ObjectId("60d0e6939e00e22f274ccac2"), "name" : "John", "team" : "Support", "birthday" : "1987-03-03 "}

What am I doing wrong here? Even delete messages are stored in the collection like this, so I cannot identify the messages at all. How do I fix it? Even the dates are not stored correctly.

Updated:

curl -i -X POST -H "Accept:application/json" -H "Content-Type:application/json" localhost:8083/connectors/ -d '''{
  "name": "sink_mongodb_connector",  
  "config": {  
      "connector.class": "com.mongodb.kafka.connect.MongoSinkConnector",
      "tasks.max":"1",
      "topics":"db_source.example.employees",
      "connection.uri":"mongodb://172.17.0.1:27017/example?w=1&journal=true",
      "database":"example",
      "collection":"employees",
      "value.converter": "io.confluent.connect.avro.AvroConverter",
      "value.converter.schema.registry.url": "http://schema-registry:8081",
      "transforms": "unwrap",
      "transforms.unwrap.type": "io.debezium.transforms.ExtractNewRecordState",
      "transforms.unwrap.drop.tombstones": "false",
      "transforms.unwrap.delete.handling.mode": "rewrite"
  }
}'''

The problem is not related to Mongo, but rather to the default Debezium event format.

What you are seeing is the before, after, and other CDC event metadata.

"not able to identify the message"

Sure you can, though... the row data is right there: "after" : { "id" : NumberLong(12), "name" : "John", "team" : "Support", "birthday" : 6270 }

You need to extract/flatten the event so that you only get the "after" field.

https://debezium.io/documentation/reference/configuration/event-flattening.html
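Concretely, that means adding Debezium's ExtractNewRecordState SMT to the sink connector config, something along these lines (property names per the event-flattening docs linked above):

```json
"transforms": "unwrap",
"transforms.unwrap.type": "io.debezium.transforms.ExtractNewRecordState",
"transforms.unwrap.drop.tombstones": "false",
"transforms.unwrap.delete.handling.mode": "rewrite"
```

With "delete.handling.mode": "rewrite", delete events keep a flattened record with a __deleted field instead of being dropped, so they remain identifiable in the sink.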


The birthday/date values seem to be a separate issue.
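For what it's worth, those integers are not garbage: Debezium's default mapping for MySQL DATE columns (io.debezium.time.Date) is the number of days since the Unix epoch. The values in the events check out, as a quick conversion with GNU date shows:

```shell
# Debezium DATE values are days since 1970-01-01; convert them back to calendar dates
date -u -d @$((6270*86400)) +%F    # John's birthday  → 1987-03-03
date -u -d @$((11477*86400)) +%F   # Steve's birthday → 2001-06-04
```

So the dates are stored correctly, just in an integer encoding; converting them to strings (e.g. with a TimestampConverter-style transform or Debezium's time.precision.mode settings) is a separate configuration step.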