Dump table from Kafka into MariaDB with KSQL
I'm doing aggregations with KSQL and need to store the output table in MariaDB. I've set up MariaDB and the JdbcSinkConnector. Unfortunately, the sink doesn't work for me.
This is the structure of the table in KSQL that I want to dump into MariaDB:
Field | Type
--------------------------------------------
a | VARCHAR(STRING) (primary key)
b | VARCHAR(STRING) (primary key)
c | VARCHAR(STRING) (primary key)
d | INTEGER
--------------------------------------------
I group by columns a, b and c and do some aggregation, which is column d.
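Roughly, the aggregation looks like this (illustrative names, not my exact statement):

CREATE TABLE my_table AS
  SELECT a, b, c, COUNT(*) AS d
  FROM my_stream
  GROUP BY a, b, c;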
Here is the connector:
create sink connector test with (
'tasks.max' = 1,
'key.converter.schema.registry.url' = 'http://schema-registry:8081',
'value.converter.schema.registry.url' = 'http://schema-registry:8081',
'connector.class' = 'io.confluent.connect.jdbc.JdbcSinkConnector',
'key.converter' = 'org.apache.kafka.connect.storage.StringConverter',
'value.converter' = 'org.apache.kafka.connect.storage.StringConverter',
'key.converter.schemas.enable' = 'false',
'value.converter.schemas.enable' = 'true',
'config.action.reload' = 'restart',
'errors.log.enable' = 'true',
'errors.log.include.messages' = 'true',
'print.key' = 'true',
'errors.tolerance' = 'all',
'topics' = 'my-topic',
'connection.url' = 'jdbc:mysql://mariadb-docker-container:3306/ksql?autoReconnect=true&useSSL=false',
'connection.user' = 'root',
'connection.password' = 'strongest-password-you-have-ever-seen',
'pk.fields' = 'a, b, c',
'pk.mode' = 'record_value',
'delete.enabled' = 'false');
Running this connector gives me the following error:
kafka-connect | [2021-03-08 12:13:51,339] ERROR [TEST|task-0] WorkerSinkTask{id=TEST-0} Task threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask:187)
kafka-connect | org.apache.kafka.connect.errors.ConnectException: Exiting WorkerSinkTask due to unrecoverable exception.
kafka-connect | at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:591)
kafka-connect | at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:326)
kafka-connect | at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:229)
kafka-connect | at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:201)
kafka-connect | at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:185)
kafka-connect | at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:235)
kafka-connect | at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
kafka-connect | at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
kafka-connect | at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
kafka-connect | at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
kafka-connect | at java.base/java.lang.Thread.run(Thread.java:834)
kafka-connect | Caused by: org.apache.kafka.connect.errors.ConnectException: Sink connector 'TEST' is configured with 'delete.enabled=false' and 'pk.mode=record_value' and therefore requires records with a non-null Struct value and non-null Struct schema, but found record at (topic='my-topic',partition=1,offset=30,timestamp=1615205630513) with a String value and string value schema.
kafka-connect | at io.confluent.connect.jdbc.sink.RecordValidator.lambda$requiresValue(RecordValidator.java:86)
kafka-connect | at io.confluent.connect.jdbc.sink.BufferedRecords.add(BufferedRecords.java:82)
kafka-connect | at io.confluent.connect.jdbc.sink.JdbcDbWriter.write(JdbcDbWriter.java:73)
kafka-connect | at io.confluent.connect.jdbc.sink.JdbcSinkTask.put(JdbcSinkTask.java:75)
kafka-connect | at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:563)
kafka-connect | ... 10 more
kafka-connect | [2021-03-08 12:13:51,339] ERROR [TEST|task-0] WorkerSinkTask{id=TEST-0} Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask:188)
kafka-connect | [2021-03-08 12:13:51,339] INFO [TEST|task-0] Stopping task (io.confluent.connect.jdbc.sink.JdbcSinkTask:119)
kafka-connect | [2021-03-08 12:13:51,339] INFO [TEST|task-0] Closing connection #1 to MySql (io.confluent.connect.jdbc.util.CachedConnectionProvider:108)
kafka-connect | [2021-03-08 12:13:51,340] INFO [TEST|task-0] [Consumer clientId=connector-consumer-TEST-0, groupId=connect-TEST] Revoke previously assigned partitions my-topic-1, my-topic-0 (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:307)
kafka-connect | [2021-03-08 12:13:51,340] INFO [TEST|task-0] [Consumer clientId=connector-consumer-TEST-0, groupId=connect-TEST] Member connector-consumer-TEST-0 sending LeaveGroup request to coordinator broker:29092 (id: 2147483646 rack: null) due to the consumer is being closed (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:1010)
I've also tried the AvroConverter and the JsonConverter, but that didn't help: they just threw other (parsing) errors that weren't much more useful. From some research I understand this has something to do with the data structure. But how am I supposed to provide a structure, or convert the table into a "struct", if the table is the result of grouped columns and an aggregation?
Any ideas? In case the connector can't be made to work, I'm also considering simply not using it and writing a small program that reads the tables/topics and writes them to MariaDB instead.
If you're using the JDBC sink, you need to serialise your data in a format that includes the schema, e.g. Avro, Protobuf, or JSON Schema.
In ksqlDB you can specify this when you create the object:
CREATE TABLE MY_TABLE WITH (FORMAT='AVRO') AS
SELECT A,B,C,COUNT(*) AS D
FROM STREAM_FOO
GROUP BY A,B,C;
Note that support for Avro keys was added in ksqlDB 0.15.
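As a quick sanity check (not part of the original answer), you can PRINT the table's backing topic, which by default is named after the table, and confirm that ksqlDB detects Avro for both key and value:

PRINT 'MY_TABLE' FROM BEGINNING LIMIT 5;
-- The first lines of the output report the detected serialisation, e.g.:
-- Key format: AVRO
-- Value format: AVRO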
Now that your data is in Avro, you can create the sink connector with the appropriate converters:
create sink connector test with (
'connector.class' = 'io.confluent.connect.jdbc.JdbcSinkConnector',
'tasks.max' = 1,
'key.converter.schema.registry.url' = 'http://schema-registry:8081',
'value.converter.schema.registry.url' = 'http://schema-registry:8081',
'key.converter' = 'io.confluent.connect.avro.AvroConverter',
'value.converter' = 'io.confluent.connect.avro.AvroConverter',
'key.converter.schemas.enable' = 'false',
'value.converter.schemas.enable' = 'true',
'config.action.reload' = 'restart',
'errors.log.enable' = 'true',
'errors.log.include.messages' = 'true',
'print.key' = 'true',
'errors.tolerance' = 'all',
'topics' = 'my-topic',
'connection.url' = 'jdbc:mysql://mariadb-docker-container:3306/ksql?autoReconnect=true&useSSL=false',
'connection.user' = 'root',
'connection.password' = 'strongest-password-you-have-ever-seen',
'pk.fields' = 'a, b, c',
'pk.mode' = 'record_key',
'delete.enabled' = 'false');
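Once the connector is created, you can check from ksqlDB that it is running (a quick check, not part of the original answer; note the connector name is uppercased to TEST):

DESCRIBE CONNECTOR TEST;
-- Should report state RUNNING for the connector and its task;
-- a FAILED task includes the stack trace you would otherwise
-- have to dig out of the Connect worker log.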
The two problems you were hitting:
- Using the StringConverter means that no schema is present, hence the error message reporting a "String value and string value schema".
- The key of a table (the GROUP BY columns) is written to the key of the Kafka message, so that (record_key) is where your pk.fields should be taken from; a sketch of the matching MariaDB table follows below.
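For reference, a minimal sketch of the matching MariaDB target table, with illustrative column sizes (alternatively, set 'auto.create' = 'true' and let the sink create it; the default table name is the topic name):

CREATE TABLE `my-topic` (
  a VARCHAR(256) NOT NULL,
  b VARCHAR(256) NOT NULL,
  c VARCHAR(256) NOT NULL,
  d BIGINT,                      -- COUNT(*) in ksqlDB produces a BIGINT
  PRIMARY KEY (a, b, c)          -- composite key matching pk.fields
);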