Debezium Postgres Kafka connector is failing with a Java heap space issue
We have 13 Kafka Debezium Postgres connectors running on a Strimzi KafkaConnect cluster. One of them is failing with Caused by: java.lang.OutOfMemoryError: Java heap space.
We increased the JVM options from 2g to 4g, but it still fails with the same issue.
Full log:
```
java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOfRange(Arrays.java:3664)
at java.lang.String.<init>(String.java:207)
at com.fasterxml.jackson.core.util.TextBuffer.setCurrentAndReturn(TextBuffer.java:696)
at com.fasterxml.jackson.core.json.UTF8StreamJsonParser._finishAndReturnString(UTF8StreamJsonParser.java:2405)
at com.fasterxml.jackson.core.json.UTF8StreamJsonParser.getValueAsString(UTF8StreamJsonParser.java:312)
at io.debezium.document.JacksonReader.parseArray(JacksonReader.java:219)
at io.debezium.document.JacksonReader.parseDocument(JacksonReader.java:131)
at io.debezium.document.JacksonReader.parseArray(JacksonReader.java:213)
at io.debezium.document.JacksonReader.parseDocument(JacksonReader.java:131)
at io.debezium.document.JacksonReader.parse(JacksonReader.java:102)
at io.debezium.document.JacksonReader.read(JacksonReader.java:72)
at io.debezium.connector.postgresql.connection.wal2json.NonStreamingWal2JsonMessageDecoder.processMessage(NonStreamingWal2JsonMessageDecoder.java:54)
at io.debezium.connector.postgresql.connection.PostgresReplicationConnection.deserializeMessages(PostgresReplicationConnection.java:418)
at io.debezium.connector.postgresql.connection.PostgresReplicationConnection.readPending(PostgresReplicationConnection.java:412)
at io.debezium.connector.postgresql.PostgresStreamingChangeEventSource.execute(PostgresStreamingChangeEventSource.java:119)
at io.debezium.pipeline.ChangeEventSourceCoordinator.lambda$start$0(ChangeEventSourceCoordinator.java:99)
at io.debezium.pipeline.ChangeEventSourceCoordinator$$Lambda$4/1759003957.run(Unknown Source)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
```
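Since the cluster is Strimzi-managed, here is a minimal sketch of how the heap increase described above is typically expressed; the resource name and bootstrap address are placeholders, not taken from the actual setup:

```
# Hypothetical Strimzi KafkaConnect resource; names and addresses are placeholders.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: debezium-connect-cluster              # placeholder name
spec:
  replicas: 1
  bootstrapServers: my-kafka-bootstrap:9092   # placeholder bootstrap address
  jvmOptions:
    "-Xms": 2g
    "-Xmx": 4g                                # the heap raise mentioned above (2g -> 4g)
```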
Try tuning the following Debezium properties (see the sketch after this list):
- Increase max.batch.size
- Decrease max.queue.size
- Adjust offset.flush.interval.ms to suit your application's requirements
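A minimal sketch of where those connector-level properties would live, assuming the connectors are managed through Strimzi's KafkaConnector resource; the names, connection settings, and values are illustrative placeholders rather than tuned recommendations:

```
# Hypothetical KafkaConnector resource; all names and values are illustrative.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: inventory-connector                        # placeholder
  labels:
    strimzi.io/cluster: debezium-connect-cluster   # must match the KafkaConnect resource name
spec:
  class: io.debezium.connector.postgresql.PostgresConnector
  tasksMax: 1
  config:
    database.hostname: postgres                    # placeholder connection details
    database.port: 5432
    database.dbname: inventory
    max.batch.size: 4096      # events handed to Kafka Connect per batch
    max.queue.size: 6144      # bound on Debezium's internal queue; keep it above max.batch.size
```

Note that offset.flush.interval.ms is a Kafka Connect worker setting rather than a connector property, so on Strimzi it belongs under the KafkaConnect resource's spec.config, not the KafkaConnector config.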
It looks like you received a very large transaction message and parsing failed due to memory constraints. The wal2json_streaming decoder should split the message into smaller chunks and prevent this issue.
In general, use either the protobuf or pgoutput decoder if possible, as they stream messages from the database per change rather than per transaction.
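As a sketch of what the decoder switch looks like in the same hypothetical KafkaConnector resource as above (pgoutput is built into PostgreSQL 10+, while decoderbufs/protobuf and wal2json need the corresponding server-side plug-in):

```
# Hypothetical fragment of the KafkaConnector spec.config shown earlier.
spec:
  config:
    plugin.name: pgoutput               # or decoderbufs (protobuf); both stream per change
    # plugin.name: wal2json_streaming   # alternative if you must stay on wal2json
```

A replication slot is bound to the output plug-in it was created with, so changing the decoder typically means dropping and recreating the slot.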