Kafka connect docker image - Failed to find any class that implements Connector and which name matches ElasticsearchSinkConnector
I've been using the Kafka Connect image confluentinc/cp-kafka-connect for a while. According to the Confluent documentation, this Docker image comes with connector plugins pre-installed, including the Elasticsearch one.
I had previously been running version 5.4.1-ccs and it worked fine: I could add Elasticsearch sink connector configurations and they ran without issue. However, I've tried updating confluentinc/cp-kafka-connect to the latest v6.0.1 and now I get an error:
ConnectException: Failed to find any class that implements Connector and which name matches ElasticsearchSinkConnector
I've read a lot of the documentation on the Confluent site, but it's a bit scattered. My understanding is that the plugins aren't installed, either because they were removed from the new Docker image or because the plugin path is wrong (I'm not sure which).
How do I fix this? (Note: I've also written my own Java plugin, so both need to work.)
Here is my current docker-compose file (again, this works on version 5.4.1-ccs):
kafka-connect-node-1:
  image: confluentinc/cp-kafka-connect:5.4.1 # using old version because of breaking change
  hostname: kafka-connect-node-1
  ports:
    - '8083:8083'
  environment:
    CONNECT_BOOTSTRAP_SERVERS: [MY_SERVER]
    CONNECT_REST_PORT: 8083
    CONNECT_GROUP_ID: compose-connect-group
    CONNECT_CONFIG_STORAGE_TOPIC: connect-configs
    CONNECT_OFFSET_STORAGE_TOPIC: connect-offsets
    CONNECT_STATUS_STORAGE_TOPIC: connect-status
    CONNECT_KEY_CONVERTER: io.confluent.connect.avro.AvroConverter
    CONNECT_KEY_CONVERTER_SCHEMA_REGISTRY_URL: 'http://kafka-schema-registry:8084'
    CONNECT_VALUE_CONVERTER: io.confluent.connect.avro.AvroConverter
    CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: 'http://kafka-schema-registry:8084'
    CONNECT_INTERNAL_KEY_CONVERTER: 'org.apache.kafka.connect.json.JsonConverter'
    CONNECT_INTERNAL_VALUE_CONVERTER: 'org.apache.kafka.connect.json.JsonConverter'
    CONNECT_REST_ADVERTISED_HOST_NAME: 'kafka-connect-node-1'
    CONNECT_LOG4J_ROOT_LOGLEVEL: 'INFO'
    CONNECT_LOG4J_LOGGERS: 'org.apache.kafka.connect.runtime.rest=WARN,org.reflections=ERROR'
    CONNECT_PLUGIN_PATH: '/usr/share/java,/etc/kafka-connect/jars'
    CONNECT_ZOOKEEPER_CONNECT: [MY_ZOOKEEPER]
  volumes:
    - /efs/connector:/etc/kafka-connect/jars/ # mounting my custom JAR
  depends_on:
    - kafka-schema-registry
    - kafka-rest-proxy
Since Confluent Platform 6.0, connectors are no longer bundled with the cp-kafka-connect image and need to be installed separately.
You can build your own image based on cp-kafka-connect-base (a Dockerfile sketch follows after the Compose example below), or you can install connectors at runtime by overriding the image command, like this:
kafka-connect:
  image: confluentinc/cp-kafka-connect-base:6.0.0
  container_name: kafka-connect
  ports:
    - "8083:8083"
  environment:
    CONNECT_BOOTSTRAP_SERVERS: "broker:29092"
    CONNECT_REST_PORT: 8083
    CONNECT_GROUP_ID: compose-connect-group
    CONNECT_CONFIG_STORAGE_TOPIC: _kafka-connect-group-01-configs
    CONNECT_OFFSET_STORAGE_TOPIC: _kafka-connect-group-01-offsets
    CONNECT_STATUS_STORAGE_TOPIC: _kafka-connect-group-01-status
    CONNECT_KEY_CONVERTER: org.apache.kafka.connect.storage.StringConverter
    CONNECT_VALUE_CONVERTER: io.confluent.connect.avro.AvroConverter
    CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: 'http://schema-registry:8081'
    CONNECT_INTERNAL_KEY_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
    CONNECT_INTERNAL_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
    CONNECT_REST_ADVERTISED_HOST_NAME: "kafka-connect"
    CONNECT_LOG4J_ROOT_LOGLEVEL: "INFO"
    CONNECT_LOG4J_LOGGERS: "org.apache.kafka.connect.runtime.rest=WARN,org.reflections=ERROR"
    CONNECT_LOG4J_APPENDER_STDOUT_LAYOUT_CONVERSIONPATTERN: "[%d] %p %X{connector.context}%m (%c:%L)%n"
    CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: "1"
    CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: "1"
    CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: "1"
    CONNECT_PLUGIN_PATH: '/usr/share/java,/connectors,/usr/share/confluent-hub-components/'
  depends_on:
    - broker
    - schema-registry
  volumes:
    - $PWD/data/connectors/:/connectors/
  command:
    - bash
    - -c
    - |
      echo "Installing Connector"
      confluent-hub install --no-prompt confluentinc/kafka-connect-elasticsearch:10.0.1
      #
      echo "Launching Kafka Connect worker"
      /etc/confluent/docker/run &
      #
      sleep infinity
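If you'd rather bake the connectors into the image (the first option above), a minimal Dockerfile sketch could look like the following. The version tags, the custom JAR name, and the destination directory are assumptions; adjust them to your own setup and make sure the destination is on CONNECT_PLUGIN_PATH:

FROM confluentinc/cp-kafka-connect-base:6.0.1

# Install the Elasticsearch sink connector from Confluent Hub at build time
RUN confluent-hub install --no-prompt confluentinc/kafka-connect-elasticsearch:10.0.1

# Copy your own connector plugin into a directory listed in CONNECT_PLUGIN_PATH
# (the JAR name and target directory below are placeholders)
COPY my-custom-connector.jar /usr/share/java/my-custom-connector/

Because the connectors are installed at build time, you don't need to override the container command; the stock entrypoint starts the worker as usual.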
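Whichever route you take, you can confirm the worker actually picked up the plugins via the Connect REST API and then create the sink. The connector config values below are placeholders for illustration (topic name and Elasticsearch URL in particular):

# List the connector plugins the worker can see
curl -s http://localhost:8083/connector-plugins

# Create the Elasticsearch sink connector
curl -s -X POST -H "Content-Type: application/json" http://localhost:8083/connectors -d '{
  "name": "elasticsearch-sink",
  "config": {
    "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
    "topics": "my-topic",
    "connection.url": "http://elasticsearch:9200"
  }
}'

If ElasticsearchSinkConnector doesn't show up in the /connector-plugins output, the worker's plugin.path (CONNECT_PLUGIN_PATH) doesn't include the directory the connector was installed to, which is the same check you'd use for your own Java plugin.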