How to connect to Confluent Cloud from a Docker container
I have set up a Kafka topic on Confluent Cloud (https://confluent.cloud/) and can connect and send messages to it with the following configuration:
kafka-config.properties:
# Kafka
ssl.endpoint.identification.algorithm=
bootstrap.servers=pkc-4yyd6.us-east1.gcp.confluent.cloud:9092
security.protocol=SASL_SSL
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="uname" password="pwd";
sasl.mechanism=PLAIN
When connecting from the Docker container I get:
Failed to produce: org.apache.kafka.common.errors.SslAuthenticationException: SSL handshake failed
Searching for the above error suggests that an empty ssl.endpoint.identification.algorithm= should fix it, which my config already sets.
Here is my Dockerfile:
FROM ysihaoy/scala-play:2.12.2-2.6.0-sbt-0.13.15
COPY ["build.sbt", "/tmp/build/"]
COPY ["project/plugins.sbt", "project/build.properties", "/tmp/build/project/"]
COPY . /root/app/
WORKDIR /root/app
CMD ["sbt" , "run"]
I build and run the container with:
docker build -t kafkatest .
docker run -it kafkatest
Is additional configuration needed to connect to Confluent Kafka?
I don't hit this problem when building and running locally (without Docker).
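Since the handshake only fails inside the container, one thing worth probing (an assumption, not something the question confirms) is whether the base image can reach the broker and complete a TLS handshake at all, e.g. because of missing CA certificates or DNS problems in the image. Assuming `openssl` is available in the base image, a quick check from inside it:

```shell
# Probe the TLS handshake from inside the image (overrides the sbt CMD).
# A working handshake prints the certificate chain and
# "Verify return code: 0 (ok)"; a trust or network problem shows up here
# before Kafka is involved at all.
docker run --rm -it kafkatest \
  openssl s_client -connect pkc-4yyd6.us-east1.gcp.confluent.cloud:9092
```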
Update:
Here is the Scala source I use to build the properties:
def buildProperties(): Properties = {
  val kafkaPropertiesFile = Source.fromResource("kafka-config.properties")
  val properties: Properties = new Properties
  properties.load(kafkaPropertiesFile.bufferedReader())
  properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer")
  properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.connect.json.JsonSerializer")
  properties
}
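One failure mode worth guarding against here: if `kafka-config.properties` is not actually on the classpath inside the image, `Source.fromResource` fails with an unhelpful late error rather than a clear message. A minimal sketch of a fail-fast variant (`loadProps` is a hypothetical helper, not part of the original code):

```scala
import java.util.Properties

// Hedged sketch: fail fast with a clear message when the properties file
// is missing from the image, instead of a confusing error later.
def loadProps(resource: String): Properties = {
  val in = getClass.getResourceAsStream("/" + resource)
  require(in != null, s"$resource not found on the classpath - was it copied into the image?")
  val props = new Properties
  try props.load(in)
  finally in.close()
  props
}
```

The `require` turns a missing file into an immediate `IllegalArgumentException` naming the resource, which is easier to debug from container logs.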
Update 2:
def buildProperties(): Properties = {
  val kafkaPropertiesFile = Source.fromResource("kafka-config.properties")
  val properties: Properties = new Properties
  properties.load(kafkaPropertiesFile.bufferedReader())
  println("bootstrap.servers:" + properties.get("bootstrap.servers"))
  properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer")
  properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.connect.json.JsonSerializer")
  properties
}
The bootstrap.servers property is found, so the file is being included in the container.
Update 3:
sasl.jaas.config:org.apache.kafka.common.security.plain.PlainLoginModule required username="Q763KBPRI" password="bFehkfL/J6m8L2aukX+A/L59LAYb/bWr"
Update 4:
docker run -it kafkatest --network host
returns the error:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "exec: \"--network\": executable file not found in $PATH": unknown.
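This particular error is a docker CLI quirk rather than a Kafka problem: `docker run` treats everything after the image name as the command to run inside the container, so `--network` was being executed as a program. The option has to come before the image name:

```shell
# docker run [OPTIONS] IMAGE [COMMAND]: options must precede the image name,
# otherwise they are passed into the container as its command
docker run -it --network host kafkatest
```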
Using a different base image resolved the problem, though I'm not sure which difference made the fix. Here is my updated Dockerfile:
ARG OPENJDK_TAG=8u232
FROM openjdk:${OPENJDK_TAG}
ARG SBT_VERSION=1.4.1
# Install sbt
RUN \
mkdir /working/ && \
cd /working/ && \
curl -L -o sbt-$SBT_VERSION.deb https://dl.bintray.com/sbt/debian/sbt-$SBT_VERSION.deb && \
dpkg -i sbt-$SBT_VERSION.deb && \
rm sbt-$SBT_VERSION.deb && \
apt-get update && \
apt-get install sbt && \
cd && \
rm -r /working/ && \
sbt sbtVersion
COPY ["build.sbt", "/tmp/build/"]
COPY ["project/plugins.sbt", "project/build.properties", "/tmp/build/project/"]
COPY . /root/app/
WORKDIR /root/app
CMD ["sbt" , "run"]