Cannot write log info in log file in Docker
I have a problem generating log information when running in Docker. Writing logs to the log file works fine on localhost, but I don't see any new logs while performing CRUD operations in Docker.
How can I connect the log file (Springboot-Elk.log) to Docker? How can I fix this?
Here is a file showing the screenshot: Link
Here is my project link: My Project
Below is docker-compose.yml:
version: '3.8'
services:
  logstash:
    image: docker.elastic.co/logstash/logstash:7.15.2
    user: root
    command: -f /etc/logstash/conf.d/
    volumes:
      - ./elk/logstash/:/etc/logstash/conf.d/
      - ./Springboot-Elk.log:/tmp/logs/Springboot-Elk.log
    ports:
      - "5000:5000"
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    depends_on:
      - elasticsearch
  filebeat:
    build:
      context: ./filebeat
      dockerfile: Dockerfile
    links:
      - "logstash:logstash"
    volumes:
      - /var/run/docker.sock:/host_docker/docker.sock
      - /var/lib/docker:/host_docker/var/lib/docker
    depends_on:
      - logstash
  kibana:
    image: docker.elastic.co/kibana/kibana:7.15.2
    user: root
    volumes:
      - ./elk/kibana/:/usr/share/kibana/config/
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
    entrypoint: ["./bin/kibana", "--allow-root"]
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.15.2
    user: root
    volumes:
      - ./elk/elasticsearch/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
  app:
    image: 'springbootelk:latest'
    build:
      context: .
      dockerfile: Dockerfile
    container_name: SpringBootElk
    depends_on:
      - db
      - logstash
    ports:
      - '8077:8077'
    environment:
      - SPRING_DATASOURCE_URL=jdbc:mysql://db:3306/springbootexample?useSSL=false&allowPublicKeyRetrieval=true&serverTimezone=Turkey
      - SPRING_DATASOURCE_USERNAME=springexample
      - SPRING_DATASOURCE_PASSWORD=111111
      - SPRING_JPA_HIBERNATE_DDL_AUTO=update
  db:
    container_name: db
    image: 'mysql:latest'
    ports:
      - "3366:3306"
    restart: always
    environment:
      MYSQL_DATABASE: ${MYSQL_DATABASE}
      MYSQL_USER: ${MYSQL_USER}
      MYSQL_PASSWORD: ${MYSQL_PASSWORD}
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
    volumes:
      - db-data:/var/lib/mysql

# Volumes
volumes:
  db-data:
Below is logstash.conf:
input {
  beats {
    port => 5000
  }
  file {
    path => "/tmp/logs/Springboot-Elk.log"
    sincedb_path => "/dev/null"
    start_position => "beginning"
  }
}

output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    hosts => "elasticsearch:9200"
    index => "dockerlogs"
  }
}
The filebeat.yml file looks like this:
filebeat.inputs:
- type: docker
  enabled: true
  containers:
    ids:
      - "*"
    path: "/host_docker/var/lib/docker/containers"
  processors:
    - add_docker_metadata:
        host: "unix:///host_docker/docker.sock"

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

output.logstash:
  hosts: ["logstash:5000"]

# log files
logging.level: info
logging.to_files: false
logging.to_syslog: false
logging.metrics.enabled: false
logging.files:
  path: /var/log/filebeat
  name: filebeat
  keepfiles: 7
  permissions: 0644

ssl.verification_mode: none
Here is the Dockerfile for filebeat:
FROM docker.elastic.co/beats/filebeat:7.15.2
COPY filebeat.yml /usr/share/filebeat/filebeat.yml
USER root
RUN mkdir /usr/share/filebeat/dockerlogs
RUN chown -R root /usr/share/filebeat/
RUN chmod -R go-w /usr/share/filebeat/
Since I want to see the logs in logstash, I run the command docker container logs -f .
I can't see any of the logs defined in PersonController and PersonService there.
Here is the screenshot.
When using Docker, it is best to write all logs to the console. That way the logs are exposed at runtime in Kubernetes or any other orchestrator. In the Spring framework you can achieve this by switching to a ConsoleAppender. The example below shows how to do this with Logback (note the appender classes are Logback classes, so this goes in a Logback config file in your resources folder, with the corresponding Logback/Logstash encoder dependency added; reference: https://www.baeldung.com/spring-boot-logging):
<configuration>
    <appender name="consoleAppender" class="ch.qos.logback.core.ConsoleAppender">
        <encoder class="net.logstash.logback.encoder.LogstashEncoder">
            <timeZone>UTC</timeZone>
        </encoder>
    </appender>
    <logger name="com.yourcompany.packagename" level="INFO">
        <appender-ref ref="consoleAppender" />
    </logger>
    <root level="ERROR">
        <appender-ref ref="consoleAppender" />
    </root>
</configuration>
You can still configure logging to disk by adding another appender to the configuration above, but then you need to add a mount point in your docker-compose file that points to the log directory in the application.
It is worth noting that container filesystems are ephemeral, so logs written inside the container are lost when you restart it unless they land on a mounted volume.
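As a sketch of such a mount: the container path /tmp/logs matches the path that the question's logstash.conf tails, while the host directory ./logs is an assumption chosen for illustration:

```yaml
# docker-compose.yml (fragment) -- hypothetical mount for a file appender.
# The file appender writes to /tmp/logs inside the container; the bind
# mount makes those files persist in ./logs on the host across restarts.
app:
  image: 'springbootelk:latest'
  volumes:
    - ./logs:/tmp/logs
```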
This works for me, but my ELK stack is running outside of Docker.
Here is my logstash configuration (the same settings for both TCP and UDP):
input {
  tcp {
    port => 5144
    codec => "json"
    type => "logback"
  }
  udp {
    port => 5144
    codec => "json"
    type => "logback"
  }
}

output {
  if [type] == "logback" {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "logback-%{+YYYY.MM.dd}"
    }
  }
}
You also have to set up the logback index pattern in Kibana.
Here is my logback-spring.xml:
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <springProperty name="mvnVersion" source="info.app.version"/>
    <springProperty name="appName" source="info.app.name"/>
    <springProperty name="tcpLogHost" source="logstash.tcpHost"/>
    <springProperty name="udpLogHost" source="logstash.udpHost"/>
    <springProperty name="udpLogPort" source="logstash.udpPort"/>
    <appender name="stashUdp" class="net.logstash.logback.appender.LogstashUdpSocketAppender">
        <host>${udpLogHost}</host>
        <port>${udpLogPort}</port>
        <layout class="net.logstash.logback.layout.LogstashLayout">
            <customFields>{"appName":"${appName}","env":"${env}","mvnVersion":"${mvnVersion}"}</customFields>
        </layout>
    </appender>
    <appender name="stdout" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{100} - %msg%n</pattern>
        </encoder>
    </appender>
    <root level="debug">
        <appender-ref ref="stashUdp" />
        <appender-ref ref="stdout" />
    </root>
</configuration>
I am logging directly (via UDP) to the ELK stack.
You need:
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>${logstash.logback.version}</version>
</dependency>
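The springProperty entries in the logback-spring.xml above resolve against Spring properties such as logstash.udpHost. A minimal application.yml supplying them could look like the sketch below; the host and app values are placeholders, only the port 5144 is taken from the logstash config above:

```yaml
# application.yml -- placeholder values for the properties referenced
# by <springProperty> in logback-spring.xml
logstash:
  tcpHost: localhost   # resolved into tcpLogHost
  udpHost: localhost   # resolved into udpLogHost
  udpPort: 5144        # matches the udp input port in the logstash config
info:
  app:
    name: demo-app     # resolved into appName
    version: 0.0.1     # resolved into mvnVersion
```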
According to your repository, you have a logback logging configuration, but in your Maven dependencies you excluded logback and added log4j support instead. To make logback work, you simply need to stop excluding the default logging library and add a newer version of the logback dependency (because the version pinned by the Spring Boot starter dependencies does not include the logstash appender your configuration specifies, net.logstash.logback.appender.LogstashTcpSocketAppender). For example:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.6.6</version>
        <relativePath/> <!-- lookup parent from repository -->
    </parent>
    <groupId>com.spingbootelk</groupId>
    <artifactId>main</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <name>main</name>
    <description>The usage of ELK in Spring Boot</description>
    <properties>
        <java.version>11</java.version>
    </properties>
    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
            <scope>runtime</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-data-jpa</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
            <optional>true</optional>
        </dependency>
        <dependency>
            <groupId>net.logstash.logback</groupId>
            <artifactId>logstash-logback-encoder</artifactId>
            <version>7.1.1</version>
        </dependency>
    </dependencies>
    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>
</project>
That said, it would be better to find a good guide on how to properly set up logging from a Spring Boot application to logstash. And I would not recommend using the configuration in your repository for production purposes.
Here is my own answer.
After modifying the logstash.conf file as shown below, my problem went away.
input {
  tcp {
    port => 5000
  }
  beats {
    port => 5044
  }
  file {
    path => "/tmp/logs/Springboot-Elk.log"
    sincedb_path => "/dev/null"
    start_position => "beginning"
  }
}

output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    hosts => "elasticsearch:9200"
    index => "dockerlogs"
  }
}
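With beats moved to 5044 and tcp on 5000, the docker-compose port mapping and the filebeat output have to line up with the new ports as well; this adjustment is my assumption rather than something the answer states, but a sketch could look like:

```yaml
# docker-compose.yml (fragment, assumed change): publish both logstash inputs
logstash:
  ports:
    - "5000:5000"   # tcp input
    - "5044:5044"   # beats input
```

Correspondingly, output.logstash.hosts in filebeat.yml would point at ["logstash:5044"] instead of ["logstash:5000"], so that filebeat talks to the beats input rather than the tcp input.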