Kafka server change log level

I'm trying to change the logging level in a Kafka server because the logs are far too verbose. I looked at which classes log at DEBUG level and counted the log lines per logger, for example:

kafka.cluster.Partition                1235094
o.apache.kafka.clients.NetworkClient     70375
o.a.k.clients.FetchSessionHandler        69363
kafka.log.LogCleanerManager$             56400

A log line from the kafka.cluster.Partition logger, for example, looks like this:

21:41:01.041 [data-plane-kafka-request-handler-4] DEBUG kafka.cluster.Partition - [Partition __transaction_state-43 broker=3] Recorded replica 1 log end offset (LEO) position 0 and log start offset 0.
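
The per-logger counts above were tallied over lines in this format. A minimal sketch of one way to reproduce them (the class name and the log-file argument are mine, not from the original setup; it assumes the level is the third and the logger name the fourth whitespace-separated field, as in the line above):

import java.nio.file.*;
import java.util.*;
import java.util.stream.*;

public class CountDebugLoggers {
    public static void main(String[] args) throws Exception {
        // Tally DEBUG lines per logger name in the file passed as args[0].
        try (Stream<String> lines = Files.lines(Path.of(args[0]))) {
            lines.map(line -> line.split("\\s+"))
                 .filter(f -> f.length > 3 && f[2].equals("DEBUG"))
                 .collect(Collectors.groupingBy(f -> f[3], Collectors.counting()))
                 .entrySet().stream()
                 .sorted(Map.Entry.<String, Long>comparingByValue().reversed())
                 .forEach(e -> System.out.println(e.getKey() + "\t" + e.getValue()));
        }
    }
}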

I tried to configure log4j.properties by adding the following lines:

log4j.logger.kafka.cluster.Partition=INFO
log4j.additivity.kafka.cluster.Partition=false

I expected kafka.cluster.Partition to log at INFO level only. Instead, I found it still logging at DEBUG level.

How do I fix this?

Using Kafka 3.0.0.

As requested in the comments, the full log4j.properties is shared below. I believe it is very close to the default version shipped with the Kafka server.

Note that the company framework we use to run any server redirects stdout and stderr to a single application log file, so which appender we specify probably doesn't matter. What I want to do is filter which lines get logged, and that shouldn't depend on which appender is used.

kafka.logs.dir=logs

log4j.rootLogger=INFO, stdout

log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c)%n

log4j.appender.kafkaAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.kafkaAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.kafkaAppender.File=${kafka.logs.dir}/server.log
log4j.appender.kafkaAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.kafkaAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

log4j.appender.stateChangeAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.stateChangeAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.stateChangeAppender.File=${kafka.logs.dir}/state-change.log
log4j.appender.stateChangeAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.stateChangeAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

log4j.appender.requestAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.requestAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.requestAppender.File=${kafka.logs.dir}/kafka-request.log
log4j.appender.requestAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.requestAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

log4j.appender.cleanerAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.cleanerAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.cleanerAppender.File=${kafka.logs.dir}/log-cleaner.log
log4j.appender.cleanerAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.cleanerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

log4j.appender.controllerAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.controllerAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.controllerAppender.File=${kafka.logs.dir}/controller.log
log4j.appender.controllerAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.controllerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

# Turn on all our debugging info
#log4j.logger.kafka.producer.async.DefaultEventHandler=DEBUG, kafkaAppender
#log4j.logger.kafka.client.ClientUtils=DEBUG, kafkaAppender
#log4j.logger.kafka.perf=DEBUG, kafkaAppender
#log4j.logger.kafka.perf.ProducerPerformance$ProducerThread=DEBUG, kafkaAppender
#log4j.logger.org.I0Itec.zkclient.ZkClient=DEBUG
log4j.logger.kafka=INFO, kafkaAppender

log4j.logger.kafka.network.RequestChannel$=WARN, requestAppender
log4j.additivity.kafka.network.RequestChannel$=false

#log4j.logger.kafka.network.Processor=TRACE, requestAppender
#log4j.logger.kafka.server.KafkaApis=TRACE, requestAppender
#log4j.additivity.kafka.server.KafkaApis=false
log4j.logger.kafka.request.logger=WARN, requestAppender
log4j.additivity.kafka.request.logger=false

log4j.logger.kafka.controller=INFO, controllerAppender
log4j.additivity.kafka.controller=false

log4j.logger.kafka.log.LogCleaner=INFO, cleanerAppender
log4j.additivity.kafka.log.LogCleaner=false

log4j.logger.state.change.logger=INFO, stateChangeAppender
log4j.additivity.state.change.logger=false

Looking at the JARs on the classpath, I concluded that our Kafka installation (plus some custom code that may pull in dependencies) actually logs through logback. These JARs were found on the classpath:

logback-classic-1.0.11.jar
logback-core-1.0.11.jar
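
A way to confirm which backend SLF4J actually binds to, rather than inferring it from jar names, is to ask SLF4J for its logger factory. A minimal sketch (the class name WhichLoggingBackend is mine):

import org.slf4j.LoggerFactory;

public class WhichLoggingBackend {
    public static void main(String[] args) {
        // Prints the concrete ILoggerFactory implementation. With logback bound
        // this is ch.qos.logback.classic.LoggerContext; with the log4j 1.x
        // binding it would be org.slf4j.impl.Log4jLoggerFactory.
        System.out.println(LoggerFactory.getILoggerFactory().getClass().getName());
    }
}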

So I dropped a logback.xml onto the classpath instead of modifying the log4j.properties, which was probably being ignored:

<configuration>
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>

    <root level="info">
        <appender-ref ref="STDOUT" />
    </root>
</configuration>

The result: logging was reduced to the level specified in logback.xml.
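
If an individual class is still too chatty under the new root level, logback also supports per-logger overrides analogous to the log4j.logger.* lines above. A minimal sketch, to be placed inside the <configuration> element (the WARN level is illustrative, not from the original setup):

    <!-- Cap the noisy partition logger without touching the root level. -->
    <logger name="kafka.cluster.Partition" level="WARN" />

Logback loggers are additive by default, so this only changes the effective level; output still flows to the STDOUT appender referenced by the root logger.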