Different log level for different Karaf Bundles
I am running ServiceMix 6.1.0 on Karaf 3.0.5. My org.ops4j.pax.logging.cfg looks like this:
# Root logger
log4j.rootLogger=DEBUG, async, osgi:*
log4j.throwableRenderer=org.apache.log4j.OsgiThrowableRenderer
# To avoid flooding the log when using DEBUG level on an ssh connection and doing log:tail
log4j.logger.org.apache.sshd.server.channel.ChannelSession = INFO
# CONSOLE appender not used by default
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{ISO8601} | %-5.5p | %-16.16t | %-32.32c{1} | %X{bundle.id} - %X{bundle.name} - %X{bundle.version} | %X | %m%n
# File appender
log4j.appender.out=org.apache.log4j.RollingFileAppender
log4j.appender.out.layout=org.apache.log4j.PatternLayout
log4j.appender.out.layout.ConversionPattern=%d{ISO8601} | %-5.5p | %-16.16t | %-32.32c{1} | %X{bundle.id} - %X{bundle.name} - %X{bundle.version} | %X | %m%n
log4j.appender.out.file=/tmp/servicemix.log
log4j.appender.out.append=true
log4j.appender.out.maxFileSize=1024MB
log4j.appender.out.maxBackupIndex=10
# Sift appender
log4j.appender.sift=org.apache.log4j.sift.MDCSiftingAppender
log4j.appender.sift.key=bundle.name
log4j.appender.sift.default=servicemix
log4j.appender.sift.appender=org.apache.log4j.FileAppender
log4j.appender.sift.appender.layout=org.apache.log4j.PatternLayout
log4j.appender.sift.appender.layout.ConversionPattern=%d{ISO8601} | %-5.5p | %-16.16t | %-32.32c{1} | %X | %m%n
log4j.appender.sift.appender.file=/tmp/$\{bundle.name\}.log
log4j.appender.sift.appender.append=true
# Async appender
log4j.appender.async=org.apache.log4j.AsyncAppender
log4j.appender.async.appenders=out
This logger configuration is also dumping Karaf's own debug logs, whereas my intention is for DEBUG output to come only from specific bundles.
Can anyone tell me whether it is possible to set different log levels for different bundles? Concretely: suppose I have two bundles, bundle1 and bundle2. I want bundle1 to log at DEBUG and bundle2 to log at ERROR. Is that possible?
As discussed on the Karaf mailing list, I use a distinct logger per bundle, so simply setting the log level on the bundle-level logger in org.ops4j.pax.logging.cfg (as shown below) solved the problem for me:
log4j.logger.com.j1.bundle1 = DEBUG
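For the two-bundle scenario in the question, this approach extends naturally: one logger line per bundle, each with its own level. A minimal sketch, assuming the bundles log under the hypothetical packages com.j1.bundle1 and com.j1.bundle2, and assuming the root logger is lowered from DEBUG to INFO so that verbose output comes only from the named bundle:

```properties
# Root logger kept at INFO so DEBUG output comes only from the loggers below
log4j.rootLogger=INFO, async, osgi:*

# Per-bundle levels (package names are hypothetical examples)
log4j.logger.com.j1.bundle1 = DEBUG
log4j.logger.com.j1.bundle2 = ERROR
```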
Another solution was proposed in the same thread. I have not tried it, but I quote it below:
You can do this by using MDC combined with filters (I implemented that
in Pax logging a few years back).
E.g. if you use this root logger:
log4j.rootLogger=INFO, stdout, info, error, bundle, context, osgi:*
And you define the "bundle" log as follows:
log4j.appender.bundle=org.apache.log4j.sift.MDCSiftingAppender
log4j.appender.bundle.key=bundle.name
log4j.appender.bundle.default=karaf
log4j.appender.bundle.appender=org.apache.log4j.RollingFileAppender
log4j.appender.bundle.appender.MaxFileSize=1MB
log4j.appender.bundle.appender.MaxBackupIndex=2
log4j.appender.bundle.appender.layout=org.apache.log4j.PatternLayout
log4j.appender.bundle.appender.layout.ConversionPattern=%d{ISO8601} | %-5.5p | %-16.16t | %-32.32c{1} | %-32.32C %4L | %m%n
log4j.appender.bundle.appender.file=${logdir}/bundles/$\{bundle.name\}.log
log4j.appender.bundle.appender.append=true
log4j.appender.bundle.threshold=INFO
You will end up with a separate log file per bundle (named with the
bundle's name). I use a custom variable (${logdir}) to specify where
to create the log file but you can do as you wish. In this case these
log files will be at INFO level.
Sometimes I want TRACE logging on a specific bundle. I can then do as
follows:
log4j.rootLogger=TRACE, stdout, info, error, bundle, context, osgi:*, bundle_trace
log4j.appender.bundle_trace=org.apache.log4j.sift.MDCSiftingAppender
log4j.appender.bundle_trace.key=bundle.name
log4j.appender.bundle_trace.default=karaf
log4j.appender.bundle_trace.appender=org.apache.log4j.RollingFileAppender
log4j.appender.bundle_trace.appender.MaxFileSize=10MB
log4j.appender.bundle_trace.appender.MaxBackupIndex=2
log4j.appender.bundle_trace.appender.layout=org.apache.log4j.PatternLayout
log4j.appender.bundle_trace.appender.layout.ConversionPattern=%d{ISO8601} | %-5.5p | %-16.16t | %-32.32c{1} | %-32.32C %4L | %m%n
log4j.appender.bundle_trace.appender.file=${logdir}/bundles/trace/$\{bundle.name\}.log
log4j.appender.bundle_trace.appender.append=true
log4j.appender.bundle_trace.threshold=TRACE
log4j.appender.bundle_trace.filter.a=org.apache.log4j.filter.MDCMatchFilter
log4j.appender.bundle_trace.filter.a.exactMatch=false
log4j.appender.bundle_trace.filter.a.keyToMatch=bundle.name
log4j.appender.bundle_trace.filter.a.valueToMatch=org.apache.aries.blueprint.core
log4j.appender.bundle_trace.filter.z=org.apache.log4j.varia.DenyAllFilter
In the above example I create a separate TRACE log for the bundle with
the name "org.apache.aries.blueprint.core".
It is also possible to configure custom logging for a particular camel
context which we do in our integration platform based on Karaf and
Camel.
Also, one thing to note: DenyAllFilter should be the last filter in the above config.