Logback: gracefully split messages at 64k
We have a log collection service that automatically splits messages at 64KB, but the split is not graceful at all. We print individual log messages as JSON blobs with some additional metadata. Sometimes these include large stack traces that we want to preserve in full.
So I'm looking at writing a custom logger or appender wrapper that would take a message, split it into smaller chunks, and re-log it, but that seems like a significant undertaking.
Is there an easy way to configure Logback to split a message into several separate messages when it exceeds a certain size?
Here is the appender configuration:
<!-- Sumo optimized rolling log file -->
<appender name="file" class="ch.qos.logback.core.rolling.RollingFileAppender">
  <Append>true</Append>
  <file>${log.dir}/${service.name}-sumo.log</file>
  <encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
    <providers>
      <timestamp>
        <fieldName>t</fieldName>
        <pattern>yyyy-MM-dd'T'HH:mm:ss.SSS'Z'</pattern>
        <timeZone>UTC</timeZone>
      </timestamp>
      <message/>
      <loggerName/>
      <threadName/>
      <logLevel/>
      <stackTrace>
        <if condition='isDefined("throwable.converter")'>
          <then>
            <throwableConverter class="${throwable.converter}"/>
          </then>
        </if>
      </stackTrace>
      <mdc/>
      <tags/>
    </providers>
  </encoder>
  <rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
    <maxIndex>1</maxIndex>
    <FileNamePattern>${log.dir}/${service.name}-sumo.log.%i</FileNamePattern>
  </rollingPolicy>
  <triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
    <MaxFileSize>256MB</MaxFileSize>
  </triggeringPolicy>
</appender>
<appender name="sumo" class="ch.qos.logback.classic.AsyncAppender">
  <queueSize>500</queueSize>
  <discardingThreshold>0</discardingThreshold>
  <appender-ref ref="file" />
</appender>
The solution I came up with is simply to wrap my logger in something that splits messages nicely. Note that I'm mainly interested in splitting messages logged with a Throwable, since those are what produce the long messages.
Written in Java 8, using lambdas.
Also note that this code hasn't been thoroughly tested; I'll update it if I find any bugs.
import java.io.ByteArrayOutputStream;
import java.io.PrintStream;
import java.io.UnsupportedEncodingException;
import java.util.function.Consumer;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.helpers.MarkerIgnoringBase;

public class MessageSplittingLogger extends MarkerIgnoringBase {
    // Target size for the split is 64k. UTF-8 nominally has 1-byte characters, but some
    // characters use more than 1 byte, so leave some wiggle room.
    // Also leave room for the additional metadata attached to each message.
    private static final int MAX_CHARS_BEFORE_SPLIT = 56000;
    private static final String ENCODING = "UTF-8";

    private final Logger logger;

    public MessageSplittingLogger(Class<?> clazz) {
        this.logger = LoggerFactory.getLogger(clazz);
    }

    private void splitMessageAndLog(String msg, Throwable t, Consumer<String> logLambda) {
        String combinedMsg = msg + (t != null ? "\nStack Trace:\n" + printStackTraceToString(t) : "");
        // Ceiling division: number of chunks needed to cover the whole message
        int totalMessages = (combinedMsg.length() + MAX_CHARS_BEFORE_SPLIT - 1) / MAX_CHARS_BEFORE_SPLIT;
        int index = 0;
        int msgNumber = 1;
        while (index < combinedMsg.length()) {
            // Prefix each chunk with "(n of N)" so split messages can be correlated later
            String messageNumber = totalMessages > 1 ? "(" + msgNumber++ + " of " + totalMessages + ")\n" : "";
            logLambda.accept(messageNumber + combinedMsg.substring(index,
                    Math.min(index + MAX_CHARS_BEFORE_SPLIT, combinedMsg.length())));
            index += MAX_CHARS_BEFORE_SPLIT;
        }
    }

    /**
     * Render the stack trace of a Throwable as a String.
     */
    private String printStackTraceToString(Throwable t) {
        try {
            ByteArrayOutputStream baos = new ByteArrayOutputStream();
            PrintStream ps = new PrintStream(baos, true, ENCODING);
            t.printStackTrace(ps);
            return baos.toString(ENCODING);
        } catch (UnsupportedEncodingException e) {
            return "Exception printing stack trace: " + e.getMessage();
        }
    }

    @Override
    public String getName() {
        return logger.getName();
    }

    @Override
    public boolean isTraceEnabled() {
        return logger.isTraceEnabled();
    }

    @Override
    public void trace(String msg) {
        logger.trace(msg);
    }

    @Override
    public void trace(String format, Object arg) {
        logger.trace(format, arg);
    }

    @Override
    public void trace(String format, Object arg1, Object arg2) {
        logger.trace(format, arg1, arg2);
    }

    @Override
    public void trace(String format, Object... arguments) {
        logger.trace(format, arguments);
    }

    @Override
    public void trace(String msg, Throwable t) {
        splitMessageAndLog(msg, t, logger::trace);
    }

    // ... similarly wrap the calls to debug/info/error
}
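For completeness, here is a minimal sketch of how the wrapper would be used. OrderService and process() are hypothetical placeholder names, and trace is the only level the class above shows wrapped with splitting:

// Hypothetical caller; not part of the original answer.
public class OrderService {
    private static final MessageSplittingLogger LOG =
            new MessageSplittingLogger(OrderService.class);

    public void process() {
        try {
            // ... work that may throw ...
        } catch (Exception e) {
            // A long stack trace is re-logged as "(1 of N)" ... "(N of N)" chunks,
            // each under MAX_CHARS_BEFORE_SPLIT, so the collector's 64KB limit
            // never cuts a message mid-trace.
            LOG.trace("Order processing failed", e);
        }
    }
}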