Typed JSON output with Logback Mapped Diagnostic Context


Our application uses SLF4J's MDC and Logback's JSON encoder to write log lines as JSON. These lines are then processed by a log-shipping pipeline and written to Elasticsearch as documents. It has a logback.xml file as follows:

<configuration>
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder class="net.logstash.logback.encoder.LogstashEncoder">
      <provider class="net.logstash.logback.composite.loggingevent.ArgumentsJsonProvider"/>
    </encoder>
  </appender>

  <root level="INFO">
    <appender-ref ref="STDOUT" />
  </root>
</configuration>

The MDC is used like this:

MDC.put("stringVal", "val");
MDC.put("decimalVal", "1.23");
MDC.put("longVal", "123498559348792879267942876");
logger.info("foo");

The problem is that MDC's interface is void put(String key, String val), so every value must be a String (or be stringified first). This produces the following log line:

{"@timestamp":"2021-08-23T16:32:04.231+01:00","@version":1,"message":"foo","logger_name":"MDCTest","thread_name":"main","level":"INFO","level_value":20000,"stringVal":"val","decimalVal":"1.23","longVal":"123498559348792879267942876"}

The strings supplied for decimalVal and longVal are then picked up by Elasticsearch's automatic type mapping as string types, and we cannot perform numeric operations on them.
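For contrast, the output the question is after would presumably carry these fields as JSON numbers, so that Elasticsearch's dynamic mapping infers numeric types:

```json
{"@timestamp":"2021-08-23T16:32:04.231+01:00","@version":1,"message":"foo","logger_name":"MDCTest","thread_name":"main","level":"INFO","level_value":20000,"stringVal":"val","decimalVal":1.23,"longVal":123498559348792879267942876}
```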

In this case the numeric values mostly come from abusing logging to ship metrics, but that consolidation is being handled elsewhere.

I don't want to force developers to update Elasticsearch index templates or edit the log-shipping pipeline's configuration whenever they add more logs, so I have been looking for a way to do this automatically. I built an implementation that swaps out MdcJsonProvider (the class responsible for serialising the MDC to JSON) by replacing the internals of Logback's Encoder and Formatter. This feels very fragile and likely performs poorly.

Is there a more elegant way to do this, or a different approach that achieves the same effect? I have looked at ch.qos.logback.contrib.jackson.JacksonJsonFormatter, but I would still need to list the numeric MDC properties in the logback file, which is what I am trying to avoid.
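One alternative worth noting: the logback.xml above already registers ArgumentsJsonProvider, and logstash-logback-encoder's structured arguments preserve a value's Java type when it is serialised. Passing numbers as structured arguments rather than through the MDC therefore sidesteps the string-only interface entirely. A sketch, assuming the net.logstash.logback.argument.StructuredArguments API and the logstash-logback-encoder dependency on the classpath:

```java
import java.math.BigDecimal;
import java.math.BigInteger;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import net.logstash.logback.argument.StructuredArguments;

public class StructuredArgumentsDemo {
    private static final Logger logger = LoggerFactory.getLogger(StructuredArgumentsDemo.class);

    public static void main(String[] args) {
        // keyValue() attaches the value to the JSON document under the given
        // key; numeric types are written as JSON numbers, not Strings.
        logger.info("foo",
                StructuredArguments.keyValue("decimalVal", new BigDecimal("1.23")),
                StructuredArguments.keyValue("longVal", new BigInteger("123498559348792879267942876")));
    }
}
```

The trade-off is that developers must remember to use structured arguments instead of MDC.put, so it does not retrofit existing call sites the way the encoder swap below does.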

import java.io.IOException;
import java.math.BigDecimal;
import java.math.BigInteger;
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

import com.fasterxml.jackson.core.JsonGenerator;
import com.google.common.collect.Maps;

import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.spi.ContextAware;
import net.logstash.logback.LogstashFormatter;
import net.logstash.logback.composite.CompositeJsonFormatter;
import net.logstash.logback.composite.JsonProvider;
import net.logstash.logback.composite.JsonWritingUtils;
import net.logstash.logback.composite.loggingevent.MdcJsonProvider;
import net.logstash.logback.encoder.LogstashEncoder;

/**
 * This class is a hack.
 *
 * It exists because Logback's MDC interface is a map from String to String, and this cannot
 * be changed. Instead, we must modify the logger itself to check if any of the String values
 * appear numeric and then log them as such.
 *
 * This is necessary since we ship the documents directly to ElasticSearch. As such, be warned
 * that you may have mapping conflict issues if the type for any given key fluctuates or if you
 * occasionally write strings that are fully numeric.
 *
 * This class implements an Encoder, a Formatter, and a JsonProvider that work together. The
 * encoder specifies the formatter, and the formatter works by swapping out the MdcJsonProvider
 * implementation normally used to serialise the MDC values with another implementation with tries
 * to log them as numbers.
 *
 * Using this class has a cost. It will result in more short-lived object creation, so it is not
 * suitable for high frequency logging.
 */
public class JsonMdcTypePreservingEncoder extends LogstashEncoder {
    @Override
    protected CompositeJsonFormatter<ILoggingEvent> createFormatter() {
        return new JsonMdcTypePreservingFormatter(this);
    }

    @Override
    protected JsonMdcTypePreservingFormatter getFormatter() {
        return (JsonMdcTypePreservingFormatter) super.getFormatter();
    }

    /**
     * This class exists to remove the default MdcJsonProvider, responsible for logging the MDC
     * section as JSON, and swap it out for the custom one.
     */
    public static class JsonMdcTypePreservingFormatter extends LogstashFormatter {
        public JsonMdcTypePreservingFormatter(ContextAware declaredOrigin) {
            super(declaredOrigin);

            Optional<JsonProvider<ILoggingEvent>> oldProvider = getProviders().getProviders()
                    .stream()
                    .filter(o -> o.getClass() == MdcJsonProvider.class)
                    .findFirst();

            if (oldProvider.isPresent()) {
                getProviders().removeProvider(oldProvider.get());
                getProviders().addProvider(new TypePreservingMdcJsonProvider());
            }
        }

        /**
         * This class contains a duplicate of MdcJsonProvider.writeTo but with a small change.
         * Instead of taking the MDC Map<String, String> and logging it, it produces a modified
         * Map<String, Object> that potentially contains BigDecimal or BigInteger types alongside
         * Strings and serialises those instead.
         *
         * The only new code in this method is the call to getStringObjectMap.
         */
        public static class TypePreservingMdcJsonProvider extends MdcJsonProvider {
            private Map<String, Object> convertedProperties = Maps.newHashMap();

            @Override
            public void writeTo(JsonGenerator generator, ILoggingEvent event) throws IOException {
                Map<String, String> mdcProperties = event.getMDCPropertyMap();

                if (mdcProperties != null && !mdcProperties.isEmpty()) {
                    if (getFieldName() != null) {
                        generator.writeObjectFieldStart(getFieldName());
                    }

                    if (!getIncludeMdcKeyNames().isEmpty()) {
                        mdcProperties = new HashMap<>(mdcProperties);
                        mdcProperties.keySet().retainAll(getIncludeMdcKeyNames());
                    }

                    if (!getExcludeMdcKeyNames().isEmpty()) {
                        mdcProperties = new HashMap<>(mdcProperties);
                        mdcProperties.keySet().removeAll(getExcludeMdcKeyNames());
                    }

                    JsonWritingUtils.writeMapEntries(generator, getStringObjectMap(mdcProperties));
                    if (getFieldName() != null) {
                        generator.writeEndObject();
                    }
                }
            }

            private Map<String, Object> getStringObjectMap(Map<String, String> mdcProperties) {
                convertedProperties.clear();

                for (Map.Entry<String, String> entry : mdcProperties.entrySet()) {
                    String key = entry.getKey();
                    String value = entry.getValue();

                    // If the majority of MDC values are not numbers, this prevents unnecessary
                    // parsing steps but adds a character-wise inspection per value.
                    try {
                        convertedProperties.put(key, new BigInteger(value));
                    } catch (NumberFormatException e) {
                        try {
                            convertedProperties.put(key, new BigDecimal(value));
                        } catch (NumberFormatException f) {
                            // Not numeric; keep the original String.
                            convertedProperties.put(key, value);
                        }
                    }
                }
                return convertedProperties;
            }
        }
    }
}
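The parsing fallback at the heart of getStringObjectMap can be exercised in isolation. A minimal, stdlib-only sketch (the name coerce is illustrative, not part of the class above): try BigInteger first, fall back to BigDecimal, and keep the String when neither parse succeeds.

```java
import java.math.BigDecimal;
import java.math.BigInteger;

public class NumericMdcValues {
    // Mirrors the fallback in getStringObjectMap: integer first, then
    // decimal, then the original String when neither parse succeeds.
    static Object coerce(String value) {
        try {
            return new BigInteger(value);
        } catch (NumberFormatException e) {
            try {
                return new BigDecimal(value);
            } catch (NumberFormatException f) {
                return value;
            }
        }
    }

    public static void main(String[] args) {
        // A value far beyond Long.MAX_VALUE still parses as a BigInteger.
        System.out.println(coerce("123498559348792879267942876").getClass().getSimpleName()); // BigInteger
        System.out.println(coerce("1.23").getClass().getSimpleName());                        // BigDecimal
        System.out.println(coerce("val").getClass().getSimpleName());                         // String
    }
}
```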

Thanks for any suggestions!

java json logback slf4j logstash-logback-encoder
1 Answer

The blog post here describes how to build a workaround for supporting MDC values in JSON output from the LogstashEncoder.

So, you have the following Logback configuration:

<!-- logback-access.xml --> 
<configuration>
    <appender name="ACCESS_STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder class="net.logstash.logback.encoder.LogstashAccessEncoder">
            ... config ...
        </encoder>
    </appender>
    <appender-ref ref="ACCESS_STDOUT"/>
</configuration>

If you need more customisation, you can use AccessEventCompositeJsonEncoder, where you specify a JSON template using Logback layout conversion specifiers.

<configuration>
    <appender name="ACCESS_STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder class="net.logstash.logback.encoder.AccessEventCompositeJsonEncoder">
            <providers>
                <timestamp />
                <pattern>
                    <pattern>
                      {
                        "yourField": {
                            "yourNestedField": "%reqAttribute{abc}"
                        },
                        "http": {
                            "path" : "%requestURI"
                        },
                        "@version" : "2023-01-03",
                        "@type" : "access"
                      }
                    </pattern>
                </pattern>
            </providers>
        </encoder>
    </appender>
    <appender-ref ref="ACCESS_STDOUT"/>
</configuration>

Unfortunately, MDC is not supported in the current version, but one way to solve the problem is to create a filter that sets the data as a request attribute in addition to the MDC. See the example below, which adds the "abc" request attribute read by the configuration above.

import java.io.IOException;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;

import org.slf4j.MDC;
import org.springframework.stereotype.Component;

@Component
public class MDCXRequestIdLoggingFilter implements Filter {
    @Override
    public void doFilter(ServletRequest request,
                         ServletResponse response,
                         FilterChain filterChain) throws IOException, ServletException {
        if (request instanceof HttpServletRequest httpServletRequest) {
            var abcValue = "SET ME";

            // Make the value available both to application logs (via the MDC)
            // and to the access log's %reqAttribute{abc} conversion specifier.
            MDC.put("abc", abcValue);
            httpServletRequest.setAttribute("abc", abcValue);
        }

        filterChain.doFilter(request, response);
    }
}

If the code sets "first value" and then "second value", this will be the output in the access log:

{
  "yourField": {
    "yourNestedField": "first value"
  },
  "http": {
    "path" : "/"
  },
  "@version" : "2023-01-03",
  "@type" : "access"
}
{
  "yourField": {
    "yourNestedField": "second value"
  },
  "http": {
    "path" : "/"
  },
  "@version" : "2023-01-03",
  "@type" : "access"
}