<!DOCTYPE html>
<!--
| Generated by Apache Maven Doxia Site Renderer 1.11.1 from target/generated-sources/site/markdown/manual/cloud.md at 2024-03-06
| Rendered using Apache Maven Fluido Skin 1.11.2
-->
<html xmlns="http://www.w3.org/1999/xhtml" lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<meta name="generator" content="Apache Maven Doxia Site Renderer 1.11.1" />
<title>Log4j &#x2013; Using Log4j in Cloud Enabled Applications</title>
<link rel="stylesheet" href="../css/apache-maven-fluido-1.11.2.min.css" />
<link rel="stylesheet" href="../css/site.css" />
<link rel="stylesheet" href="../css/print.css" media="print" />
<script src="../js/apache-maven-fluido-1.11.2.min.js"></script>
</head>
<body class="topBarDisabled">
<div class="container-fluid">
<header>
<div id="banner">
<div class="pull-left"><a href="../../.." id="bannerLeft"><img src="../images/ls-logo.jpg" alt="" style="" /></a></div>
<div class="pull-right"><a href=".././" id="bannerRight"><img src="../images/logo.png" alt="" style="" /></a></div>
<div class="clear"><hr/></div>
</div>
<div id="breadcrumbs">
<ul class="breadcrumb">
<li id="publishDate">Last Published: 2024-03-06<span class="divider">|</span>
</li>
<li id="projectVersion">Version: 2.23.1</li>
<li class="pull-right"><span class="divider">|</span>
<a href="https://github.com/apache/logging-log4j2" class="externalLink" title="GitHub">GitHub</a></li>
<li class="pull-right"><span class="divider">|</span>
<a href="../../../" title="Logging Services">Logging Services</a></li>
<li class="pull-right"><span class="divider">|</span>
<a href="https://www.apache.org/" class="externalLink" title="Apache">Apache</a></li>
<li class="pull-right"><a href="https://cwiki.apache.org/confluence/display/LOGGING/Log4j" class="externalLink" title="Logging Wiki">Logging Wiki</a></li>
</ul>
</div>
</header>
<div class="row-fluid">
<header id="leftColumn" class="span2">
<nav class="well sidebar-nav">
<ul class="nav nav-list">
<li class="nav-header"><img class="imageLink" src="../img/glyphicons/home.png" alt="Apache Log4j™ 2" style="border: 0;" /> Apache Log4j™ 2</li>
<li><a href="../index.html" title="About"><span class="none"></span>About</a></li>
<li><a href="../download.html" title="Download"><span class="none"></span>Download</a></li>
<li><a href="../support.html" title="Support"><span class="none"></span>Support</a></li>
<li><a href="../maven-artifacts.html" title="Maven, Ivy, Gradle Artifacts"><span class="icon-chevron-right"></span>Maven, Ivy, Gradle Artifacts</a></li>
<li><a href="../release-notes.html" title="Release Notes"><span class="none"></span>Release Notes</a></li>
<li><a href="../faq.html" title="FAQ"><span class="none"></span>FAQ</a></li>
<li><a href="../performance.html" title="Performance"><span class="icon-chevron-right"></span>Performance</a></li>
<li><a href="../articles.html" title="Articles and Tutorials"><span class="none"></span>Articles and Tutorials</a></li>
<li><a href="../security.html" title="Security"><span class="icon-chevron-right"></span>Security</a></li>
<li class="nav-header"><img class="imageLink" src="../img/glyphicons/book.png" alt="Manual" style="border: 0;" /> Manual</li>
<li><a href="../manual/index.html" title="Introduction"><span class="none"></span>Introduction</a></li>
<li><a href="../manual/architecture.html" title="Architecture"><span class="none"></span>Architecture</a></li>
<li><a href="../manual/api-separation.html" title="API Separation"><span class="none"></span>API Separation</a></li>
<li><a href="../manual/migration.html" title="Log4j 1.x Migration"><span class="icon-chevron-right"></span>Log4j 1.x Migration</a></li>
<li><a href="../manual/api.html" title="Java API"><span class="icon-chevron-right"></span>Java API</a></li>
<li><a href="../../kotlin" title="Kotlin API"><span class="none"></span>Kotlin API</a></li>
<li><a href="../../scala" title="Scala API"><span class="none"></span>Scala API</a></li>
<li><a href="../manual/configuration.html" title="Configuration"><span class="icon-chevron-right"></span>Configuration</a></li>
<li><a href="../manual/usage.html" title="Usage"><span class="icon-chevron-down"></span>Usage</a>
<ul class="nav nav-list">
<li><a href="../manual/usage.html#StaticVsNonStatic" title="Static vs non-Static Loggers"><span class="none"></span>Static vs non-Static Loggers</a></li>
<li><a href="../manual/usage.html#LoggerVsClass" title="Logger Name vs Class Name"><span class="none"></span>Logger Name vs Class Name</a></li>
<li class="active"><a><span class="none"></span>Logging in the Cloud</a></li>
</ul></li>
<li><a href="../manual/webapp.html" title="Web Applications and JSPs"><span class="icon-chevron-right"></span>Web Applications and JSPs</a></li>
<li><a href="../manual/lookups.html" title="Lookups"><span class="icon-chevron-right"></span>Lookups</a></li>
<li><a href="../manual/appenders.html" title="Appenders"><span class="icon-chevron-right"></span>Appenders</a></li>
<li><a href="../manual/layouts.html" title="Layouts"><span class="icon-chevron-right"></span>Layouts</a></li>
<li><a href="../manual/filters.html" title="Filters"><span class="icon-chevron-right"></span>Filters</a></li>
<li><a href="../manual/async.html" title="Async Loggers"><span class="icon-chevron-right"></span>Async Loggers</a></li>
<li><a href="../manual/garbagefree.html" title="Garbage-free Logging"><span class="icon-chevron-right"></span>Garbage-free Logging</a></li>
<li><a href="../manual/jmx.html" title="JMX"><span class="none"></span>JMX</a></li>
<li><a href="../manual/logsep.html" title="Logging Separation"><span class="none"></span>Logging Separation</a></li>
<li><a href="../manual/extending.html" title="Extending Log4j"><span class="icon-chevron-right"></span>Extending Log4j</a></li>
<li><a href="../manual/plugins.html" title="Plugins"><span class="icon-chevron-right"></span>Plugins</a></li>
<li><a href="../manual/customconfig.html" title="Programmatic Log4j Configuration"><span class="icon-chevron-right"></span>Programmatic Log4j Configuration</a></li>
<li><a href="../manual/customloglevels.html" title="Custom Log Levels"><span class="icon-chevron-right"></span>Custom Log Levels</a></li>
<li class="nav-header"><img class="imageLink" src="../img/glyphicons/pencil.png" alt="For Contributors" style="border: 0;" /> For Contributors</li>
<li><a href="../guidelines.html" title="Guidelines"><span class="none"></span>Guidelines</a></li>
<li><a href="../javastyle.html" title="Style Guide"><span class="none"></span>Style Guide</a></li>
<li class="nav-header"><img class="imageLink" src="../img/glyphicons/cog.png" alt="Components" style="border: 0;" /> Components</li>
<li><a href="../log4j-api.html" title="API"><span class="none"></span>API</a></li>
<li><a href="../log4j-jcl.html" title="Commons Logging Bridge"><span class="none"></span>Commons Logging Bridge</a></li>
<li><a href="../log4j-1.2-api.html" title="Log4j 1.2 API"><span class="none"></span>Log4j 1.2 API</a></li>
<li><a href="../log4j-slf4j-impl.html" title="SLF4J Binding"><span class="none"></span>SLF4J Binding</a></li>
<li><a href="../log4j-jul.html" title="JUL Adapter"><span class="none"></span>JUL Adapter</a></li>
<li><a href="../log4j-jpl.html" title="JDK Platform Logger"><span class="none"></span>JDK Platform Logger</a></li>
<li><a href="../log4j-to-slf4j.html" title="Log4j 2 to SLF4J Adapter"><span class="none"></span>Log4j 2 to SLF4J Adapter</a></li>
<li><a href="../log4j-flume-ng.html" title="Apache Flume Appender"><span class="none"></span>Apache Flume Appender</a></li>
<li><a href="../log4j-taglib.html" title="Log4j Tag Library"><span class="none"></span>Log4j Tag Library</a></li>
<li><a href="../log4j-jmx-gui.html" title="Log4j JMX GUI"><span class="none"></span>Log4j JMX GUI</a></li>
<li><a href="../log4j-web.html" title="Log4j Web Application Support"><span class="none"></span>Log4j Web Application Support</a></li>
<li><a href="../log4j-jakarta-web.html" title="Log4j Jakarta Web Application Support"><span class="none"></span>Log4j Jakarta Web Application Support</a></li>
<li><a href="../log4j-appserver.html" title="Log4j Application Server Integration"><span class="none"></span>Log4j Application Server Integration</a></li>
<li><a href="../log4j-couchdb.html" title="Log4j CouchDB appender"><span class="none"></span>Log4j CouchDB appender</a></li>
<li><a href="../log4j-mongodb3.html" title="Log4j MongoDB3 appender"><span class="none"></span>Log4j MongoDB3 appender</a></li>
<li><a href="../log4j-mongodb4.html" title="Log4j MongoDB4 appender"><span class="none"></span>Log4j MongoDB4 appender</a></li>
<li><a href="../log4j-cassandra.html" title="Log4j Cassandra appender"><span class="none"></span>Log4j Cassandra appender</a></li>
<li><a href="../log4j-iostreams.html" title="Log4j IO Streams"><span class="none"></span>Log4j IO Streams</a></li>
<li><a href="../log4j-docker.html" title="Log4j Docker Support"><span class="none"></span>Log4j Docker Support</a></li>
<li><a href="../log4j-kubernetes.html" title="Log4j Kubernetes Support"><span class="none"></span>Log4j Kubernetes Support</a></li>
<li><a href="../log4j-spring-boot.html" title="Log4j Spring Boot"><span class="none"></span>Log4j Spring Boot</a></li>
<li><a href="../log4j-spring-cloud-config-client.html" title="Log4j Spring Cloud Config Client"><span class="none"></span>Log4j Spring Cloud Config Client</a></li>
<li class="nav-header"><img class="imageLink" src="../img/glyphicons/tag.png" alt="Related Projects" style="border: 0;" /> Related Projects</li>
<li><a href="../../../chainsaw/2.x/index.html" title="Chainsaw"><span class="none"></span>Chainsaw</a></li>
<li><a href="../../../log4cxx/latest_stable/index.html" title="Log4Cxx"><span class="none"></span>Log4Cxx</a></li>
<li><a href="../../../log4j-audit/latest/index.html" title="Log4j Audit"><span class="none"></span>Log4j Audit</a></li>
<li><a href="../../kotlin" title="Log4j Kotlin"><span class="none"></span>Log4j Kotlin</a></li>
<li><a href="../../scala" title="Log4j Scala"><span class="none"></span>Log4j Scala</a></li>
<li><a href="../../transform" title="Log4j Transform"><span class="none"></span>Log4j Transform</a></li>
<li><a href="../../../log4net/index.html" title="Log4Net"><span class="none"></span>Log4Net</a></li>
<li class="nav-header"><img class="imageLink" src="../img/glyphicons/link.png" alt="Legacy Sites" style="border: 0;" /> Legacy Sites</li>
<li><a href="../../log4j-2.12.4/" title="Log4j 2.12.4 - Java 7"><span class="none"></span>Log4j 2.12.4 - Java 7</a></li>
<li><a href="../../log4j-2.3.2/" title="Log4j 2.3.2 - Java 6"><span class="none"></span>Log4j 2.3.2 - Java 6</a></li>
<li><a href="../../1.2/" title="Log4j 1.2 - End of Life"><span class="none"></span>Log4j 1.2 - End of Life</a></li>
<li class="nav-header"><img class="imageLink" src="../img/glyphicons/info.png" alt="Project Information" style="border: 0;" /> Project Information</li>
<li><a href="../team.html" title="Project Team"><span class="none"></span>Project Team</a></li>
<li><a href="https://www.apache.org/licenses/LICENSE-2.0" class="externalLink" title="Project License"><span class="none"></span>Project License</a></li>
<li><a href="https://github.com/apache/logging-log4j2" class="externalLink" title="Source Repository"><span class="none"></span>Source Repository</a></li>
<li><a href="../runtime-dependencies.html" title="Runtime Dependencies"><span class="none"></span>Runtime Dependencies</a></li>
<li><a href="../javadoc.html" title="Javadoc"><span class="none"></span>Javadoc</a></li>
<li><a href="../thanks.html" title="Thanks"><span class="none"></span>Thanks</a></li>
</ul>
</nav>
<div class="well sidebar-nav">
<div id="poweredBy">
<div class="clear"></div>
<div class="clear"></div>
<div class="clear"></div>
<a href="http://maven.apache.org/" title="Built by Maven" class="poweredBy"><img class="builtBy" alt="Built by Maven" src="../images/logos/maven-feather.png" /></a>
</div>
</div>
</header>
<main id="bodyColumn" class="span10" >
<!--
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements. See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
The ASF licenses this file to You under the Apache License, Version 2.0
(the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<h1>Using Log4j in Cloud Enabled Applications</h1><section>
<h2><a name="The_Twelve-Factor_Application"></a>The Twelve-Factor Application</h2>
<p>The Logging Guidelines for <a class="externalLink" href="https://12factor.net/logs">The Twelve-Factor App</a> state that all logs should be routed
unbuffered to stdout. Since this is the least common denominator, it is guaranteed to work for all applications. However,
as with any set of general guidelines, choosing the least common denominator approach comes at a cost. Some of the costs
in Java applications include:</p>
<ol style="list-style-type: decimal">
<li>Java stack traces are multi-line log messages. The standard docker log driver cannot handle these properly. See
<a class="externalLink" href="https://github.com/moby/moby/issues/22920">Docker Issue #22920</a>, which was closed with the message &#x201c;Don't Care&#x201d;.
Solutions for this are to:
a. use a docker log driver that does support multi-line log messages,
b. use a logging format that does not produce multi-line messages, or
c. log from Log4j directly to a logging forwarder or aggregator, bypassing the docker logging driver.</li>
<li>When logging to stdout in Docker, log events pass through Java's standard output handling, which is then directed
to the operating system so that the output can be piped into a file. All of this overhead makes logging
to stdout measurably slower than writing directly to a file; in the benchmark results below, logging
to stdout is 16-20 times slower over repeated runs than logging directly to a file. The results below were obtained by
running the <a class="externalLink" href="https://github.com/apache/logging-log4j2/blob/2.x/log4j-perf/src/main/java/org/apache/logging/log4j/perf/jmh/OutputBenchmark.java">Output Benchmark</a>
on a 2018 MacBook Pro with a 2.9GHz Intel Core i9 processor and a 1TB SSD. However, these results alone would not be
enough to argue against writing to the standard output stream as they only amount to about 14-25 microseconds
per logging call vs 1.5 microseconds when writing to the file.
<div class="source"><pre class="prettyprint"><code> Benchmark Mode Cnt Score Error Units
OutputBenchmark.console thrpt 20 39291.885 &#xb1; 3370.066 ops/s
OutputBenchmark.file thrpt 20 654584.309 &#xb1; 59399.092 ops/s
OutputBenchmark.redirect thrpt 20 70284.576 &#xb1; 7452.167 ops/s
</code></pre></div></li>
<li>When performing audit logging using a framework such as log4j-audit guaranteed delivery of the audit events
is required. Many of the options for writing the output, including writing to the standard output stream, do
not guarantee delivery. In these cases the event must be delivered to a &#x201c;forwarder&#x201d; that acknowledges receipt
only when it has placed the event in durable storage, such as what <a class="externalLink" href="https://flume.apache.org/">Apache Flume</a>
or <a class="externalLink" href="https://kafka.apache.org/">Apache Kafka</a> will do.</li>
</ol></section><section>
<h2><a name="Logging_Approaches"></a>Logging Approaches</h2>
<p>All the solutions discussed on this page are predicated on the idea that log files cannot permanently
reside on the file system and that all log events should be routed to one or more log analysis tools that will
be used for reporting and alerting. There are many ways to forward and collect events to be sent to the
log analysis tools.</p>
<p>Note that any approach that bypasses Docker's logging drivers requires Log4j's
<a href="lookups.html#DockerLookup">Docker Lookup</a> to allow Docker attributes to be injected into the log events.</p><section>
<h3><a name="Logging_to_the_Standard_Output_Stream"></a>Logging to the Standard Output Stream</h3>
<p>As discussed above, this is the recommended 12-Factor approach for applications running in a docker container.
The Log4j team does not recommend this approach for performance reasons.</p>
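<p>For reference, logging to the standard output stream only requires a Console appender. A minimal sketch (the appender name and pattern are illustrative):</p>
<div class="source"><pre class="prettyprint"><code>&lt;Console name=&quot;Stdout&quot; target=&quot;SYSTEM_OUT&quot;&gt;
  &lt;PatternLayout pattern=&quot;%d [%t] %-5p %c{1.} - %m%n&quot;/&gt;
&lt;/Console&gt;
</code></pre></div>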
<p><img src="../images/DockerStdout.png" alt="Stdout" title="Application Logging to the Standard Output Stream" /></p></section><section>
<h3><a name="Logging_to_the_Standard_Output_Stream_with_the_Docker_Fluentd_Logging_Driver"></a>Logging to the Standard Output Stream with the Docker Fluentd Logging Driver</h3>
<p>Docker provides alternate <a class="externalLink" href="https://docs.docker.com/config/containers/logging/configure/">logging drivers</a>,
such as <a class="externalLink" href="https://docs.docker.com/config/containers/logging/gelf/">gelf</a> or
<a class="externalLink" href="https://docs.docker.com/config/containers/logging/fluentd/">fluentd</a>, that
can be used to redirect the standard output stream to a log forwarder or log aggregator.</p>
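<p>For example, a container could be started with the fluentd driver roughly as follows (the forwarder address, tag, and image name are illustrative):</p>
<div class="source"><pre class="prettyprint"><code>docker run --log-driver=fluentd \
    --log-opt fluentd-address=localhost:24224 \
    --log-opt tag=&quot;docker.{{.Name}}&quot; \
    my-app:latest
</code></pre></div>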
<p>When routing to a log forwarder it is expected that the forwarder will have the same lifetime as the
application. Should the forwarder fail, the management tools would be expected to also terminate the
other containers that depend on the forwarder.</p>
<p><img src="../images/DockerFluentd.png" alt="Docker Fluentbit" title="Logging via StdOut using the Docker Fluentd Logging Driver to Fluent-bit" /></p>
<p>As an alternative, the logging drivers could be configured to route events directly to a logging aggregator.
This is generally not a good idea, as the logging drivers only allow a single host and port to be configured.
The docker documentation isn't clear, but it implies that log events will be dropped when they cannot be
delivered, so this method should not be used if a highly available solution is required.</p>
<p><img src="../images/DockerFluentdAggregator.png" alt="Docker Fluentd" title="Logging via StdOut using the Docker Fluentd Logging Driver to Fluentd" /></p></section><section>
<h3><a name="Logging_to_a_File"></a>Logging to a File</h3>
<p>While this is not the recommended 12-Factor approach, it performs very well. However, it requires that the
application declare a volume where the log files will reside and that the log forwarder be configured to tail
those files. Care must also be taken to automatically manage the disk space used for the logs, which Log4j
can perform via the &#x201c;Delete&#x201d; action on the <a href="appenders.html#RollingFileAppender">RollingFileAppender</a>.</p>
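<p>A sketch of such a configuration, with illustrative paths and limits, where the Delete action removes the oldest archives once their accumulated size exceeds a limit:</p>
<div class="source"><pre class="prettyprint"><code>&lt;RollingFile name=&quot;File&quot; fileName=&quot;/var/log/app/app.log&quot;
             filePattern=&quot;/var/log/app/app-%d{yyyy-MM-dd}-%i.log.gz&quot;&gt;
  &lt;JsonTemplateLayout eventTemplateUri=&quot;classpath:EcsLayout.json&quot;/&gt;
  &lt;Policies&gt;
    &lt;SizeBasedTriggeringPolicy size=&quot;100 MB&quot;/&gt;
  &lt;/Policies&gt;
  &lt;DefaultRolloverStrategy&gt;
    &lt;Delete basePath=&quot;/var/log/app&quot; maxDepth=&quot;1&quot;&gt;
      &lt;IfFileName glob=&quot;app-*.log.gz&quot;/&gt;
      &lt;IfAccumulatedFileSize exceeds=&quot;1 GB&quot;/&gt;
    &lt;/Delete&gt;
  &lt;/DefaultRolloverStrategy&gt;
&lt;/RollingFile&gt;
</code></pre></div>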
<p><img src="../images/DockerLogFile.png" alt="File" title="Logging to a File" /></p></section><section>
<h3><a name="Sending_Directly_to_a_Log_Forwarder_via_TCP"></a>Sending Directly to a Log Forwarder via TCP</h3>
<p>Sending logs directly to a log forwarder is simple, as it generally only requires that the forwarder's
host and port be configured on a SocketAppender with an appropriate layout.</p>
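<p>A minimal sketch, assuming the forwarder's host and port are supplied as system properties:</p>
<div class="source"><pre class="prettyprint"><code>&lt;Socket name=&quot;Forwarder&quot; host=&quot;${sys:forwarder.host}&quot; port=&quot;${sys:forwarder.port}&quot;
        protocol=&quot;tcp&quot; bufferedIo=&quot;true&quot;&gt;
  &lt;JsonTemplateLayout eventTemplateUri=&quot;classpath:EcsLayout.json&quot;/&gt;
&lt;/Socket&gt;
</code></pre></div>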
<p><img src="../images/DockerTCP.png" alt="TCP" title="Application Logging to a Forwarder via TCP" /></p></section><section>
<h3><a name="Sending_Directly_to_a_Log_Aggregator_via_TCP"></a>Sending Directly to a Log Aggregator via TCP</h3>
<p>Similar to sending logs to a forwarder, logs can also be sent to a cluster of aggregators. However,
setting this up is not as simple since, to be highly available, a cluster of aggregators must be used,
while the SocketAppender can currently only be configured with a single host and port. To allow
for failover if the primary aggregator fails, the SocketAppender must be enclosed in a
<a href="appenders.html#FailoverAppender">FailoverAppender</a>,
which would also have the secondary aggregator configured. Another option is to have the SocketAppender
point to a highly available proxy that can forward to the log aggregator.</p>
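<p>A sketch of the failover arrangement, assuming SocketAppenders named &quot;Primary&quot; and &quot;Secondary&quot; have been defined for the two aggregators:</p>
<div class="source"><pre class="prettyprint"><code>&lt;Failover name=&quot;Aggregators&quot; primary=&quot;Primary&quot;&gt;
  &lt;Failovers&gt;
    &lt;AppenderRef ref=&quot;Secondary&quot;/&gt;
  &lt;/Failovers&gt;
&lt;/Failover&gt;
</code></pre></div>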
<p>If the log aggregator used is Apache Flume or Apache Kafka (or similar), the appenders for these support
being configured with a list of hosts and ports, so high availability is not an issue.</p>
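<p>For example, the Kafka appender accepts a comma-separated list of bootstrap servers (the topic and hosts here are illustrative):</p>
<div class="source"><pre class="prettyprint"><code>&lt;Kafka name=&quot;Kafka&quot; topic=&quot;app-logs&quot;&gt;
  &lt;JsonTemplateLayout eventTemplateUri=&quot;classpath:EcsLayout.json&quot;/&gt;
  &lt;Property name=&quot;bootstrap.servers&quot;&gt;kafka-1:9092,kafka-2:9092,kafka-3:9092&lt;/Property&gt;
&lt;/Kafka&gt;
</code></pre></div>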
<p><img src="../images/LoggerAggregator.png" alt="Aggregator" title="Application Logging to an Aggregator via TCP" /></p></section></section><section>
<h2><a name="Logging_using_Elasticsearch.2C_Logstash.2C_and_Kibana"></a><a name="ELK"></a>Logging using Elasticsearch, Logstash, and Kibana</h2>
<p>There are various approaches with different trade-offs for ingesting logs into
an ELK stack. Here we will briefly cover how one can forward Log4j generated
events first to Logstash and then to Elasticsearch.</p><section>
<h3><a name="Log4j_Configuration"></a>Log4j Configuration</h3></section><section>
<h3><a name="JsonTemplateLayout"></a>JsonTemplateLayout</h3>
<p>Log4j provides a multitude of JSON generating layouts. In particular, <a href="layouts.html#JSONTemplateLayout">JSON
Template Layout</a> allows full schema
customization and bundles ELK-specific templates by default, which makes it a
great fit. Using the EcsLayout template as shown below will generate data in Kibana where
the message displayed exactly matches the message passed to Log4j and most of the event attributes, including
any exceptions, are present as individual attributes that can be displayed. Note, however, that stack traces
will be formatted without newlines.</p>
<div class="source"><pre class="prettyprint"><code>&lt;Socket name=&quot;Logstash&quot;
host=&quot;${sys:logstash.host}&quot;
port=&quot;12345&quot;
protocol=&quot;tcp&quot;
bufferedIo=&quot;true&quot;&gt;
&lt;JsonTemplateLayout eventTemplateUri=&quot;classpath:EcsLayout.json&quot;&gt;
&lt;EventTemplateAdditionalField key=&quot;containerId&quot; value=&quot;${docker:containerId:-}&quot;/&gt;
&lt;EventTemplateAdditionalField key=&quot;application&quot; value=&quot;${lower:${spring:spring.application.name:-spring}}&quot;/&gt;
&lt;EventTemplateAdditionalField key=&quot;kubernetes.serviceAccountName&quot; value=&quot;${k8s:accountName:-}&quot;/&gt;
&lt;EventTemplateAdditionalField key=&quot;kubernetes.containerId&quot; value=&quot;${k8s:containerId:-}&quot;/&gt;
&lt;EventTemplateAdditionalField key=&quot;kubernetes.containerName&quot; value=&quot;${k8s:containerName:-}&quot;/&gt;
&lt;EventTemplateAdditionalField key=&quot;kubernetes.host&quot; value=&quot;${k8s:host:-}&quot;/&gt;
&lt;EventTemplateAdditionalField key=&quot;kubernetes.labels.app&quot; value=&quot;${k8s:labels.app:-}&quot;/&gt;
&lt;EventTemplateAdditionalField key=&quot;kubernetes.labels.pod-template-hash&quot; value=&quot;${k8s:labels.podTemplateHash:-}&quot;/&gt;
&lt;EventTemplateAdditionalField key=&quot;kubernetes.master_url&quot; value=&quot;${k8s:masterUrl:-}&quot;/&gt;
&lt;EventTemplateAdditionalField key=&quot;kubernetes.namespaceId&quot; value=&quot;${k8s:namespaceId:-}&quot;/&gt;
&lt;EventTemplateAdditionalField key=&quot;kubernetes.namespaceName&quot; value=&quot;${k8s:namespaceName:-}&quot;/&gt;
&lt;EventTemplateAdditionalField key=&quot;kubernetes.podID&quot; value=&quot;${k8s:podId:-}&quot;/&gt;
&lt;EventTemplateAdditionalField key=&quot;kubernetes.podIP&quot; value=&quot;${k8s:podIp:-}&quot;/&gt;
&lt;EventTemplateAdditionalField key=&quot;kubernetes.podName&quot; value=&quot;${k8s:podName:-}&quot;/&gt;
&lt;EventTemplateAdditionalField key=&quot;kubernetes.imageId&quot; value=&quot;${k8s:imageId:-}&quot;/&gt;
&lt;EventTemplateAdditionalField key=&quot;kubernetes.imageName&quot; value=&quot;${k8s:imageName:-}&quot;/&gt;
&lt;/JsonTemplateLayout&gt;
&lt;/Socket&gt;
</code></pre></div><section>
<h4><a name="Gelft_Template"></a>GELF Template</h4>
<p>The JsonTemplateLayout can also be used to generate JSON that matches the GELF specification which can format
the message attribute using a pattern in accordance with the PatternLayout. For example, the following
template, named EnhancedGelf.json, can be used to generate GELF-compliant data that can be passed to Logstash.
With this template the message attribute will include the thread id, level, specific ThreadContext attributes,
the class name, method name, and line number as well as the message. If an exception is included it will also
be included with newlines. This format closely follows what you would see in a typical log file on disk
using the PatternLayout, but has the additional advantage of including the attributes as separate fields that
can be queried.</p>
<div class="source"><pre class="prettyprint"><code>{
&quot;version&quot;: &quot;1.1&quot;,
&quot;host&quot;: &quot;${hostName}&quot;,
&quot;short_message&quot;: {
&quot;$resolver&quot;: &quot;message&quot;,
&quot;stringified&quot;: true
},
&quot;full_message&quot;: {
&quot;$resolver&quot;: &quot;message&quot;,
&quot;pattern&quot;: &quot;[%t] %-5p %X{requestId, sessionId, loginId, userId, ipAddress, corpAcctNumber} %C{1.}.%M:%L - %m&quot;,
&quot;stringified&quot;: true
},
&quot;timestamp&quot;: {
&quot;$resolver&quot;: &quot;timestamp&quot;,
&quot;epoch&quot;: {
&quot;unit&quot;: &quot;secs&quot;
}
},
&quot;level&quot;: {
&quot;$resolver&quot;: &quot;level&quot;,
&quot;field&quot;: &quot;severity&quot;,
&quot;severity&quot;: {
&quot;field&quot;: &quot;code&quot;
}
},
&quot;_logger&quot;: {
&quot;$resolver&quot;: &quot;logger&quot;,
&quot;field&quot;: &quot;name&quot;
},
&quot;_thread&quot;: {
&quot;$resolver&quot;: &quot;thread&quot;,
&quot;field&quot;: &quot;name&quot;
},
&quot;_mdc&quot;: {
&quot;$resolver&quot;: &quot;mdc&quot;,
&quot;flatten&quot;: {
&quot;prefix&quot;: &quot;_&quot;
},
&quot;stringified&quot;: true
}
}
</code></pre></div>
<p>The logging configuration to use this template would be</p>
<div class="source"><pre class="prettyprint"><code>&lt;Socket name=&quot;Elastic&quot;
host=&quot;${sys:logstash.search.host}&quot;
port=&quot;12222&quot;
protocol=&quot;tcp&quot;
bufferedIo=&quot;true&quot;&gt;
&lt;JsonTemplateLayout eventTemplateUri=&quot;classpath:EnhancedGelf.json&quot; nullEventDelimiterEnabled=&quot;true&quot;&gt;
&lt;EventTemplateAdditionalField key=&quot;containerId&quot; value=&quot;${docker:containerId:-}&quot;/&gt;
&lt;EventTemplateAdditionalField key=&quot;application&quot; value=&quot;${lower:${spring:spring.application.name:-spring}}&quot;/&gt;
&lt;EventTemplateAdditionalField key=&quot;kubernetes.serviceAccountName&quot; value=&quot;${k8s:accountName:-}&quot;/&gt;
&lt;EventTemplateAdditionalField key=&quot;kubernetes.containerId&quot; value=&quot;${k8s:containerId:-}&quot;/&gt;
&lt;EventTemplateAdditionalField key=&quot;kubernetes.containerName&quot; value=&quot;${k8s:containerName:-}&quot;/&gt;
&lt;EventTemplateAdditionalField key=&quot;kubernetes.host&quot; value=&quot;${k8s:host:-}&quot;/&gt;
&lt;EventTemplateAdditionalField key=&quot;kubernetes.labels.app&quot; value=&quot;${k8s:labels.app:-}&quot;/&gt;
&lt;EventTemplateAdditionalField key=&quot;kubernetes.labels.pod-template-hash&quot; value=&quot;${k8s:labels.podTemplateHash:-}&quot;/&gt;
&lt;EventTemplateAdditionalField key=&quot;kubernetes.master_url&quot; value=&quot;${k8s:masterUrl:-}&quot;/&gt;
&lt;EventTemplateAdditionalField key=&quot;kubernetes.namespaceId&quot; value=&quot;${k8s:namespaceId:-}&quot;/&gt;
&lt;EventTemplateAdditionalField key=&quot;kubernetes.namespaceName&quot; value=&quot;${k8s:namespaceName:-}&quot;/&gt;
&lt;EventTemplateAdditionalField key=&quot;kubernetes.podID&quot; value=&quot;${k8s:podId:-}&quot;/&gt;
&lt;EventTemplateAdditionalField key=&quot;kubernetes.podIP&quot; value=&quot;${k8s:podIp:-}&quot;/&gt;
&lt;EventTemplateAdditionalField key=&quot;kubernetes.podName&quot; value=&quot;${k8s:podName:-}&quot;/&gt;
&lt;EventTemplateAdditionalField key=&quot;kubernetes.imageId&quot; value=&quot;${k8s:imageId:-}&quot;/&gt;
&lt;EventTemplateAdditionalField key=&quot;kubernetes.imageName&quot; value=&quot;${k8s:imageName:-}&quot;/&gt;
&lt;/JsonTemplateLayout&gt;
&lt;/Socket&gt;
</code></pre></div>
<p>The significant differences between this configuration and the first example are that it references the
custom template and specifies an event delimiter of a null character (&#x2018;\0&#x2019;).</p>
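<p>On the receiving side, a Logstash pipeline for this configuration might look like the following sketch; the gelf input plugin understands GELF over TCP, and the Elasticsearch host is illustrative:</p>
<div class="source"><pre class="prettyprint"><code>input {
  gelf {
    host =&gt; &quot;0.0.0.0&quot;
    port =&gt; 12222
    use_tcp =&gt; true
  }
}
output {
  elasticsearch {
    hosts =&gt; [&quot;http://elasticsearch:9200&quot;]
  }
}
</code></pre></div>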
<p><b>Note</b>: The level being passed with the above template does not strictly conform to the GELF spec, as the
level being passed is the Log4j level, NOT the level defined in the GELF spec. However, testing has shown
that Logstash, Elasticsearch, and Kibana are quite tolerant of whatever data is passed to them.</p></section><section>
<h4><a name="Custom_Template"></a>Custom Template</h4>
<p>Another option is to use a custom template, possibly based on one of the standard templates. The template
below is loosely based on ECS but a) adds the Spring Boot application name, b) formats the message
using PatternLayout, c) formats MapMessages as event.data attributes while setting the event action based on
any Marker included in the event, and d) includes all the ThreadContext attributes.</p>
<p><b>Note</b>: The JsonTemplateLayout escapes control sequences, so messages that contain &#x2018;\n&#x2019; will have those
control sequences copied as &#x201c;\n&#x201d; into the text rather than converted to a newline character. This bypasses
many problems that occur with log forwarders such as Filebeat and Fluent Bit/Fluentd. Kibana will correctly
interpret these sequences as newlines and display them correctly. Also note that the message pattern does
not contain a timestamp. Kibana will display the timestamp field in its own column, so placing it in the
message would be redundant.</p>
<div class="source"><pre class="prettyprint"><code>{
&quot;@timestamp&quot;: {
&quot;$resolver&quot;: &quot;timestamp&quot;,
&quot;pattern&quot;: {
&quot;format&quot;: &quot;yyyy-MM-dd'T'HH:mm:ss.SSS'Z'&quot;,
&quot;timeZone&quot;: &quot;UTC&quot;
}
},
&quot;ecs.version&quot;: &quot;1.11.0&quot;,
&quot;log.level&quot;: {
&quot;$resolver&quot;: &quot;level&quot;,
&quot;field&quot;: &quot;name&quot;
},
&quot;application&quot;: &quot;${lower:${spring:spring.application.name}}&quot;,
&quot;short_message&quot;: {
&quot;$resolver&quot;: &quot;message&quot;,
&quot;stringified&quot;: true
},
&quot;message&quot;: {
&quot;$resolver&quot;: &quot;pattern&quot;,
&quot;pattern&quot;: &quot;[%t] %X{requestId, sessionId, loginId, userId, ipAddress, accountNumber} %C{1.}.%M:%L - %m%n&quot;
},
&quot;process.thread.name&quot;: {
&quot;$resolver&quot;: &quot;thread&quot;,
&quot;field&quot;: &quot;name&quot;
},
&quot;log.logger&quot;: {
&quot;$resolver&quot;: &quot;logger&quot;,
&quot;field&quot;: &quot;name&quot;
},
&quot;event.action&quot;: {
&quot;$resolver&quot;: &quot;marker&quot;,
&quot;field&quot;: &quot;name&quot;
},
&quot;event.data&quot;: {
&quot;$resolver&quot;: &quot;map&quot;,
&quot;stringified&quot;: true
},
&quot;labels&quot;: {
&quot;$resolver&quot;: &quot;mdc&quot;,
&quot;flatten&quot;: true,
&quot;stringified&quot;: true
},
&quot;tags&quot;: {
&quot;$resolver&quot;: &quot;ndc&quot;
},
&quot;error.type&quot;: {
&quot;$resolver&quot;: &quot;exception&quot;,
&quot;field&quot;: &quot;className&quot;
},
&quot;error.message&quot;: {
&quot;$resolver&quot;: &quot;exception&quot;,
&quot;field&quot;: &quot;message&quot;
},
&quot;error.stack_trace&quot;: {
&quot;$resolver&quot;: &quot;exception&quot;,
&quot;field&quot;: &quot;stackTrace&quot;,
&quot;stackTrace&quot;: {
&quot;stringified&quot;: true
}
}
}
</code></pre></div>
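<p>Such a custom template can be referenced just like the bundled ones. A sketch, assuming the template file is packaged on the classpath as CustomEcsLayout.json:</p>
<div class="source"><pre class="prettyprint"><code>&lt;Socket name=&quot;Elastic&quot; host=&quot;${sys:logstash.search.host}&quot; port=&quot;12222&quot; protocol=&quot;tcp&quot; bufferedIo=&quot;true&quot;&gt;
  &lt;JsonTemplateLayout eventTemplateUri=&quot;classpath:CustomEcsLayout.json&quot; nullEventDelimiterEnabled=&quot;true&quot;/&gt;
&lt;/Socket&gt;
</code></pre></div>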
<p>Finally, the GelfLayout can be used to generate GELF-compliant output. Unlike the JsonTemplateLayout, it
adheres closely to the GELF spec.</p>
<div class="source"><pre class="prettyprint"><code>&lt;Socket name=&quot;Elastic&quot; host=&quot;${sys:elastic.search.host}&quot; port=&quot;12222&quot; protocol=&quot;tcp&quot; bufferedIo=&quot;true&quot;&gt;
&lt;GelfLayout includeStackTrace=&quot;true&quot; host=&quot;${hostName}&quot; includeThreadContext=&quot;true&quot; includeNullDelimiter=&quot;true&quot;
compressionType=&quot;OFF&quot;&gt;
&lt;ThreadContextIncludes&gt;requestId,sessionId,loginId,userId,ipAddress,callingHost&lt;/ThreadContextIncludes&gt;
&lt;MessagePattern&gt;%d [%t] %-5p %X{requestId, sessionId, loginId, userId, ipAddress} %C{1.}.%M:%L - %m%n&lt;/MessagePattern&gt;
&lt;KeyValuePair key=&quot;containerId&quot; value=&quot;${docker:containerId:-}&quot;/&gt;
&lt;KeyValuePair key=&quot;application&quot; value=&quot;${lower:${spring:spring.application.name:-spring}}&quot;/&gt;
&lt;KeyValuePair key=&quot;kubernetes.serviceAccountName&quot; value=&quot;${k8s:accountName:-}&quot;/&gt;
&lt;KeyValuePair key=&quot;kubernetes.containerId&quot; value=&quot;${k8s:containerId:-}&quot;/&gt;
&lt;KeyValuePair key=&quot;kubernetes.containerName&quot; value=&quot;${k8s:containerName:-}&quot;/&gt;
&lt;KeyValuePair key=&quot;kubernetes.host&quot; value=&quot;${k8s:host:-}&quot;/&gt;
&lt;KeyValuePair key=&quot;kubernetes.labels.app&quot; value=&quot;${k8s:labels.app:-}&quot;/&gt;
&lt;KeyValuePair key=&quot;kubernetes.labels.pod-template-hash&quot; value=&quot;${k8s:labels.podTemplateHash:-}&quot;/&gt;
&lt;KeyValuePair key=&quot;kubernetes.master_url&quot; value=&quot;${k8s:masterUrl:-}&quot;/&gt;
&lt;KeyValuePair key=&quot;kubernetes.namespaceId&quot; value=&quot;${k8s:namespaceId:-}&quot;/&gt;
&lt;KeyValuePair key=&quot;kubernetes.namespaceName&quot; value=&quot;${k8s:namespaceName:-}&quot;/&gt;
&lt;KeyValuePair key=&quot;kubernetes.podID&quot; value=&quot;${k8s:podId:-}&quot;/&gt;
&lt;KeyValuePair key=&quot;kubernetes.podIP&quot; value=&quot;${k8s:podIp:-}&quot;/&gt;
&lt;KeyValuePair key=&quot;kubernetes.podName&quot; value=&quot;${k8s:podName:-}&quot;/&gt;
&lt;KeyValuePair key=&quot;kubernetes.imageId&quot; value=&quot;${k8s:imageId:-}&quot;/&gt;
&lt;KeyValuePair key=&quot;kubernetes.imageName&quot; value=&quot;${k8s:imageName:-}&quot;/&gt;
&lt;/GelfLayout&gt;
&lt;/Socket&gt;
</code></pre></div></section><section>
<h4><a name="Logstash_Configuration_with_Gelf"></a>Logstash Configuration with Gelf</h4>
<p>We will configure Logstash to listen on TCP port 12345 for JSON payloads
and then forward them to the console and/or an Elasticsearch server.</p>
<div class="source"><pre class="prettyprint"><code>input {
tcp {
port =&gt; 12345
codec =&gt; &quot;json&quot;
}
}
output {
# (Un)comment for debugging purposes.
# stdout { codec =&gt; rubydebug }
# Modify the hosts value to reflect where elasticsearch is installed.
elasticsearch {
hosts =&gt; [&quot;http://localhost:9200/&quot;]
index =&gt; &quot;app-%{application}-%{+YYYY.MM.dd}&quot;
}
}
</code></pre></div></section><section>
<h4><a name="Logstash_Configuration_with_JsonTemplateLayout"></a>Logstash Configuration with JsonTemplateLayout</h4>
<p>When one of the GELF compliant formats is used, Logstash should be configured as:</p>
<div class="source"><pre class="prettyprint"><code>input {
  gelf {
    host =&gt; &quot;localhost&quot;
    use_tcp =&gt; true
    use_udp =&gt; false
    port =&gt; 12222
    type =&gt; &quot;gelf&quot;
  }
}
filter {
# These are GELF/Syslog logging levels as defined in RFC 3164. Map the integer level to its human readable format.
translate {
field =&gt; &quot;[level]&quot;
destination =&gt; &quot;[levelName]&quot;
dictionary =&gt; {
&quot;0&quot; =&gt; &quot;EMERG&quot;
&quot;1&quot; =&gt; &quot;ALERT&quot;
&quot;2&quot; =&gt; &quot;CRITICAL&quot;
&quot;3&quot; =&gt; &quot;ERROR&quot;
&quot;4&quot; =&gt; &quot;WARN&quot;
&quot;5&quot; =&gt; &quot;NOTICE&quot;
&quot;6&quot; =&gt; &quot;INFO&quot;
&quot;7&quot; =&gt; &quot;DEBUG&quot;
}
}
}
output {
# (Un)comment for debugging purposes
# stdout { codec =&gt; rubydebug }
# Modify the hosts value to reflect where elasticsearch is installed.
elasticsearch {
hosts =&gt; [&quot;http://localhost:9200/&quot;]
index =&gt; &quot;app-%{application}-%{+YYYY.MM.dd}&quot;
}
}
</code></pre></div></section><section>
<h4><a name="Filebeat_configuration_with_JsonTemplateLayout"></a>Filebeat configuration with JsonTemplateLayout</h4>
<p>When using a JsonTemplateLayout that complies with ECS (or is similar to the custom template shown previously),
the Filebeat configuration is straightforward.</p>
<div class="source"><pre class="prettyprint"><code>filebeat.inputs:
- type: log
enabled: true
json.keys_under_root: true
paths:
- /var/log/apps/*.log
</code></pre></div></section></section><section>
<h3><a name="Kibana"></a>Kibana</h3>
<p>Using the EnhancedGelf template, the GelfLayout, or the custom template with the configurations above, the message
field will contain a fully formatted log event just as it would appear in a file Appender. The ThreadContext
attributes, custom fields, thread name, etc. will all be available as attributes on each log event and can
be used for filtering. The result will resemble
<img src="../images/kibana.png" alt="" /></p></section></section><section>
<h2><a name="Managing_Logging_Configuration"></a>Managing Logging Configuration</h2>
<p>Spring Boot provides a least common denominator approach to logging configuration: it lets you set the
log level for various Loggers within an application, and those levels can be updated dynamically via REST endpoints provided
by Spring. While this works in many cases, it does not support the more advanced filtering features of
Log4j. For example, because it cannot add or modify any Filters beyond the log level of a logger, changes cannot be made to
temporarily allow all log events for a specific user or customer to be logged
(see <a href="filters.html#DynamicThresholdFilter">DynamicThresholdFilter</a> or
<a href="filters.html#ThreadContextMapFilter">ThreadContextMapFilter</a>), nor any other kind of change to filters.
Also, in a clustered microservices environment it is quite likely that such changes will need to be propagated
to multiple servers at the same time, which could be difficult to achieve via REST calls.</p>
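<p>As an illustrative sketch (the key and user value here are hypothetical), a DynamicThresholdFilter in the Log4j configuration file can temporarily lower the threshold for one user while everyone else stays at ERROR - precisely the kind of change a per-server REST call cannot express:</p>

```xml
<!-- Sketch: accept everything for the user whose loginId is "testUser";
     all other events fall through to the normal level checks -->
<DynamicThresholdFilter key="loginId" defaultThreshold="ERROR"
                        onMatch="ACCEPT" onMismatch="NEUTRAL">
  <KeyValuePair key="testUser" value="DEBUG"/>
</DynamicThresholdFilter>
```

<p>Because the filter lives in the configuration file, distributing the file distributes the change to every server at once.</p>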
<p>Since its first release Log4j has supported reconfiguration through a file.
Beginning with Log4j 2.12.0 Log4j also supports accessing the configuration via HTTP(S) and monitoring the file
for changes by using the HTTP &#x201c;If-Modified-Since&#x201d; header. A patch has also been integrated into Spring Cloud Config
starting with versions 2.0.3 and 2.1.1 for it to honor the If-Modified-Since header. In addition, the
log4j-spring-cloud-config project will listen for update events published by Spring Cloud Bus and then verify
that the configuration file has been modified, so polling via HTTP is not required.</p>
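<p>For example (the URL is illustrative), the configuration can be fetched over HTTP(S) and monitored for changes; with <code>monitorInterval</code> set, Log4j re-checks the source, sending the If-Modified-Since header for remote sources:</p>

```xml
<!-- Sketch: the application is started with
     -Dlog4j2.configurationFile=https://config-server.example.com/log4j2.xml
     and this configuration is re-checked every 30 seconds -->
<Configuration status="warn" monitorInterval="30">
  <!-- Appenders and Loggers as usual -->
</Configuration>
```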
<p>Log4j also supports composite configurations. A distributed application spread across microservices could
share a common configuration file that could be used to control things like enabling debug logging for a
specific user.</p>
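<p>A composite configuration is created by listing multiple sources in <code>log4j2.configurationFile</code>, with later entries overriding earlier ones (the shared URL below is illustrative):</p>

```
# Sketch: a local base configuration merged with a centrally managed override
java -Dlog4j2.configurationFile=log4j2.xml,https://config.example.com/log4j2-shared.xml -jar app.jar
```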
<p>While the standard Spring Boot REST endpoints to update logging will still work, any changes made by those
REST endpoints will be lost if Log4j reconfigures itself due to changes in the logging configuration file.</p>
<p>Further information regarding integration of the log4j-spring-cloud-config-client can be found at
<a href="../log4j-spring-cloud-config/log4j-spring-cloud-config-client/index.html">Log4j Spring Cloud Config Client</a>.</p></section><section>
<h2><a name="Integration_with_Spring_Boot"></a>Integration with Spring Boot</h2>
<p>Log4j integrates with Spring Boot in two ways:</p>
<ol style="list-style-type: decimal">
<li>A Spring Lookup can be used to access the Spring application configuration from Log4j configuration files.</li>
<li>Log4j will access the Spring configuration when it is trying to resolve Log4j system properties.</li>
</ol>
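<p>For instance (a sketch, using the same Spring Lookup shown in the layouts earlier on this page), a pattern can embed the Spring application name:</p>

```xml
<!-- Sketch: ${spring:spring.application.name} is resolved from the
     Spring environment at runtime -->
<PatternLayout pattern="%d %p ${spring:spring.application.name} [%t] %m%n"/>
```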
<p>Both of these require that the log4j-spring-cloud-config-client jar is included in the application.</p></section><section>
<h2><a name="Integration_with_Docker"></a>Integration with Docker</h2>
<p>Applications within a Docker container that log using a Docker logging driver can include special
attributes in the formatted log event as described at
<a class="externalLink" href="https://docs.docker.com/config/containers/logging/log_tags/">Customize Log Driver Output</a>. Log4j
provides similar functionality via the <a href="lookups.html#DockerLookup">Docker Lookup</a>. More information on
Log4j's Docker support may also be found at <a href="../log4j-docker/index.html">Log4j-Docker</a>.</p></section><section>
<h2><a name="Integration_with_Kubernetes"></a>Integration with Kubernetes</h2>
<p>Applications managed by Kubernetes can bypass the Docker/Kubernetes logging infrastructure and log directly to
either a sidecar forwarder or a logging aggregator cluster while still including all the Kubernetes
attributes by using the Log4j 2 <a href="lookups.html#KubernetesLookup">Kubernetes Lookup</a>. More information on
Log4j's Kubernetes support may also be found at <a href="../log4j-kubernetes/index.html">Log4j-Kubernetes</a>.</p></section><section>
<h2><a name="Appender_Performance"></a>Appender Performance</h2>
<p>The numbers in the table below represent how much time in seconds was required for the application to
call <code>logger.debug(...)</code> 100,000 times. These numbers include only the time taken to deliver to the specifically
noted endpoint and may not include the actual time required before the events are available for viewing. All
measurements were performed on a MacBook Pro with a 2.9GHz Intel Core i9 processor with 6 physical and 12
logical cores, 32GB of 2400 MHz DDR4 RAM, and 1TB of Apple SSD storage. The VM used by Docker was managed
by VMWare Fusion and had 4 CPUs and 2 GB of RAM. These numbers should be used only for relative performance comparisons,
as the results on another system may vary considerably.</p>
<p>The sample application used can be found under the log4j-spring-cloud-config/log4j-spring-cloud-config-samples
directory in the Log4j <a class="externalLink" href="https://github.com/apache/logging-log4j2">source repository</a>.</p>
<table border="0" class="table table-striped">
<thead>
<tr class="a">
<th>Test</th>
<th align="right">1 Thread</th>
<th align="right">2 Threads</th>
<th align="right">4 Threads</th>
<th align="right">8 Threads</th></tr>
</thead><tbody>
<tr class="b">
<td colspan="5" align="left">Flume Avro</td></tr>
<tr class="a">
<td align="left">- Batch Size 1 - JSON</td>
<td align="right">49.11</td>
<td align="right">46.54</td>
<td align="right">46.70</td>
<td align="right">44.92</td></tr>
<tr class="b">
<td align="left">- Batch Size 1 - RFC5424</td>
<td align="right">48.30</td>
<td align="right">45.79</td>
<td align="right">46.31</td>
<td align="right">45.50</td></tr>
<tr class="a">
<td align="left">- Batch Size 100 - JSON</td>
<td align="right">6.33</td>
<td align="right">3.87</td>
<td align="right">3.57</td>
<td align="right">3.84</td></tr>
<tr class="b">
<td align="left">- Batch Size 100 - RFC5424</td>
<td align="right">6.08</td>
<td align="right">3.69</td>
<td align="right">3.22</td>
<td align="right">3.11</td></tr>
<tr class="a">
<td align="left">- Batch Size 1000 - JSON</td>
<td align="right">4.83</td>
<td align="right">3.20</td>
<td align="right">3.02</td>
<td align="right">2.11</td></tr>
<tr class="b">
<td align="left">- Batch Size 1000 - RFC5424</td>
<td align="right">4.70</td>
<td align="right">2.40</td>
<td align="right">2.37</td>
<td align="right">2.37</td></tr>
<tr class="a">
<td colspan="5" align="left">Flume Embedded</td></tr>
<tr class="b">
<td align="left">- RFC5424</td>
<td align="right">3.58</td>
<td align="right">2.10</td>
<td align="right">2.10</td>
<td align="right">2.70</td></tr>
<tr class="a">
<td align="left">- JSON</td>
<td align="right">4.20</td>
<td align="right">2.49</td>
<td align="right">3.53</td>
<td align="right">2.90</td></tr>
<tr class="b">
<td colspan="5" align="left">Kafka Local JSON</td></tr>
<tr class="a">
<td align="left">- syncSend true</td>
<td align="right">58.46</td>
<td align="right">38.55</td>
<td align="right">19.59</td>
<td align="right">19.01</td></tr>
<tr class="b">
<td align="left">- syncSend false</td>
<td align="right">9.8</td>
<td align="right">10.8</td>
<td align="right">12.23</td>
<td align="right">11.36</td></tr>
<tr class="a">
<td colspan="5" align="left">Console</td></tr>
<tr class="b">
<td align="left">- JSON / Kubernetes</td>
<td align="right">3.03</td>
<td align="right">3.11</td>
<td align="right">3.04</td>
<td align="right">2.51</td></tr>
<tr class="a">
<td align="left">- JSON</td>
<td align="right">2.80</td>
<td align="right">2.74</td>
<td align="right">2.54</td>
<td align="right">2.35</td></tr>
<tr class="b">
<td align="left">- Docker fluentd driver</td>
<td align="right">10.65</td>
<td align="right">9.92</td>
<td align="right">10.42</td>
<td align="right">10.27</td></tr>
<tr class="a">
<td colspan="5" align="left">Rolling File</td></tr>
<tr class="b">
<td align="left">- RFC5424</td>
<td align="right">1.65</td>
<td align="right">0.94</td>
<td align="right">1.22</td>
<td align="right">1.55</td></tr>
<tr class="a">
<td align="left">- JSON</td>
<td align="right">1.90</td>
<td align="right">0.95</td>
<td align="right">1.57</td>
<td align="right">1.94</td></tr>
<tr class="b">
<td align="left">TCP - Fluent Bit - JSON</td>
<td align="right">2.34</td>
<td align="right">2.167</td>
<td align="right">1.67</td>
<td align="right">2.50</td></tr>
<tr class="a">
<td colspan="5" align="left">Async Logger</td></tr>
<tr class="b">
<td align="left">- TCP - Fluent Bit - JSON</td>
<td align="right">0.90</td>
<td align="right">0.58</td>
<td align="right">0.36</td>
<td align="right">0.48</td></tr>
<tr class="a">
<td align="left">- Console - JSON</td>
<td align="right">0.83</td>
<td align="right">0.57</td>
<td align="right">0.55</td>
<td align="right">0.61</td></tr>
<tr class="b">
<td align="left">- Flume Avro - 1000 - JSON</td>
<td align="right">0.76</td>
<td align="right">0.37</td>
<td align="right">0.45</td>
<td align="right">0.68</td></tr>
</tbody>
</table>
<p>Notes:</p>
<ol style="list-style-type: decimal">
<li>Flume Avro - Buffering is controlled by the batch size. Each send is complete when the remote
acknowledges the batch was written to its channel. These numbers seem to indicate Flume Avro could
benefit from using a pool of RPCClients, at least for a batchSize of 1.</li>
<li>Flume Embedded - This is essentially asynchronous as it writes to an in-memory buffer. It is
unclear why the performance isn't closer to the AsyncLogger results.</li>
<li>Kafka was run in standalone mode on the same laptop as the application. Setting syncSend to true
requires waiting for an ack from Kafka for each log event.</li>
<li>Console - System.out is redirected to a file by Docker. Testing shows that it would be much
slower if it was writing to the terminal screen.</li>
<li>Rolling File - Test uses the default buffer size of 8K.</li>
<li>TCP to Fluent Bit - The Socket Appender uses a default buffer size of 8K.</li>
<li>Async Loggers - These all write to a circular buffer and return to the application. The actual
I/O takes place on a separate thread. If events are written more slowly than
they are created, the buffer will eventually fill up and logging will proceed at
the same pace that log events are written.</li>
</ol></section><section>
<h2><a name="Logging_Recommendations"></a>Logging Recommendations</h2>
<ol style="list-style-type: decimal">
<li>Use asynchronous logging unless guaranteed delivery is absolutely required. As
the performance numbers show, so long as the volume of logging is not high enough to fill up the
circular buffer, the overhead of logging will be almost unnoticeable to the application.</li>
<li>If overall performance is a consideration, or you require multiline events such as stack traces
to be processed properly, then log via TCP to a companion container that acts as a log forwarder or directly
to a log aggregator as shown above in <a href="#ELK">Logging with ELK</a>. Use the
Log4j Docker Lookup to add the container information to each log event.</li>
<li>Whenever guaranteed delivery is required, use Flume Avro with a batch size of 1 or another Appender, such
as the Kafka Appender with syncSend set to true, that only returns control after the downstream agent
acknowledges receipt of the event. Use of an Appender that writes each event individually should
be kept to a minimum, since it is much slower than sending buffered events.</li>
<li>Logging to files within the container is discouraged. Doing so requires that a volume be declared in
the Docker configuration and that the file be tailed by a log forwarder. However, it performs
better than logging to the standard output stream, so if logging via TCP is not an option and
proper multiline handling is required, consider this option.</li>
</ol></section>
</main>
</div>
</div>
<hr/>
<footer>
<div class="container-fluid">
<div class="row-fluid">
<p align="center">Copyright &copy; 1999-2024 <a class="external" href="https://www.apache.org">The Apache Software Foundation</a>. All Rights Reserved.<br>
Apache Logging, Apache Log4j, Log4j, Apache, the Apache feather logo, and the Apache Logging project logo are trademarks of The Apache Software Foundation.</p>
</div>
</div>
</footer>
<script>
if(anchors) {
anchors.add();
}
</script>
</body>
</html>