| <?xml version="1.0" encoding="UTF-8"?> |
| <rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"> |
| <channel> |
| <title>Apache Accumulo™</title> |
| <description>The Apache Accumulo™ sorted, distributed key/value store is a robust, scalable, high performance data storage and retrieval system. |
| </description> |
| <link>https://accumulo.apache.org/</link> |
| <atom:link href="https://accumulo.apache.org/feed.xml" rel="self" type="application/rss+xml"/> |
| <pubDate>Thu, 02 May 2024 23:41:41 +0000</pubDate> |
| <lastBuildDate>Thu, 02 May 2024 23:41:41 +0000</lastBuildDate> |
| <generator>Jekyll v4.3.3</generator> |
| |
| |
| <item> |
| <title>Does a compactor process return memory to the OS?</title> |
| <description><h2 id="goal">Goal</h2> |
| <p>The goal of this project was to determine whether, once an Accumulo process has finished using memory, the JVM releases that unused memory back to the operating system. This was observed specifically in a Compactor process during the tests, but the findings should apply to any Accumulo server process. We looked at the memory usage of the compactor process specifically to help determine whether oversubscribing compactors on a machine is a viable option.</p> |
| |
| <p>As background, it’s important to note that modern JVMs are expected to release memory back to the operating system, rather than simply growing from the initial heap size (-Xms) to the maximum heap size (-Xmx) and never shrinking. For G1, this behavior was improved by <a href="https://openjdk.org/jeps/346">JEP 346: Promptly Return Unused Committed Memory from G1</a>, delivered in JDK 12, which aims to improve the efficiency of memory usage by actively returning Java heap memory to the operating system when the heap is idle.</p> |
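| |
| <p>As a side illustration (not part of the original tests), the used vs. committed heap sizes can be read from inside the JVM with the standard <code class="language-plaintext highlighter-rouge">MemoryMXBean</code> API; when the collector returns memory to the OS, the committed value shrinks back toward the used value:</p> |
| |
| <div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import java.lang.management.ManagementFactory; |
| import java.lang.management.MemoryUsage; |
| |
| public class HeapCommitted { |
|     public static void main(String[] args) { |
|         // Heap usage as seen by the JVM itself. |
|         MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage(); |
|         // "committed" is the amount the JVM currently holds from the OS; |
|         // JEP 346 is about shrinking this toward "used" when the heap is idle. |
|         System.out.println("used      = " + heap.getUsed()); |
|         System.out.println("committed = " + heap.getCommitted()); |
|         System.out.println("max       = " + heap.getMax()); |
|     } |
| } |
| </code></pre></div></div> |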
| <h3 id="test-scenario">Test Scenario</h3> |
| <p>There could be a scenario where the amount of memory on a machine limits the number of compactors that can be run. For example, on a machine with 32GB of memory, if each compactor process uses 6GB of memory, we can only “fit” 5 compactors on that machine (32/6=5.333). Since each compactor process only runs on a single core, we would only be utilizing 5 cores on that machine where we would like to be using as many as we can.</p> |
| |
| <p>If the compactor process does not return memory to the OS, then we are stuck with only the following number of compactor processes: |
| <code class="language-plaintext highlighter-rouge">(total memory)/(memory per compactor)</code>. |
| If the compactor processes do return memory to the OS, i.e., do not stay at the 6GB maximum once they reach it, then we can oversubscribe the memory, allowing us to run more compactor processes on that machine.</p> |
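| |
| <p>The arithmetic above can be sketched as follows; the 2GB average resident size used for the oversubscribed case is a hypothetical number for illustration, not a measurement from these tests:</p> |
| |
| <div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code>public class CompactorFit { |
|     public static void main(String[] args) { |
|         long totalMemoryGb = 32;  // machine from the example |
|         long perCompactorGb = 6;  // -Xmx given to each compactor |
| |
|         // Without oversubscription: floor(32/6) = 5 processes fit. |
|         System.out.println("fit by max heap: " + totalMemoryGb / perCompactorGb); |
| |
|         // If the JVM returns memory to the OS, sizing by a (hypothetical) |
|         // 2GB average resident size allows many more processes. |
|         long avgResidentGb = 2; |
|         System.out.println("fit by average:  " + totalMemoryGb / avgResidentGb); |
|     } |
| } |
| </code></pre></div></div> |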
| |
| <p>It should be noted that there is an inherent risk to oversubscribing that the user must be willing to accept. In this case, there is the possibility that all compactors run at the same time and use all of the memory on the machine, which could cause one or more of the compactor processes to be killed by the OOM killer.</p> |
| |
| <h2 id="test-setup">Test Setup</h2> |
| |
| <h3 id="environment-prerequisites">Environment Prerequisites</h3> |
| |
| <p>The machines used for testing were running Pop!_OS 22.04, a Debian-based OS. The following package installation and usage steps may vary if one were to repeat them on a different distribution.</p> |
| |
| <h4 id="install-gnuplot">Install gnuplot</h4> |
| |
| <p>gnuplot was used for plotting the memory usage of the compactor over time from the perspective of the OS.</p> |
| |
| <ol> |
| <li><code class="language-plaintext highlighter-rouge">sudo apt install gnuplot</code></li> |
| <li>gnuplot was started with the command <code class="language-plaintext highlighter-rouge">gnuplot</code></li> |
| </ol> |
| |
| <h4 id="install-visualvm">Install VisualVM</h4> |
| |
| <p>VisualVM was used for plotting the memory usage of the compactor over time from the perspective of the JVM.</p> |
| |
| <ol> |
| <li>Downloaded the zip from <a href="https://visualvm.github.io/">visualvm.github.io</a></li> |
| <li>Extracted with <code class="language-plaintext highlighter-rouge">unzip visualvm_218.zip</code></li> |
| <li>VisualVM was started with the command <code class="language-plaintext highlighter-rouge">./path/to/visualvm_218/bin/visualvm</code></li> |
| </ol> |
| |
| <h4 id="configure-and-start-accumulo">Configure and start Accumulo</h4> |
| |
| <p>Accumulo 2.1 was used for experimentation. To stand up a single node instance, <a href="https://github.com/apache/fluo-uno">fluo-uno</a> was used.</p> |
| |
| <p>Steps taken to configure Accumulo to start compactors:</p> |
| |
| <ol> |
| <li>Uncommented the lines in <code class="language-plaintext highlighter-rouge">fluo-uno/install/accumulo-2.1.2/conf/cluster.yaml</code> regarding the compaction coordinator and the compactor queue q1 (a single compactor process, q1, was used). This allows the external compaction processes to start up.</li> |
| <li>Configured the Java args for the compactor process in “accumulo-env.sh” by adding the line: |
| <code class="language-plaintext highlighter-rouge">compactor) JAVA_OPTS=('-Xmx256m' '-Xms256m' "${JAVA_OPTS[@]}") ;;</code></li> |
| <li>Started accumulo with <code class="language-plaintext highlighter-rouge">uno start accumulo</code></li> |
| </ol> |
| |
| <h4 id="install-java-versions">Install Java versions</h4> |
| |
| <ol> |
| <li>Installed java versions 11, 17 and 21. For example, Java 17 was installed with: |
| <ol> |
| <li><code class="language-plaintext highlighter-rouge">sudo apt install openjdk-17-jdk</code></li> |
| <li><code class="language-plaintext highlighter-rouge">sudo update-alternatives --config java</code> and select the intended version before starting the accumulo instance</li> |
| <li>Ensured <code class="language-plaintext highlighter-rouge">JAVA_HOME</code> was set to the intended version of java before each test run</li> |
| </ol> |
| </li> |
| </ol> |
| |
| <h2 id="running-the-test">Running the test</h2> |
| |
| <ol> |
| <li>Started accumulo using <a href="https://github.com/apache/fluo-uno">fluo-uno</a> (after changing the mentioned configuration) |
| <ul> |
| <li><code class="language-plaintext highlighter-rouge">uno start accumulo</code></li> |
| </ul> |
| </li> |
| <li>Opened VisualVM and selected the running compactor q1 process, taking note of its PID</li> |
| <li>Ran <code class="language-plaintext highlighter-rouge">mem_usage_script.sh &lt;compactor process PID&gt;</code>. This collected measurements of memory used by the compactor process over time from the perspective of the OS. We let this continue to run while the compaction script was running.</li> |
| <li>Configured the external compaction script as needed and executed: |
| <ul> |
| <li><code class="language-plaintext highlighter-rouge">uno jshell experiment.jsh</code></li> |
| </ul> |
| </li> |
| <li>Memory usage was monitored from the perspective of the JVM (using VisualVM) and from the perspective of the OS (using our collection script). |
| Navigated to the “Monitor” tab of the compactor in VisualVM to see the graph of memory usage from JVM perspective. |
| Followed the info given in the <a href="#os-memory-data-collection-script">OS Memory Data Collection Script</a> section to plot the memory usage from OS perspective.</li> |
| </ol> |
| |
| <p>Helpful resources:</p> |
| <ul> |
| <li><a href="https://accumulo.apache.org/blog/2021/07/08/external-compactions.html">External Compactions accumulo blog post</a></li> |
| <li><a href="https://docs.oracle.com/en/java/javase/21/gctuning/z-garbage-collector.html#GUID-8637B158-4F35-4E2D-8E7B-9DAEF15BB3CD">Z garbage collector heap size docs</a></li> |
| <li><a href="https://docs.oracle.com/en/java/javase/21/gctuning/garbage-collector-implementation.html#GUID-71D796B3-CBAB-4D80-B5C3-2620E45F6E5D">Generational Garbage Collection docs</a></li> |
| <li><a href="https://docs.oracle.com/en/java/javase/21/gctuning/garbage-first-g1-garbage-collector1.html#GUID-ED3AB6D3-FD9B-4447-9EDF-983ED2F7A573">G1 garbage collector docs</a></li> |
| <li><a href="https://thomas.preissler.me/blog/2021/05/02/release-memory-back-to-the-os-with-java-11">Java 11 and memory release article</a></li> |
| </ul> |
| |
| <h3 id="external-compaction-test-script">External compaction test script</h3> |
| |
| <p>Initiates an external compaction of 700MB of data (20 files of size 35MB) on Compactor q1.</p> |
| |
| <p><strong><em>referred to as experiment.jsh in the test setup section</em></strong></p> |
| |
| <div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kn">import</span> <span class="nn">org.apache.accumulo.core.conf.Property</span><span class="o">;</span> |
| |
| <span class="kt">int</span> <span class="n">dataSize</span> <span class="o">=</span> <span class="mi">35_000_000</span><span class="o">;</span> |
| <span class="kt">byte</span><span class="o">[]</span> <span class="n">data</span> <span class="o">=</span> <span class="k">new</span> <span class="kt">byte</span><span class="o">[</span><span class="n">dataSize</span><span class="o">];</span> |
| <span class="nc">Arrays</span><span class="o">.</span><span class="na">fill</span><span class="o">(</span><span class="n">data</span><span class="o">,</span> <span class="o">(</span><span class="kt">byte</span><span class="o">)</span> <span class="mi">65</span><span class="o">);</span> |
| <span class="nc">String</span> <span class="n">tableName</span> <span class="o">=</span> <span class="s">"testTable"</span><span class="o">;</span> |
| |
| <span class="kt">void</span> <span class="nf">ingestAndCompact</span><span class="o">()</span> <span class="kd">throws</span> <span class="nc">Exception</span> <span class="o">{</span> |
| <span class="k">try</span> <span class="o">{</span> |
| <span class="n">client</span><span class="o">.</span><span class="na">tableOperations</span><span class="o">().</span><span class="na">delete</span><span class="o">(</span><span class="n">tableName</span><span class="o">);</span> |
| <span class="o">}</span> <span class="k">catch</span> <span class="o">(</span><span class="nc">TableNotFoundException</span> <span class="n">e</span><span class="o">)</span> <span class="o">{</span> |
| <span class="c1">// ignore </span> |
| <span class="o">}</span> |
| |
| <span class="nc">System</span><span class="o">.</span><span class="na">out</span><span class="o">.</span><span class="na">println</span><span class="o">(</span><span class="s">"Creating table "</span> <span class="o">+</span> <span class="n">tableName</span><span class="o">);</span> |
| <span class="n">client</span><span class="o">.</span><span class="na">tableOperations</span><span class="o">().</span><span class="na">create</span><span class="o">(</span><span class="n">tableName</span><span class="o">);</span> |
| |
| <span class="c1">// This is done to avoid system compactions, we want to initiate the compactions manually </span> |
| <span class="n">client</span><span class="o">.</span><span class="na">tableOperations</span><span class="o">().</span><span class="na">setProperty</span><span class="o">(</span><span class="n">tableName</span><span class="o">,</span> <span class="nc">Property</span><span class="o">.</span><span class="na">TABLE_MAJC_RATIO</span><span class="o">.</span><span class="na">getKey</span><span class="o">(),</span> <span class="s">"1000"</span><span class="o">);</span> |
| <span class="c1">// Configure for external compaction </span> |
| <span class="n">client</span><span class="o">.</span><span class="na">instanceOperations</span><span class="o">().</span><span class="na">setProperty</span><span class="o">(</span><span class="s">"tserver.compaction.major.service.cs1.planner"</span><span class="o">,</span><span class="s">"org.apache.accumulo.core.spi.compaction.DefaultCompactionPlanner"</span><span class="o">);</span> |
| <span class="n">client</span><span class="o">.</span><span class="na">instanceOperations</span><span class="o">().</span><span class="na">setProperty</span><span class="o">(</span><span class="s">"tserver.compaction.major.service.cs1.planner.opts.executors"</span><span class="o">,</span><span class="s">"[{\"name\":\"large\",\"type\":\"external\",\"queue\":\"q1\"}]"</span><span class="o">);</span> |
| |
| <span class="n">client</span><span class="o">.</span><span class="na">tableOperations</span><span class="o">().</span><span class="na">setProperty</span><span class="o">(</span><span class="n">tableName</span><span class="o">,</span> <span class="s">"table.compaction.dispatcher"</span><span class="o">,</span> <span class="s">"org.apache.accumulo.core.spi.compaction.SimpleCompactionDispatcher"</span><span class="o">);</span> |
| <span class="n">client</span><span class="o">.</span><span class="na">tableOperations</span><span class="o">().</span><span class="na">setProperty</span><span class="o">(</span><span class="n">tableName</span><span class="o">,</span> <span class="s">"table.compaction.dispatcher.opts.service"</span><span class="o">,</span> <span class="s">"cs1"</span><span class="o">);</span> |
| |
| <span class="kt">int</span> <span class="n">numFiles</span> <span class="o">=</span> <span class="mi">20</span><span class="o">;</span> |
| |
| <span class="k">try</span> <span class="o">(</span><span class="kt">var</span> <span class="n">writer</span> <span class="o">=</span> <span class="n">client</span><span class="o">.</span><span class="na">createBatchWriter</span><span class="o">(</span><span class="n">tableName</span><span class="o">))</span> <span class="o">{</span> |
| <span class="k">for</span> <span class="o">(</span><span class="kt">int</span> <span class="n">i</span> <span class="o">=</span> <span class="mi">0</span><span class="o">;</span> <span class="n">i</span> <span class="o">&lt;</span> <span class="n">numFiles</span><span class="o">;</span> <span class="n">i</span><span class="o">++)</span> <span class="o">{</span> |
| <span class="nc">Mutation</span> <span class="n">mut</span> <span class="o">=</span> <span class="k">new</span> <span class="nc">Mutation</span><span class="o">(</span><span class="s">"r"</span> <span class="o">+</span> <span class="n">i</span><span class="o">);</span> |
| <span class="n">mut</span><span class="o">.</span><span class="na">at</span><span class="o">().</span><span class="na">family</span><span class="o">(</span><span class="s">"cf"</span><span class="o">).</span><span class="na">qualifier</span><span class="o">(</span><span class="s">"cq"</span><span class="o">).</span><span class="na">put</span><span class="o">(</span><span class="n">data</span><span class="o">);</span> |
| <span class="n">writer</span><span class="o">.</span><span class="na">addMutation</span><span class="o">(</span><span class="n">mut</span><span class="o">);</span> |
| <span class="n">writer</span><span class="o">.</span><span class="na">flush</span><span class="o">();</span> |
| |
| <span class="nc">System</span><span class="o">.</span><span class="na">out</span><span class="o">.</span><span class="na">println</span><span class="o">(</span><span class="s">"Writing "</span> <span class="o">+</span> <span class="n">dataSize</span> <span class="o">+</span> <span class="s">" bytes to a single value"</span><span class="o">);</span> |
| <span class="n">client</span><span class="o">.</span><span class="na">tableOperations</span><span class="o">().</span><span class="na">flush</span><span class="o">(</span><span class="n">tableName</span><span class="o">,</span> <span class="kc">null</span><span class="o">,</span> <span class="kc">null</span><span class="o">,</span> <span class="kc">true</span><span class="o">);</span> |
| <span class="o">}</span> |
| <span class="o">}</span> |
| |
| <span class="nc">System</span><span class="o">.</span><span class="na">out</span><span class="o">.</span><span class="na">println</span><span class="o">(</span><span class="s">"Compacting table"</span><span class="o">);</span> |
| <span class="n">client</span><span class="o">.</span><span class="na">tableOperations</span><span class="o">().</span><span class="na">compact</span><span class="o">(</span><span class="n">tableName</span><span class="o">,</span> <span class="k">new</span> <span class="nc">CompactionConfig</span><span class="o">().</span><span class="na">setWait</span><span class="o">(</span><span class="kc">true</span><span class="o">));</span> |
| <span class="nc">System</span><span class="o">.</span><span class="na">out</span><span class="o">.</span><span class="na">println</span><span class="o">(</span><span class="s">"Finished table compaction"</span><span class="o">);</span> |
| <span class="o">}</span> |
| |
| <span class="n">ingestAndCompact</span><span class="o">();</span> |
| <span class="c1">// Optionally sleep and ingestAndCompact() again, or just execute the script again.</span> |
| </code></pre></div></div> |
| |
| <h3 id="os-memory-data-collection-script">OS Memory Data Collection Script</h3> |
| |
| <p>Tracks the Resident Set Size (RSS) of the given PID over time, outputting the data to output_mem_usage.log. |
| Data is taken every 5 seconds for an hour or until stopped.</p> |
| |
| <p><strong><em>referred to as mem_usage_script.sh in the test setup section</em></strong></p> |
| |
| <div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c">#!/bin/bash </span> |
| <span class="nv">PID</span><span class="o">=</span><span class="nv">$1</span> |
| <span class="nb">echo</span> <span class="s2">"Tracking PID: </span><span class="nv">$PID</span><span class="s2">"</span> |
| <span class="nv">DURATION</span><span class="o">=</span>3600 <span class="c"># for 1 hour </span> |
| <span class="nv">INTERVAL</span><span class="o">=</span>5 <span class="c"># every 5 seconds </span> |
| <span class="nb">rm</span> <span class="nt">-f</span> output_mem_usage.log |
| |
| <span class="k">while</span> <span class="o">[</span> <span class="nv">$DURATION</span> <span class="nt">-gt</span> 0 <span class="o">]</span><span class="p">;</span> <span class="k">do |
| </span>ps <span class="nt">-o</span> %mem,rss <span class="nt">-p</span> <span class="nv">$PID</span> | <span class="nb">tail</span> <span class="nt">-n</span> +2 <span class="o">&gt;&gt;</span> output_mem_usage.log |
| <span class="nb">sleep</span> <span class="nv">$INTERVAL</span> |
| <span class="nv">DURATION</span><span class="o">=</span><span class="k">$((</span>DURATION <span class="o">-</span> INTERVAL<span class="k">))</span> |
| <span class="k">done</span> |
| </code></pre></div></div> |
| |
| <p>After compactions have completed, plot the data using gnuplot:</p> |
| |
| <div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>gnuplot |
| <span class="nb">set </span>title <span class="s2">"Resident Set Size (RSS) Memory usage"</span> |
| <span class="nb">set </span>xlabel <span class="s2">"Time"</span> |
| <span class="nb">set </span>ylabel <span class="s2">"Mem usage in kilobytes"</span> |
| plot <span class="s2">"output_mem_usage.log"</span> using <span class="o">(</span><span class="nv">$0</span><span class="k">*</span>5<span class="o">)</span>:2 with lines title <span class="s1">'Mem usage'</span> |
| </code></pre></div></div> |
| |
| <h2 id="data">Data</h2> |
| |
| <p>Important Notes:</p> |
| <ul> |
| <li>ZGC and G1PeriodicGCInterval are not available in Java 11, so they could not be tested there</li> |
| <li>ZGenerational for ZGC is only available in Java 21, so it could not be tested in Java 17</li> |
| <li>G1 is the default GC in Java 11, 17, and 21, so it does not need to be specified in the java args</li> |
| </ul> |
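| |
| <p>As an aside, one way to confirm which collector a given run actually used is to list the garbage collector MXBeans from within the process (for example, from a jshell session started on the same JVM): G1 reports beans such as “G1 Young Generation”, while ZGC and Shenandoah report their own names:</p> |
| |
| <div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import java.lang.management.GarbageCollectorMXBean; |
| import java.lang.management.ManagementFactory; |
| |
| public class WhichGc { |
|     public static void main(String[] args) { |
|         // Prints one line per collector bean, e.g. "G1 Young Generation". |
|         for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) { |
|             System.out.println(gc.getName()); |
|         } |
|     } |
| } |
| </code></pre></div></div> |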
| |
| <p>All Experiments Performed:</p> |
| |
| <table> |
| <thead> |
| <tr> |
| <th>Java Version</th> |
| <th>Manual Compaction</th> |
| <th>Xmx=1G</th> |
| <th>Xmx=2G</th> |
| <th>Xms=256m</th> |
| <th>XX:G1PeriodicGCInterval=60000</th> |
| <th>XX:-G1PeriodicGCInvokesConcurrent</th> |
| <th>XX:+UseShenandoahGC</th> |
| <th>XX:+UseZGC</th> |
| <th>XX:ZUncommitDelay=120</th> |
| <th>XX:+ZGenerational</th> |
| </tr> |
| </thead> |
| <tbody> |
| <tr> |
| <td>11</td> |
| <td>🗸</td> |
| <td>🗸</td> |
| <td> </td> |
| <td>🗸</td> |
| <td> </td> |
| <td> </td> |
| <td> </td> |
| <td> </td> |
| <td> </td> |
| <td> </td> |
| </tr> |
| <tr> |
| <td>11</td> |
| <td>🗸</td> |
| <td>🗸</td> |
| <td> </td> |
| <td>🗸</td> |
| <td> </td> |
| <td> </td> |
| <td> </td> |
| <td> </td> |
| <td> </td> |
| <td> </td> |
| </tr> |
| <tr> |
| <td>11</td> |
| <td> </td> |
| <td> </td> |
| <td>🗸</td> |
| <td>🗸</td> |
| <td> </td> |
| <td> </td> |
| <td> </td> |
| <td> </td> |
| <td> </td> |
| <td> </td> |
| </tr> |
| <tr> |
| <td>11</td> |
| <td> </td> |
| <td> </td> |
| <td>🗸</td> |
| <td>🗸</td> |
| <td> </td> |
| <td> </td> |
| <td>🗸</td> |
| <td> </td> |
| <td> </td> |
| <td> </td> |
| </tr> |
| <tr> |
| <td>17</td> |
| <td> </td> |
| <td>🗸</td> |
| <td> </td> |
| <td>🗸</td> |
| <td>🗸</td> |
| <td> </td> |
| <td> </td> |
| <td> </td> |
| <td> </td> |
| <td> </td> |
| </tr> |
| <tr> |
| <td>17</td> |
| <td> </td> |
| <td> </td> |
| <td>🗸</td> |
| <td>🗸</td> |
| <td>🗸</td> |
| <td> </td> |
| <td> </td> |
| <td> </td> |
| <td> </td> |
| <td> </td> |
| </tr> |
| <tr> |
| <td>17</td> |
| <td> </td> |
| <td>🗸</td> |
| <td> </td> |
| <td>🗸</td> |
| <td>🗸</td> |
| <td>🗸</td> |
| <td> </td> |
| <td> </td> |
| <td> </td> |
| <td> </td> |
| </tr> |
| <tr> |
| <td>17</td> |
| <td> </td> |
| <td> </td> |
| <td>🗸</td> |
| <td>🗸</td> |
| <td> </td> |
| <td> </td> |
| <td> </td> |
| <td>🗸</td> |
| <td>🗸</td> |
| <td> </td> |
| </tr> |
| <tr> |
| <td>17</td> |
| <td> </td> |
| <td>🗸</td> |
| <td> </td> |
| <td>🗸</td> |
| <td> </td> |
| <td> </td> |
| <td>🗸</td> |
| <td> </td> |
| <td> </td> |
| <td> </td> |
| </tr> |
| <tr> |
| <td>17</td> |
| <td> </td> |
| <td> </td> |
| <td>🗸</td> |
| <td>🗸</td> |
| <td> </td> |
| <td> </td> |
| <td>🗸</td> |
| <td> </td> |
| <td> </td> |
| <td> </td> |
| </tr> |
| <tr> |
| <td>21</td> |
| <td> </td> |
| <td> </td> |
| <td>🗸</td> |
| <td>🗸</td> |
| <td>🗸</td> |
| <td> </td> |
| <td> </td> |
| <td> </td> |
| <td> </td> |
| <td> </td> |
| </tr> |
| <tr> |
| <td>21</td> |
| <td> </td> |
| <td> </td> |
| <td>🗸</td> |
| <td>🗸</td> |
| <td> </td> |
| <td> </td> |
| <td> </td> |
| <td>🗸</td> |
| <td>🗸</td> |
| <td>🗸</td> |
| </tr> |
| <tr> |
| <td>21</td> |
| <td> </td> |
| <td> </td> |
| <td>🗸</td> |
| <td>🗸</td> |
| <td> </td> |
| <td> </td> |
| <td> </td> |
| <td>🗸</td> |
| <td>🗸</td> |
| <td> </td> |
| </tr> |
| <tr> |
| <td>21</td> |
| <td> </td> |
| <td>🗸</td> |
| <td> </td> |
| <td>🗸</td> |
| <td> </td> |
| <td> </td> |
| <td>🗸</td> |
| <td> </td> |
| <td> </td> |
| <td> </td> |
| </tr> |
| <tr> |
| <td>21</td> |
| <td> </td> |
| <td> </td> |
| <td>🗸</td> |
| <td>🗸</td> |
| <td> </td> |
| <td> </td> |
| <td>🗸</td> |
| <td> </td> |
| <td> </td> |
| <td> </td> |
| </tr> |
| </tbody> |
| </table> |
| |
| <h3 id="java-11-g1-gc-with-manual-gc-via-visualvm-every-minute-java-args--xmx1g--xms256m">Java 11 G1 GC with manual GC (via VisualVM) every minute. Java args: -Xmx1G -Xms256m</h3> |
| <!-- creates a styled box with two images side by side --> |
| <!-- accepts two URLs relative to the project root and two alt text strings --> |
| <div class="p-3 border rounded d-flex"> |
| <a href="/images/blog/202404_compactor_memory/java_11_G1_x1_s256_OS_manualeverymin.png"> |
| <img src="/images/blog/202404_compactor_memory/java_11_G1_x1_s256_OS_manualeverymin.png" class="img-fluid rounded" alt="Graph showing memory usage from the OS perspective" /> |
| </a> |
| <a href="/images/blog/202404_compactor_memory/java_11_G1_x1_s256_VM_manualeverymin.png"> |
| <img src="/images/blog/202404_compactor_memory/java_11_G1_x1_s256_VM_manualeverymin.png" class="img-fluid rounded" alt="Graph showing memory usage from the JVM perspective" /> |
| </a> |
| </div> |
| |
| <h3 id="java-11-g1-gc-with-manual-gc-via-visualvm-after-each-compaction-java-args--xmx1g--xms256m">Java 11 G1 GC with manual GC (via VisualVM) after each compaction. Java args: -Xmx1G -Xms256m</h3> |
| <div class="p-3 border rounded d-flex"> |
| <a href="/images/blog/202404_compactor_memory/java_11_G1_x1_s256_OS_manualaftercomp.png"> |
| <img src="/images/blog/202404_compactor_memory/java_11_G1_x1_s256_OS_manualaftercomp.png" class="img-fluid rounded" alt="Graph showing memory usage from the OS perspective" /> |
| </a> |
| <a href="/images/blog/202404_compactor_memory/java_11_G1_x1_s256_VM_manualaftercomp.png"> |
| <img src="/images/blog/202404_compactor_memory/java_11_G1_x1_s256_VM_manualaftercomp.png" class="img-fluid rounded" alt="Graph showing memory usage from the JVM perspective" /> |
| </a> |
| </div> |
| |
| <h3 id="java-11-g1-gc-java-args--xmx2g--xms256">Java 11 G1 GC. Java args: -Xmx2G -Xms256m</h3> |
| <div class="p-3 border rounded d-flex"> |
| <a href="/images/blog/202404_compactor_memory/java_11_G1_x2_s256_OS.png"> |
| <img src="/images/blog/202404_compactor_memory/java_11_G1_x2_s256_OS.png" class="img-fluid rounded" alt="Graph showing memory usage from the OS perspective" /> |
| </a> |
| <a href="/images/blog/202404_compactor_memory/java_11_G1_x2_s256_VM.png"> |
| <img src="/images/blog/202404_compactor_memory/java_11_G1_x2_s256_VM.png" class="img-fluid rounded" alt="Graph showing memory usage from the JVM perspective" /> |
| </a> |
| </div> |
| |
| <h3 id="java-11-shenandoah-gc-java-args--xmx2g--xms256--xxuseshenandoahgc">Java 11 Shenandoah GC. Java args: -Xmx2G -Xms256m -XX:+UseShenandoahGC</h3> |
| <div class="p-3 border rounded d-flex"> |
| <a href="/images/blog/202404_compactor_memory/java_11_UseShenandoah_x2_s256_OS.png"> |
| <img src="/images/blog/202404_compactor_memory/java_11_UseShenandoah_x2_s256_OS.png" class="img-fluid rounded" alt="Graph showing memory usage from the OS perspective" /> |
| </a> |
| <a href="/images/blog/202404_compactor_memory/java_11_UseShenandoah_x2_s256_VM.png"> |
| <img src="/images/blog/202404_compactor_memory/java_11_UseShenandoah_x2_s256_VM.png" class="img-fluid rounded" alt="Graph showing memory usage from the JVM perspective" /> |
| </a> |
| </div> |
| |
| <h3 id="java-17-g1-gc-java-args--xmx1g--xms256m--xxg1periodicgcinterval60000">Java 17 G1 GC. Java args: -Xmx1G -Xms256m -XX:G1PeriodicGCInterval=60000</h3> |
| <div class="p-3 border rounded d-flex"> |
| <a href="/images/blog/202404_compactor_memory/java_17_G1_x1_s256_periodic60000_OS.png"> |
| <img src="/images/blog/202404_compactor_memory/java_17_G1_x1_s256_periodic60000_OS.png" class="img-fluid rounded" alt="Graph showing memory usage from the OS perspective" /> |
| </a> |
| <a href="/images/blog/202404_compactor_memory/java_17_G1_x1_s256_periodic60000_VM.png"> |
| <img src="/images/blog/202404_compactor_memory/java_17_G1_x1_s256_periodic60000_VM.png" class="img-fluid rounded" alt="Graph showing memory usage from the JVM perspective" /> |
| </a> |
| </div> |
| |
| <h3 id="java-17-g1-gc-java-args--xmx2g--xms256m--xxg1periodicgcinterval60000">Java 17 G1 GC. Java args: -Xmx2G -Xms256m -XX:G1PeriodicGCInterval=60000</h3> |
| <div class="p-3 border rounded d-flex"> |
| <a href="/images/blog/202404_compactor_memory/java_17_G1_x2_s256_periodic60000_OS.png"> |
| <img src="/images/blog/202404_compactor_memory/java_17_G1_x2_s256_periodic60000_OS.png" class="img-fluid rounded" alt="Graph showing memory usage from the OS perspective" /> |
| </a> |
| <a href="/images/blog/202404_compactor_memory/java_17_G1_x2_s256_periodic60000_VM.png"> |
| <img src="/images/blog/202404_compactor_memory/java_17_G1_x2_s256_periodic60000_VM.png" class="img-fluid rounded" alt="Graph showing memory usage from the JVM perspective" /> |
| </a> |
| </div> |
| |
| <h3 id="java-17-g1-gc-java-args--xmx1g--xms256m--xxg1periodicgcinterval60000--xx-g1periodicgcinvokesconcurrent">Java 17 G1 GC. Java args: -Xmx1G -Xms256m -XX:G1PeriodicGCInterval=60000 -XX:-G1PeriodicGCInvokesConcurrent</h3> |
| <div class="p-3 border rounded d-flex"> |
| <a href="/images/blog/202404_compactor_memory/java_17_G1_x1_s256_periodic60000_concurrent_OS.png"> |
| <img src="/images/blog/202404_compactor_memory/java_17_G1_x1_s256_periodic60000_concurrent_OS.png" class="img-fluid rounded" alt="Graph showing memory usage from the OS perspective" /> |
| </a> |
| <a href="/images/blog/202404_compactor_memory/java_17_G1_x1_s256_periodic60000_concurrent_VM.png"> |
| <img src="/images/blog/202404_compactor_memory/java_17_G1_x1_s256_periodic60000_concurrent_VM.png" class="img-fluid rounded" alt="Graph showing memory usage from the JVM perspective" /> |
| </a> |
| </div> |
| |
| <h3 id="java-17-zgc-java-args--xmx2g--xms256m--xxusezgc--xxzuncommitdelay120">Java 17 ZGC. Java args: -Xmx2G -Xms256m -XX:+UseZGC -XX:ZUncommitDelay=120</h3> |
| <div class="p-3 border rounded d-flex"> |
| <a href="/images/blog/202404_compactor_memory/java_17_ZGC_x2_s256_UseZGC_uncommit_OS.png"> |
| <img src="/images/blog/202404_compactor_memory/java_17_ZGC_x2_s256_UseZGC_uncommit_OS.png" class="img-fluid rounded" alt="Graph showing memory usage from the OS perspective" /> |
| </a> |
| <a href="/images/blog/202404_compactor_memory/java_17_ZGC_x2_s256_UseZGC_uncommit_VM.png"> |
| <img src="/images/blog/202404_compactor_memory/java_17_ZGC_x2_s256_UseZGC_uncommit_VM.png" class="img-fluid rounded" alt="Graph showing memory usage from the JVM perspective" /> |
| </a> |
| </div> |
| |
| <h3 id="java-17-shenandoah-gc-java-args--xmx1g--xms256m--xxuseshenandoahgc">Java 17 Shenandoah GC. Java args: -Xmx1G -Xms256m -XX:+UseShenandoahGC</h3> |
| <div class="p-3 border rounded d-flex"> |
| <a href="/images/blog/202404_compactor_memory/java_17_shenandoah_x1_s256_UseShenandoah_OS.png"> |
| <img src="/images/blog/202404_compactor_memory/java_17_shenandoah_x1_s256_UseShenandoah_OS.png" class="img-fluid rounded" alt="Graph showing memory usage from the OS perspective" /> |
| </a> |
| <a href="/images/blog/202404_compactor_memory/java_17_shenandoah_x1_s256_UseShenandoah_VM.png"> |
| <img src="/images/blog/202404_compactor_memory/java_17_shenandoah_x1_s256_UseShenandoah_VM.png" class="img-fluid rounded" alt="Graph showing memory usage from the JVM perspective" /> |
| </a> |
| </div> |
| |
| <h3 id="java-17-shenandoah-gc-java-args--xmx2g--xms256m--xxuseshenandoahgc">Java 17 Shenandoah GC. Java args: -Xmx2G -Xms256m -XX:+UseShenandoahGC</h3> |
| <div class="p-3 border rounded d-flex"> |
| <a href="/images/blog/202404_compactor_memory/java_17_shenandoah_x2_s256_UseShenandoah_OS.png"> |
| <img src="/images/blog/202404_compactor_memory/java_17_shenandoah_x2_s256_UseShenandoah_OS.png" class="img-fluid rounded" alt="Graph showing memory usage from the OS perspective" /> |
| </a> |
| <a href="/images/blog/202404_compactor_memory/java_17_shenandoah_x2_s256_UseShenandoah_VM.png"> |
| <img src="/images/blog/202404_compactor_memory/java_17_shenandoah_x2_s256_UseShenandoah_VM.png" class="img-fluid rounded" alt="Graph showing memory usage from the JVM perspective" /> |
| </a> |
| </div> |
| |
| <h3 id="java-21-g1-gc-java-args--xmx2g--xms256m--xxg1periodicgcinterval60000">Java 21 G1 GC. Java args: -Xmx2G -Xms256m -XX:G1PeriodicGCInterval=60000</h3> |
| <div class="p-3 border rounded d-flex"> |
| <a href="/images/blog/202404_compactor_memory/java_21_G1_x2_s256_periodic60000_OS.png"> |
| <img src="/images/blog/202404_compactor_memory/java_21_G1_x2_s256_periodic60000_OS.png" class="img-fluid rounded" alt="Graph showing memory usage from the OS perspective" /> |
| </a> |
| <a href="/images/blog/202404_compactor_memory/java_21_G1_x2_s256_periodic60000_VM.png"> |
| <img src="/images/blog/202404_compactor_memory/java_21_G1_x2_s256_periodic60000_VM.png" class="img-fluid rounded" alt="Graph showing memory usage from the JVM perspective" /> |
| </a> |
| </div> |
| |
| <h3 id="java-21-zgc-java-args--xmx2g--xms256m--xxusezgc--xxzgenerational--xxzuncommitdelay120">Java 21 ZGC. Java args: -Xmx2G -Xms256m -XX:+UseZGC -XX:+ZGenerational -XX:ZUncommitDelay=120</h3> |
| <div class="p-3 border rounded d-flex"> |
| <a href="/images/blog/202404_compactor_memory/java_21_ZGC_x2_s256_UseZGC_generational_uncommit_OS.png"> |
| <img src="/images/blog/202404_compactor_memory/java_21_ZGC_x2_s256_UseZGC_generational_uncommit_OS.png" class="img-fluid rounded" alt="Graph showing memory usage from the OS perspective" /> |
| </a> |
| <a href="/images/blog/202404_compactor_memory/java_21_ZGC_x2_s256_UseZGC_generational_uncommit_VM.png"> |
| <img src="/images/blog/202404_compactor_memory/java_21_ZGC_x2_s256_UseZGC_generational_uncommit_VM.png" class="img-fluid rounded" alt="Graph showing memory usage from the JVM perspective" /> |
| </a> |
| </div> |
| |
| <h3 id="java-21-zgc-java-args--xmx2g--xms256m--xxusezgc--xxzuncommitdelay120">Java 21 ZGC. Java args: -Xmx2G -Xms256m -XX:+UseZGC -XX:ZUncommitDelay=120</h3> |
| <div class="p-3 border rounded d-flex"> |
| <a href="/images/blog/202404_compactor_memory/java_21_ZGC_x2_s256_UseZGC_uncommit_OS.png"> |
| <img src="/images/blog/202404_compactor_memory/java_21_ZGC_x2_s256_UseZGC_uncommit_OS.png" class="img-fluid rounded" alt="Graph showing memory usage from the OS perspective" /> |
| </a> |
| <a href="/images/blog/202404_compactor_memory/java_21_ZGC_x2_s256_UseZGC_uncommit_VM.png"> |
| <img src="/images/blog/202404_compactor_memory/java_21_ZGC_x2_s256_UseZGC_uncommit_VM.png" class="img-fluid rounded" alt="Graph showing memory usage from the JVM perspective" /> |
| </a> |
| </div> |
| |
| <h3 id="java-21-shenandoah-gc-java-args--xmx1g--xms256m--xxuseshenandoahgc">Java 21 Shenandoah GC. Java args: -Xmx1G -Xms256m -XX:+UseShenandoahGC</h3> |
| <div class="p-3 border rounded d-flex"> |
| <a href="/images/blog/202404_compactor_memory/java_21_shenandoah_x1_s256_UseShenandoah_OS.png"> |
| <img src="/images/blog/202404_compactor_memory/java_21_shenandoah_x1_s256_UseShenandoah_OS.png" class="img-fluid rounded" alt="Graph showing memory usage from the OS perspective" /> |
| </a> |
| <a href="/images/blog/202404_compactor_memory/java_21_shenandoah_x1_s256_UseShenandoah_VM.png"> |
| <img src="/images/blog/202404_compactor_memory/java_21_shenandoah_x1_s256_UseShenandoah_VM.png" class="img-fluid rounded" alt="Graph showing memory usage from the JVM perspective" /> |
| </a> |
| </div> |
| |
| <h3 id="java-21-shenandoah-gc-java-args--xmx2g--xms256m--xxuseshenandoahgc">Java 21 Shenandoah GC. Java args: -Xmx2G -Xms256m -XX:+UseShenandoahGC</h3> |
| <div class="p-3 border rounded d-flex"> |
| <a href="/images/blog/202404_compactor_memory/java_21_shenandoah_x2_s256_UseShenandoah_OS.png"> |
| <img src="/images/blog/202404_compactor_memory/java_21_shenandoah_x2_s256_UseShenandoah_OS.png" class="img-fluid rounded" alt="Graph showing memory usage from the OS perspective" /> |
| </a> |
| <a href="/images/blog/202404_compactor_memory/java_21_shenandoah_x2_s256_UseShenandoah_VM.png"> |
| <img src="/images/blog/202404_compactor_memory/java_21_shenandoah_x2_s256_UseShenandoah_VM.png" class="img-fluid rounded" alt="Graph showing memory usage from the JVM perspective" /> |
| </a> |
| </div> |
| |
| <h2 id="conclusion">Conclusion</h2> |
| <p>All the garbage collectors tested (G1 GC, Shenandoah GC, and ZGC) and all the Java versions tested (11, 17, 21) will release memory that is no longer used by a compactor back to the OS*. Regardless of which GC is used, after an external compaction is done, most (but usually not all) memory is eventually released back to the OS and all memory is released back to the JVM. Although a comparable amount of memory is returned to the OS in each case, the time it takes for the memory to be returned and the amount of memory used during a compaction depend on which garbage collector is used and which parameters are set for the Java process.</p> |
| |
| <p>The amount that is never released back to the OS appears to be minimal and may only be present with G1 GC and Shenandoah GC. In the following graph with Java 17 using G1 GC, the baseline OS memory usage before any compactions are done is a bit less than 400MB. After a compaction completes and garbage collection runs, this baseline settles at about 500MB.</p> |
| |
| <p><a class="p-3 border rounded d-block" href="/images/blog/202404_compactor_memory/java_17_G1_x1_s256_periodic60000_OS.png"> |
| <img src="/images/blog/202404_compactor_memory/java_17_G1_x1_s256_periodic60000_OS.png" class="img-fluid rounded" alt="Graph showing memory usage from the OS perspective" /> |
| </a></p> |
| |
| <p>On the same test run, the JVM perspective (pictured in the graph below) shows that all memory is returned (memory usage drops back down to Xms=256m after garbage collection occurs).</p> |
| |
| <p><a class="p-3 border rounded d-block" href="/images/blog/202404_compactor_memory/java_17_G1_x1_s256_periodic60000_VM.png"> |
| <img src="/images/blog/202404_compactor_memory/java_17_G1_x1_s256_periodic60000_VM.png" class="img-fluid rounded" alt="Graph showing memory usage from the JVM perspective" /> |
| </a></p> |
| |
| <p>The roughly 100MB of unreturned memory is also present with Shenandoah GC in Java 17 and Java 21 but does not appear to be present with Java 11. With ZGC, however, we see several runs where nearly all the memory used during a compaction is returned to the OS (the graph below was from a run using ZGC with Java 21). These findings regarding the unreturned memory may or may not be significant. They may also be the result of variance between runs. More testing would need to be done to confirm or deny these claims.</p> |
| |
| <p><a class="p-3 border rounded d-block" href="/images/blog/202404_compactor_memory/java_21_ZGC_x2_s256_UseZGC_generational_uncommit_OS.png"> |
| <img src="/images/blog/202404_compactor_memory/java_21_ZGC_x2_s256_UseZGC_generational_uncommit_OS.png" class="img-fluid rounded" alt="Graph showing memory usage from the OS perspective" /> |
| </a></p> |
| |
| <p>Another interesting finding was that the processes use more memory when more is allocated. These results were obtained from initiating a compaction of 700MB of data (see experiment.jsh script). For example, setting 2GB versus 1GB of max heap for the compactor process results in a higher peak memory usage. During a compaction, when only allocated 1GB of heap space, the max heap space is not completely utilized; when allocated 2GB, compactions exceed 1GB of heap space used. G1 GC and ZGC appear to use the least heap space during a compaction (maxing out around 1.5GB, or around 1.7GB when using ZGC with ZGenerational in Java 21). Shenandoah GC appears to use the most heap space during a compaction, with a max heap usage around 1.9GB (for Java 11, 17, and 21). However, these differences might be due to outside factors varying between runs, and more testing may need to be done to confirm or deny these claims.</p> |
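The practical upshot for oversubscription can be sketched numerically: if peak per-process memory roughly tracks the configured max heap, the number of compactors that "fit" on a machine is a simple floor division. A minimal sketch (the 32GB machine / 6GB process figures come from the test-scenario example earlier in the post; the `overhead_factor` parameter is an assumption added here to model non-heap overhead):

```python
# Sketch: how many compactor processes fit on a machine, assuming peak
# per-process memory tracks the configured max heap times an assumed
# overhead factor for non-heap memory.

def max_compactors(machine_gb: float, max_heap_gb: float,
                   overhead_factor: float = 1.0) -> int:
    """Floor of machine memory divided by expected peak per-process usage."""
    per_process_gb = max_heap_gb * overhead_factor
    return int(machine_gb // per_process_gb)

# The post's example: a 32GB machine with 6GB per compactor fits 5 processes.
print(max_compactors(32, 6))   # 5
# If a 2GB max heap really is released after each compaction, oversubscribing
# becomes attractive: 32GB machine, 2GB per process fits 16 processes.
print(max_compactors(32, 2))   # 16
```

Whether memory is actually released between compactions, so that these smaller per-process budgets are safe, is exactly what the tests above examine.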
| |
| <p>Another difference found between the GCs tested was that Shenandoah GC sometimes required two garbage collections to occur after a compaction completed to clean up the memory. Based on our experiments, when a larger max heap size was allocated (2GB vs 1GB), the first garbage collection that occurred only cleaned up about half of the now unused memory, and another garbage collection had to occur for the rest to be cleaned up. This was not the case when 1GB of max heap space was allocated (almost all of the unused memory was cleaned up on the first garbage collection, with the rest being cleaned up on the next garbage collection). G1 GC and ZGC always cleaned up the majority of the memory on the first garbage collection.</p> |
| |
| <p>*Note: When using the default GC (G1 GC), garbage collection does not automatically occur unless further garbage collection settings are specified (e.g., G1PeriodicGCInterval).</p> |
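For reference, the flag combinations exercised in the sections above can be collected in one place (these mirror the test headings; treat them as a starting point for experimentation, not a recommendation):

```
# G1 GC (default collector); periodic GC is needed for prompt uncommit
-Xmx2G -Xms256m -XX:G1PeriodicGCInterval=60000

# ZGC (the generational mode is a Java 21 option)
-Xmx2G -Xms256m -XX:+UseZGC -XX:ZUncommitDelay=120
-Xmx2G -Xms256m -XX:+UseZGC -XX:+ZGenerational -XX:ZUncommitDelay=120

# Shenandoah GC
-Xmx2G -Xms256m -XX:+UseShenandoahGC
```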
| </description> |
| <pubDate>Tue, 09 Apr 2024 00:00:00 +0000</pubDate> |
| <link>https://accumulo.apache.org/blog/2024/04/09/does-a-compactor-return-memory-to-OS.html</link> |
| <guid isPermaLink="true">https://accumulo.apache.org/blog/2024/04/09/does-a-compactor-return-memory-to-OS.html</guid> |
| |
| |
| <category>blog</category> |
| |
| </item> |
| |
| <item> |
| <title>Apache Accumulo 1.10.4</title> |
| <description><h2 id="about">About</h2> |
| |
| <p>Apache Accumulo 1.10.4 is the final bug fix release of the 1.10 LTM release |
| line. As of this release, the 1.10 release line is now considered end-of-life. |
| This means that any fixes that are applied because of a bug found in this |
| version will not be applied and released as a new 1.10 patch version, but |
| instead will be applied and released to the currently active release lines, if |
| they apply to those versions.</p> |
| |
| <p>These release notes are highlights of the changes since 1.10.3. The full |
| detailed changes can be seen in the git history. If anything important is |
| missing from this list, please <a href="/contact-us">contact</a> us to have it included.</p> |
| |
| <p>Users of any 1.10 version are encouraged to upgrade to the next LTM release, |
| which is 2.1 at the time of this writing. This patch release is provided as a |
| final release with all the patches the developers have made to 1.10, for |
| anybody who must remain on 1.10 and wants to upgrade from an earlier 1.x |
| version.</p> |
| |
| <h2 id="known-issues">Known Issues</h2> |
| |
| <p>Apache Commons VFS was upgraded in <a href="https://github.com/apache/accumulo/issues/1295">#1295</a> for 1.10.0 and some users have reported |
| issues similar to <a href="https://issues.apache.org/jira/projects/VFS/issues/VFS-683">VFS-683</a>. Possible solutions are discussed in <a href="https://github.com/apache/accumulo/issues/2775">#2775</a>. |
| This issue is applicable to all 1.10 versions.</p> |
| |
| <h2 id="major-improvements">Major Improvements</h2> |
| |
| <ul> |
| <li><a href="https://github.com/apache/accumulo/issues/3391">#3391</a> Drop support for the MapFile file format as an alternative to |
| RFile; the use of MapFiles was already broken, and had been for a long time, |
| so this change was made to cause an explicit and detectable failure, rather |
| than allow a silent one to occur if a MapFile was attempted to be used.</li> |
| <li><a href="https://github.com/apache/accumulo/issues/3703">#3703</a> Add verification checks to improve the reliability of the |
| accumulo-gc, in order to ensure that a full row for a tablet was seen when a |
| file deletion candidate is checked</li> |
| </ul> |
| |
| <h3 id="other-improvements">Other Improvements</h3> |
| |
| <ul> |
| <li><a href="https://github.com/apache/accumulo/issues/3300">#3300</a> Fix the documentation about iterator teardown in the user manual</li> |
| <li><a href="https://github.com/apache/accumulo/issues/3343">#3343</a> Fix errors in the javadoc for Range</li> |
| </ul> |
| |
| <h2 id="note-about-jdk-15">Note About JDK 15</h2> |
| |
| <p>See the note in the 1.10.1 release notes about the use of JDK 15 or later, as |
| the information pertaining to the use of the CMS garbage collector remains |
| applicable to all 1.10 releases.</p> |
| |
| <h2 id="useful-links">Useful Links</h2> |
| |
| <ul> |
| <li><a href="https://github.com/apache/accumulo/compare/rel/1.10.3...apache:rel/1.10.4">All Changes since 1.10.3</a></li> |
| <li><a href="https://github.com/apache/accumulo/issues?q=%20project%3Aapache%2Faccumulo%2F27">GitHub</a> - List of issues tracked on GitHub corresponding to this release</li> |
| </ul> |
| |
| </description> |
| <pubDate>Thu, 16 Nov 2023 00:00:00 +0000</pubDate> |
| <link>https://accumulo.apache.org/release/accumulo-1.10.4/</link> |
| <guid isPermaLink="true">https://accumulo.apache.org/release/accumulo-1.10.4/</guid> |
| |
| |
| <category>release</category> |
| |
| </item> |
| |
| <item> |
| <title>Apache Accumulo 3.0.0</title> |
| <description><h2 id="about">About</h2> |
| |
| <p>Apache Accumulo 3.0.0 is a non-LTM major version release. While it |
| primarily contains the 2.1 codebase, including all patches through |
| 2.1.2, it has also removed a substantial number of deprecated features |
| and code, in an attempt to clean up several years of accrued technical |
| debt, and lower the maintenance burden to make way for future |
| improvements. It also contains a few other minor improvements.</p> |
| |
| <h2 id="notable-removals">Notable Removals</h2> |
| |
| <ul> |
| <li><a href="https://github.com/apache/accumulo/issues/1328">#1328</a> The FileSystem monitor has been removed and will no |
| longer watch for problems with local file systems and self-terminate. |
| System administrators are encouraged to use whatever system health |
| monitoring is appropriate for their deployments, rather than depend on |
| Accumulo to monitor these.</li> |
| <li><a href="https://github.com/apache/accumulo/issues/2443">#2443</a> The MapReduce APIs embedded in the accumulo-core module |
| were removed. The separate <code class="language-plaintext highlighter-rouge">accumulo-hadoop-mapreduce</code> jar is their |
| replacement.</li> |
| <li><a href="https://github.com/apache/accumulo/issues/3073">#3073</a> The legacy Connector and Instance client classes were removed. |
| The AccumuloClient is their replacement.</li> |
| <li><a href="https://github.com/apache/accumulo/issues/3080">#3080</a> The cross-data center replication feature was removed without |
| replacement due to lack of being maintained, having numerous outstanding |
| unfixed issues with no volunteer to maintain it since it was deprecated, and |
| substantial code complexity. The built-in replication table it used for |
| tracking replication metadata will be removed on upgrade.</li> |
| <li><a href="https://github.com/apache/accumulo/issues/3114">#3114</a>, <a href="https://github.com/apache/accumulo/issues/3115">#3115</a>, <a href="https://github.com/apache/accumulo/issues/3116">#3116</a>, <a href="https://github.com/apache/accumulo/issues/3117">#3117</a> Removed |
| deprecated VolumeChooser, TabletBalancer, Constraint, and other APIs, in |
| favor of their SPI replacements.</li> |
| <li><a href="https://github.com/apache/accumulo/issues/3106">#3106</a> Remove deprecated configuration properties (see 2.1 property |
| documentation for which ones were deprecated)</li> |
| <li><a href="https://github.com/apache/accumulo/issues/3112">#3112</a> Remove CompactionStrategy class in favor of CompactionSelector |
| and CompactionConfigurer.</li> |
| <li><a href="https://github.com/apache/accumulo/issues/3160">#3160</a> Remove upgrade code for versions prior to 2.1 (the minimum version |
| to upgrade from is now 2.1).</li> |
| <li><a href="https://github.com/apache/accumulo/issues/3192">#3192</a> Remove arguments to server processes (such as <code class="language-plaintext highlighter-rouge">-a</code>, <code class="language-plaintext highlighter-rouge">-g</code>, |
| <code class="language-plaintext highlighter-rouge">-q</code>, etc.) in favor of configuration properties that can be |
| specified in the Accumulo configuration files or supplied on a per-process |
| basis using the <code class="language-plaintext highlighter-rouge">-o</code> argument. The provided cluster management reference |
| scripts were updated in <a href="https://github.com/apache/accumulo/issues/3197">#3197</a> to use the <code class="language-plaintext highlighter-rouge">-o</code> method.</li> |
| <li><a href="https://github.com/apache/accumulo/issues/3136">#3136</a> Remove the built-in VFS classloader support. To use a custom |
| classloader, users must now set the ContextClassLoaderFactory implementation |
| in the properties. The default is now the URLContextClassLoaderFactory.</li> |
| <li><a href="https://github.com/apache/accumulo/issues/3318">#3318</a> Remove the old bulk import implementation, replaced by the new |
| bulk import API added <a href="https://accumulo.apache.org/release/accumulo-2.0.0/#new-bulk-import-api">in 2.0.0</a>.</li> |
| <li><a href="https://github.com/apache/accumulo/issues/3265">#3265</a> Remove scan interpreter and scan formatter from the shell</li> |
| <li><a href="https://github.com/apache/accumulo/issues/3361">#3361</a> Remove all remaining references to the old “master” service |
| (renamed to “manager”).</li> |
| <li><a href="https://github.com/apache/accumulo/issues/3360">#3360</a> Remove checks and code related to the old password hashing |
| mechanism in Accumulo. This will discontinue warnings about users’ passwords |
| that are still out of date. Instead, those outdated passwords will simply |
| become invalid. If the user authenticated to Accumulo at any time prior to |
| upgrading, their password will have been converted. So this only affects |
| accounts that were never used with 2.1 at all. As mitigation, such users will |
| be able to have their password reset by the root user. If the root user never |
| authenticated (and neither had another admin user) while on 2.1 (very very |
| unlikely), an administrator can reset the entire user database through the |
| normal init step to reset security.</li> |
| <li><a href="https://github.com/apache/accumulo/issues/3378">#3378</a> Remove broken support for old map files. (RFiles have been in |
| use for a long time, so this should not impact any users; if users had been |
| trying to use map files, they would have found that they were broken anyway)</li> |
| </ul> |
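The `-o` mechanism mentioned in the removals above replaces the old positional flags with property overrides supplied at process start. A hedged sketch of what that looks like (the property name used here is only illustrative; any valid Accumulo property can be supplied this way):

```
# Instead of removed flags like -a/-g/-q, pass properties per process:
accumulo tserver -o general.custom.example.property=value
```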
| |
| <h2 id="notable-additions">Notable Additions</h2> |
| |
| <ul> |
| <li><a href="https://github.com/apache/accumulo/issues/3088">#3088</a> New methods were added to compaction-related APIs to share |
| information about the current tablet being compacted to user code</li> |
| <li><a href="https://github.com/apache/accumulo/issues/3107">#3107</a> Decompose internal thrift services by function to make RPC |
| functionality more modular by server instances</li> |
| <li><a href="https://github.com/apache/accumulo/issues/3189">#3189</a> Standardized server lock data structure in ZooKeeper</li> |
| <li><a href="https://github.com/apache/accumulo/issues/3206">#3206</a> Internal caches now use Caffeine instead of Guava’s Cache</li> |
| <li><a href="https://github.com/apache/accumulo/issues/3161">#3161</a>, <a href="https://github.com/apache/accumulo/issues/3288">#3288</a> The internal service (renamed from |
| GarbageCollectionLogger to LowMemoryDetector) that was previously used only |
| to report low memory in servers, was made configurable to allow pausing |
| certain operations like scanning, minor compactions, or major compactions, |
| when memory is low. See the server properties for <code class="language-plaintext highlighter-rouge">general.low.mem.*</code>.</li> |
| </ul> |
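As a hedged illustration of the `general.low.mem.*` property family mentioned in the last item above (the exact property suffixes shown here are assumptions for illustration; consult the server properties documentation for the real names):

```
# Illustrative sketch: pause certain operations under memory pressure
general.low.mem.scan.protection=true
general.low.mem.minc.protection=true
general.low.mem.majc.protection=true
```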
| |
| <h2 id="upgrading">Upgrading</h2> |
| |
| <p>View the <a href="/docs/2.x/administration/upgrading">Upgrading Accumulo documentation</a> for guidance.</p> |
| |
| <h2 id="300-github-project">3.0.0 GitHub Project</h2> |
| |
| <p><a href="https://github.com/apache/accumulo/projects/11">All tickets related to 3.0.0.</a></p> |
| |
| </description> |
| <pubDate>Mon, 21 Aug 2023 00:00:00 +0000</pubDate> |
| <link>https://accumulo.apache.org/release/accumulo-3.0.0/</link> |
| <guid isPermaLink="true">https://accumulo.apache.org/release/accumulo-3.0.0/</guid> |
| |
| |
| <category>release</category> |
| |
| </item> |
| |
| <item> |
| <title>Apache Accumulo 2.1.2</title> |
| <description><h2 id="about">About</h2> |
| |
| <p>Apache Accumulo 2.1.2 is a patch release of the 2.1 LTM line. It contains bug |
| fixes and minor enhancements. This version supersedes 2.1.1. Users upgrading to |
| 2.1 should upgrade directly to this version instead of 2.1.1.</p> |
| |
| <p>Included here are some highlights of the most interesting bugs fixed and |
| features added in 2.1.2. For the full set of changes, please see the commit |
| history or issue tracker.</p> |
| |
| <h3 id="notable-improvements">Notable Improvements</h3> |
| |
| <p>Improvements that affect performance:</p> |
| |
| <ul> |
| <li><a href="https://github.com/apache/accumulo/issues/3499">#3499</a>, <a href="https://github.com/apache/accumulo/issues/3543">#3543</a>, <a href="https://github.com/apache/accumulo/issues/3549">#3549</a>, <a href="https://github.com/apache/accumulo/issues/3500">#3500</a>, <a href="https://github.com/apache/accumulo/issues/3509">#3509</a> |
| Made some optimizations around the processing of file references in the |
| accumulo-gc code, including optimizing a constructor in a class called |
| <code class="language-plaintext highlighter-rouge">TabletFile</code> used to track file references.</li> |
| <li><a href="https://github.com/apache/accumulo/issues/3541">#3541</a>, <a href="https://github.com/apache/accumulo/issues/3542">#3542</a> Added a new property, |
| <a href="/docs/2.x/configuration/server-properties#manager_tablet_watcher_interval">manager.tablet.watcher.interval</a>, to make the time to wait between |
| scanning the metadata table for outstanding tablet actions (such as assigning |
| tablets, etc.) to be configurable.</li> |
| </ul> |
| |
| <p>Improvements that help with administration:</p> |
| |
| <ul> |
| <li><a href="https://github.com/apache/accumulo/issues/3678">#3678</a>, <a href="https://github.com/apache/accumulo/issues/3683">#3683</a> Added extra validation of property |
| <a href="/docs/2.x/configuration/server-properties#table_class_loader_context">table.class.loader.context</a> at the time it is set, to prevent |
| invalid contexts from being set on a table.</li> |
| <li><a href="https://github.com/apache/accumulo/issues/3548">#3548</a>, <a href="https://github.com/apache/accumulo/issues/3561">#3561</a> Added a banner to the manager page in the |
| Monitor that displays the manager state and goal state when they are not |
| normal.</li> |
| <li><a href="https://github.com/apache/accumulo/issues/3383">#3383</a>, <a href="https://github.com/apache/accumulo/issues/3680">#3680</a> Prompt the user for confirmation when they |
| attempt to set a deprecated property in the Shell, to steer them toward |
| using the non-deprecated property.</li> |
| <li><a href="https://github.com/apache/accumulo/issues/3233">#3233</a>, <a href="https://github.com/apache/accumulo/issues/3562">#3562</a> Add an <code class="language-plaintext highlighter-rouge">--exclude-parent</code> option to allow |
| creating a table or namespace in the shell initialized with only the |
| properties set directly on another table or namespace, excluding those the |
| other table or namespace was inheriting from its parent.</li> |
| <li><a href="https://github.com/apache/accumulo/issues/3600">#3600</a> Normalized metric labels and structure.</li> |
| </ul> |
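A hedged sketch of how the `--exclude-parent` option described above might be used in the shell (the copy-config flag name here is an assumption for illustration; only `--exclude-parent` itself is the option added by this change):

```
# Copy only the properties set directly on 'oldtable', not those it
# inherits from its parent namespace (flag names illustrative):
createtable newtable --copy-config oldtable --exclude-parent
```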
| |
| <h3 id="notable-bug-fixes">Notable Bug Fixes</h3> |
| |
| <ul> |
| <li><a href="https://github.com/apache/accumulo/issues/3488">#3488</a>, <a href="https://github.com/apache/accumulo/issues/3612">#3612</a> Fixed sorting of some columns on the monitor</li> |
| <li><a href="https://github.com/apache/accumulo/issues/3674">#3674</a>, <a href="https://github.com/apache/accumulo/issues/3677">#3677</a>, <a href="https://github.com/apache/accumulo/issues/3685">#3685</a> Prevent an invalid table |
| context and other errors from killing the minor compaction thread and |
| preventing a tablet from being closed and shutting down normally.</li> |
| <li><a href="https://github.com/apache/accumulo/issues/3630">#3630</a>, <a href="https://github.com/apache/accumulo/issues/3631">#3631</a> Fix a bug where BatchWriter latency and |
| timeout values were converted to the wrong time unit.</li> |
| <li><a href="https://github.com/apache/accumulo/issues/3617">#3617</a>, <a href="https://github.com/apache/accumulo/issues/3622">#3622</a> Close LocalityGroupReader when IOException is |
| thrown to release reference to a possibly corrupted stream in a cached block |
| file.</li> |
| <li><a href="https://github.com/apache/accumulo/issues/3570">#3570</a>, <a href="https://github.com/apache/accumulo/issues/3571">#3571</a> Fixed the TabletGroupWatcher shutdown order.</li> |
| <li><a href="https://github.com/apache/accumulo/issues/3569">#3569</a>, <a href="https://github.com/apache/accumulo/issues/3579">#3579</a>, <a href="https://github.com/apache/accumulo/issues/3644">#3644</a> Changes to ensure that scan |
| sessions are cleaned up.</li> |
| <li><a href="https://github.com/apache/accumulo/issues/3553">#3553</a>, <a href="https://github.com/apache/accumulo/issues/3555">#3555</a> A bug where a failed user compaction would not |
| retry and would hang was fixed.</li> |
| </ul> |
| |
| <h3 id="other-notable-changes">Other Notable Changes</h3> |
| |
| <ul> |
| <li><a href="https://github.com/apache/accumulo/issues/3550">#3550</a> The contents of the contrib directory have been moved to more |
| appropriate locations for build-related resources</li> |
| </ul> |
| |
| <h2 id="upgrading">Upgrading</h2> |
| |
| <p>View the <a href="/docs/2.x/administration/upgrading">Upgrading Accumulo documentation</a> for guidance.</p> |
| |
| <h2 id="212-github-project">2.1.2 GitHub Project</h2> |
| |
| <p><a href="https://github.com/apache/accumulo/projects/29">All tickets related to 2.1.2.</a></p> |
| |
| </description> |
| <pubDate>Mon, 21 Aug 2023 00:00:00 +0000</pubDate> |
| <link>https://accumulo.apache.org/release/accumulo-2.1.2/</link> |
| <guid isPermaLink="true">https://accumulo.apache.org/release/accumulo-2.1.2/</guid> |
| |
| |
| <category>release</category> |
| |
| </item> |
| |
| <item> |
| <title>Apache Accumulo 2.1.1</title> |
| <description><h2 id="about">About</h2> |
| |
| <p>Apache Accumulo 2.1.1 is a patch release of the 2.1 LTM line. It contains |
| many bug fixes and minor enhancements, including a critical fix. This version |
| supersedes 2.1.0. Users upgrading to 2.1 should upgrade directly to this |
| version instead of 2.1.0.</p> |
| |
| <p>Included here are some highlights of the most interesting bugs and features |
| fixed in 2.1.1. Several trivial bugs were also fixed that related to the |
| presentation of information on the monitor, or to avoid spammy/excessive |
| logging, but are too numerous to list here. For the full set of bug fixes, |
| please see the commit history or issue tracker.</p> |
| |
| <p>NOTE: This 2.1 release also includes any applicable bug fixes and improvements |
| that occurred in 1.10.3 and earlier.</p> |
| |
| <h3 id="critical-fixes">Critical Fixes</h3> |
| |
| <ul> |
| <li><a href="https://www.cve.org/CVERecord?id=CVE-2023-34340">CVE-2023-34340</a> Fixed a critical issue that improperly allowed a user under |
| some conditions to authenticate to Accumulo using an invalid password.</li> |
| </ul> |
| |
| <h3 id="notable-improvements">Notable Improvements</h3> |
| |
| <p>Improvements that add capabilities:</p> |
| |
| <ul> |
| <li><a href="https://github.com/apache/accumulo/issues/3180">#3180</a> Enable users to provide per-volume Hadoop Filesystem |
| configuration overrides via the Accumulo configuration. Hadoop Filesystem |
| objects are configured by the standard Hadoop mechanisms (default |
| configuration, core-site.xml, hdfs-site.xml, etc.), but these configuration |
| files don’t allow for the same property to be specified with different values |
| for different namespaces. This change allows users to specify different |
| property values for different Accumulo volumes, which will be applied to the |
| Hadoop Filesystem object created for each Accumulo volume</li> |
| <li><a href="https://github.com/apache/accumulo/issues/1169">#1169</a>, <a href="https://github.com/apache/accumulo/issues/3142">#3142</a> Add configuration option for users to select |
| how the last location field is used, so users have better control over |
| initial assignments on restarts</li> |
| <li><a href="https://github.com/apache/accumulo/issues/3400">#3400</a> Inject the environment into the ContextClassLoaderFactory SPI |
| so implementations can read and make use of Accumulo’s own configuration</li> |
| </ul> |
| |
| <p>Improvements that affect performance:</p> |
| |
| <ul> |
| <li><a href="https://github.com/apache/accumulo/issues/3175">#3175</a> Reset number of locks in SynchronousLoadingBlockCache from |
| 2017 back to 5003, the value that it was in 1.10. <a href="https://github.com/apache/accumulo/issues/3226">#3226</a> Also, |
| modified the lock to be fair, which allows the different scan threads in the |
| server to make progress in a more fair manner when they need to load a block |
| into the cache</li> |
| <li><a href="https://github.com/apache/accumulo/issues/3077">#3077</a>, <a href="https://github.com/apache/accumulo/issues/3079">#3079</a>, <a href="https://github.com/apache/accumulo/issues/3083">#3083</a>, <a href="https://github.com/apache/accumulo/issues/3123">#3123</a> Avoid filling |
| OS page cache by calling <code class="language-plaintext highlighter-rouge">setDropBehind</code> on the FS data stream when |
| performing likely one-time file accesses, as with WAL and compaction input |
| and output files. This should allow files that might benefit more from |
| caching to stay in the cache longer. <a href="https://github.com/apache/accumulo/issues/3083">#3083</a> and <a href="https://github.com/apache/accumulo/issues/3123">#3123</a> |
| introduce new properties, table.compaction.major.output.drop.cache and |
| table.compaction.minor.output.drop.cache, for dropping pages from the OS page |
| cache for compaction output files. These changes will only have an impact on |
| HDFS FileSystem implementations and operating systems that support the |
| underlying OS system call. See associated issue, <a href="https://issues.apache.org/jira/browse/HDFS-16864">HDFS-16864</a>, that will |
| improve the underlying implementation when resolved.</li> |
| </ul> |
| |
| <p>Improvements that help with administration:</p> |
| |
| <ul> |
| <li><a href="https://github.com/apache/accumulo/issues/3445">#3445</a> Add emergency maintenance utility to edit properties in |
| ZooKeeper while the Accumulo cluster is shut down</li> |
| <li><a href="https://github.com/apache/accumulo/issues/3118">#3118</a> Added option to the <code class="language-plaintext highlighter-rouge">admin zoo-info-viewer</code> command to dump |
| the ACLs on ZooKeeper nodes. This information can be used to fix znodes with |
| incorrect ACLs during the upgrade process</li> |
| </ul> |
| |
| <p>Other notable changes:</p> |
| |
| <ul> |
| <li><a href="https://github.com/apache/accumulo/issues/3126">#3126</a> Remove unintentionally bundled htrace4 from our packaging; |
| users will need to provide that for themselves if they require it on their |
| classpath</li> |
| <li><a href="https://github.com/apache/accumulo/issues/3436">#3436</a> Deprecate the gc.trash.ignore property. The trash can be |
| customized within Hadoop if one wishes to ignore it, or configured to be |
| ignored for only specific files (and this has been tested with recent |
| versions of Hadoop). In version 3.0, this property will be removed, and it |
| will no longer be possible to ignore the trash by changing this property</li> |
| </ul> |
| |
| <h3 id="notable-bug-fixes">Notable Bug Fixes</h3> |
| |
| <ul> |
| <li><a href="https://github.com/apache/accumulo/issues/3134">#3134</a> Fixed Thrift issues due to incorrect setting of maxMessageSize</li> |
| <li><a href="https://github.com/apache/accumulo/issues/3144">#3144</a>, <a href="https://github.com/apache/accumulo/issues/3150">#3150</a>, <a href="https://github.com/apache/accumulo/issues/3164">#3164</a> Fixed bugs in ScanServer that |
| prevented a tablet from being scanned when some transient failures occurred</li> |
| <li><a href="https://github.com/apache/accumulo/issues/3346">#3346</a>, <a href="https://github.com/apache/accumulo/issues/3366">#3366</a> Fixed tablet metadata verification task so it |
| doesn’t unintentionally cause the server to halt</li> |
| <li><a href="https://github.com/apache/accumulo/issues/3479">#3479</a> Fixed issue preventing servers from shutting down because they |
| were still receiving assignments</li> |
| <li><a href="https://github.com/apache/accumulo/issues/3492">#3492</a> Fixed a bug where bulk imports could cause compactions to hang</li> |
| </ul> |
| |
| <h2 id="upgrading">Upgrading</h2> |
| |
| <p>View the <a href="/docs/2.x/administration/upgrading">Upgrading Accumulo documentation</a> for guidance.</p> |
| |
| <h2 id="211-github-project">2.1.1 GitHub Project</h2> |
| |
| <p><a href="https://github.com/apache/accumulo/projects/25">All tickets related to 2.1.1.</a></p> |
| |
| </description> |
| <pubDate>Mon, 19 Jun 2023 00:00:00 +0000</pubDate> |
| <link>https://accumulo.apache.org/release/accumulo-2.1.1/</link> |
| <guid isPermaLink="true">https://accumulo.apache.org/release/accumulo-2.1.1/</guid> |
| |
| |
| <category>release</category> |
| |
| </item> |
| |
| <item> |
| <title>Apache Accumulo 1.10.3</title> |
| <description><h2 id="about">About</h2> |
| |
| <p>Apache Accumulo 1.10.3 is a bug fix release of the 1.10 LTM release line.</p> |
| |
| <p>These release notes are highlights of the changes since 1.10.2. The full |
| detailed changes can be seen in the git history. If anything important is |
| missing from this list, please <a href="/contact-us">contact</a> us to have it included.</p> |
| |
| <p>Users of 1.10.2 or earlier are encouraged to upgrade to 1.10.3, as this is a |
| continuation of the 1.10 LTM release line with bug fixes and improvements, and |
| it supersedes any prior 1.x version. Users are also encouraged to consider |
| migrating to a 2.x version when one that is suitable for their needs becomes |
| available.</p> |
| |
| <h2 id="known-issues">Known Issues</h2> |
| |
| <p>Apache Commons VFS was upgraded in <a href="https://github.com/apache/accumulo/issues/1295">#1295</a> for 1.10.0 and some users have reported |
| issues similar to <a href="https://issues.apache.org/jira/projects/VFS/issues/VFS-683">VFS-683</a>. Possible solutions are discussed in <a href="https://github.com/apache/accumulo/issues/2775">#2775</a>. |
| This issue is applicable to all 1.10 versions.</p> |
| |
| <h2 id="major-improvements">Major Improvements</h2> |
| |
| <p>None</p> |
| |
| <h3 id="other-improvements">Other Improvements</h3> |
| |
| <ul> |
| <li><a href="https://github.com/apache/accumulo/issues/2708">#2708</a> Disabled merging minor-compactions by default</li> |
| <li><a href="https://github.com/apache/accumulo/issues/3226">#3226</a> Change scan thread resource management to use a “fair” |
| semaphore to avoid resource starvation in some situations</li> |
| <li><a href="https://github.com/apache/accumulo/issues/3221">#3221</a>, <a href="https://github.com/apache/accumulo/issues/3249">#3249</a>, <a href="https://github.com/apache/accumulo/issues/3261">#3261</a> Improve performance by |
| improving split point calculations</li> |
| <li><a href="https://github.com/apache/accumulo/issues/3276">#3276</a> Improve performance by optimizing internal data structures in |
| frequently used Authorizations object</li> |
| </ul> |
| |
| <h3 id="other-bug-fixes">Other Bug Fixes</h3> |
| |
| <ul> |
| <li><a href="https://github.com/apache/accumulo/issues/3069">#3069</a> Fix a minor bug with VFS on newer Java versions due to |
| MIME-type changes</li> |
| <li><a href="https://github.com/apache/accumulo/issues/3176">#3176</a> Fixed bug in client scanner code that was not using the |
| correct timeout variable in some places</li> |
| <li><a href="https://github.com/apache/accumulo/issues/3168">#3168</a> Fixed bug in TabletLocator that could cause the BatchScanner |
| to return duplicate data</li> |
| <li><a href="https://github.com/apache/accumulo/issues/3231">#3231</a>, <a href="https://github.com/apache/accumulo/issues/3235">#3235</a> Fix wait timeout logic when waiting for |
| minimum number of available tservers during startup</li> |
| </ul> |
| |
| <h2 id="note-about-jdk-15">Note About JDK 15</h2> |
| |
| <p>See the note in the 1.10.1 release notes about the use of JDK 15 or later, as |
| the information pertaining to the use of the CMS garbage collector remains |
| applicable to all 1.10 releases.</p> |
| |
| <h2 id="useful-links">Useful Links</h2> |
| |
| <ul> |
| <li><a href="https://lists.apache.org/thread/zl8xoogzqnbcw75vcmvqmwlrf8djfcb5">Release VOTE email thread</a></li> |
| <li><a href="https://github.com/apache/accumulo/compare/rel/1.10.2...apache:rel/1.10.3">All Changes since 1.10.2</a></li> |
| <li><a href="https://github.com/apache/accumulo/issues?q=%20project%3Aapache%2Faccumulo%2F23">GitHub</a> - List of issues tracked on GitHub corresponding to this release</li> |
| </ul> |
| |
| </description> |
| <pubDate>Thu, 13 Apr 2023 00:00:00 +0000</pubDate> |
| <link>https://accumulo.apache.org/release/accumulo-1.10.3/</link> |
| <guid isPermaLink="true">https://accumulo.apache.org/release/accumulo-1.10.3/</guid> |
| |
| |
| <category>release</category> |
| |
| </item> |
| |
| <item> |
| <title>Apache Accumulo 2.1.0</title> |
| <description><h2 id="about">About</h2> |
| |
| <p>Apache Accumulo 2.1.0 brings many new features and updates since 1.10 and 2.0. The 2.1 release |
| series is an LTM series and, as such, is expected to receive stability-improving bugfixes as |
| needed. This makes the series suitable for production environments where stability is preferred |
| over new features that might appear in subsequent non-LTM releases.</p> |
| |
| <p>This release has received more than 1200 commits from over 50 contributors, including numerous |
| bugfixes, updates, and features.</p> |
| |
| <h2 id="minimum-requirements">Minimum Requirements</h2> |
| |
| <p>This version of Accumulo requires at least Java 11 to run. Various Java 11 versions from different |
| distributors were used throughout its testing and development, so we expect it to work with any |
| standard OpenJDK-based Java distribution.</p> |
| |
| <p>At least Hadoop 3 is required, though it is recommended to use a more recent version. Version 3.3 |
| was used extensively during testing, but we have no specific knowledge that an earlier version of |
| Hadoop 3 will not work. Whichever major/minor version you use, it is recommended to use the latest |
| bugfix/patch version available. By default, our POM depends on 3.3.4.</p> |
| |
| <p>During much of this release’s development, ZooKeeper 3.5 was used as a minimum. However, that |
| version reached its end of life during development, and we do not recommend using end-of-life versions |
| of ZooKeeper. The latest bugfix version of 3.6, 3.7, or 3.8 should also work fine. By default, our |
| POM depends on 3.8.0.</p> |
| |
| <h2 id="binary-incompatibility">Binary Incompatibility</h2> |
| |
| <p>This release is known to be incompatible with prior versions of the client libraries. That is, the |
| 2.0.0 or 2.0.1 version of the client libraries will not be able to communicate with a 2.1.0 or later |
| installation of Accumulo, nor will the 2.1.0 or later version of the client libraries communicate |
| with a 2.0.1 or earlier installation.</p> |
| |
| <h2 id="major-new-features">Major New Features</h2> |
| |
| <h3 id="overhaul-of-table-compactions">Overhaul of Table Compactions</h3> |
| |
| <p>Significant changes were made to how Accumulo compacts files in this release. See |
| <a href="/docs/2.x/administration/compaction">compaction</a> for details; some highlights are below.</p> |
| |
| <ul> |
| <li>Multiple concurrent compactions per tablet on disjoint files is now supported. Previously only a |
| single compaction could run on a tablet. This allows tablets that are running long compactions |
| on large files to concurrently compact new smaller files that arrive.</li> |
| <li>Multiple compaction thread pools per tablet server are now supported. Previously only a single |
| thread pool existed within a tablet server for compactions. With a single thread pool, if all |
| threads are working on long compactions it can starve quick compactions. Now compactions with |
| little data can be processed by dedicated thread pools.</li> |
| <li>Accumulo’s default algorithm for selecting files to compact was modified to select the smallest |
| set of files that meet the compaction ratio criteria instead of the largest set. This change |
| makes tablets more aggressive about reducing their number of files while still doing logarithmic |
| compaction work. This change also enables efficiently compacting new small files that arrive |
| during a long running compaction.</li> |
| <li>Having dedicated compaction thread pools for tables is now supported through configuration. The |
| default configuration for Accumulo sets up dedicated thread pools for compacting the Accumulo |
| metadata table.</li> |
| <li>Merging minor compactions were dropped. These were added to Accumulo to address the problem of |
| new files arriving while a long-running compaction was in progress. Merging minor compactions could |
| cause O(N^2) compaction work. The new compaction changes in this release can satisfy this use |
| case while doing a logarithmic amount of work.</li> |
| </ul> |
| |
| <p>CompactionStrategy was deprecated in favor of new public APIs. CompactionStrategy was never public |
| API, as it used internal types, and one of those types, <code class="language-plaintext highlighter-rouge">FileRef</code>, was removed in 2.1. Users who have |
| written a CompactionStrategy can replace <code class="language-plaintext highlighter-rouge">FileRef</code> with its replacement internal type |
| <code class="language-plaintext highlighter-rouge">StoredTabletFile</code> but this is not recommended. Since it is very likely that CompactionStrategy will |
| be removed in a future release, any work put into rewriting a CompactionStrategy will be lost. It is |
| recommended that users implement CompactionSelector, CompactionConfigurer, and CompactionPlanner |
| instead. The new compaction changes in 2.1 introduce new algorithms for optimally scheduling |
| compactions across multiple thread pools; configuring a deprecated compaction strategy may result in |
| missing out on the benefits of these new algorithms.</p> |
| |
| <p>See the <a href="https://static.javadoc.io/org.apache.accumulo/accumulo-tserver/2.1.2/org/apache/accumulo/tserver/compaction/CompactionStrategy.html">javadoc</a> for more |
| information.</p> |
| |
| <p>GitHub tickets related to these changes: <a href="https://github.com/apache/accumulo/issues/564">#564</a> <a href="https://github.com/apache/accumulo/issues/1605">#1605</a> <a href="https://github.com/apache/accumulo/issues/1609">#1609</a> <a href="https://github.com/apache/accumulo/issues/1649">#1649</a></p> |
| |
| <h3 id="external-compactions-experimental">External Compactions (experimental)</h3> |
| |
| <p>This feature includes two new optional server components, CompactionCoordinator and Compactor, that |
| enable the user to run major compactions outside of the TabletServer. See <a href="/docs/2.x/getting-started/design">design</a>, <a href="/docs/2.x/administration/compaction">compaction</a>, and the External Compaction <a href="/blog/2021/07/08/external-compactions.html">blog |
| post</a> for more information. This work was completed over many tickets, see the GitHub |
| <a href="https://github.com/apache/accumulo/projects/20">project</a> for the related issues. <a href="https://github.com/apache/accumulo/issues/2096">#2096</a></p> |
| |
| <h3 id="scan-servers-experimental">Scan Servers (experimental)</h3> |
| |
| <p>This feature includes a new optional server component, Scan Server, that enables the user to run |
| scans outside of the TabletServer. See <a href="/docs/2.x/getting-started/design">design </a>, |
| <a href="https://github.com/apache/accumulo/issues/2411">#2411</a>, and <a href="https://github.com/apache/accumulo/issues/2665">#2665</a> for more information. Importantly, users can utilize this |
| feature to avoid bogging down the TabletServer with long-running scans, slow iterators, etc., |
| provided they are willing to tolerate eventual consistency.</p> |
| |
| <h3 id="new-per-table-on-disk-encryption-experimental">New Per-Table On-Disk Encryption (experimental)</h3> |
| |
| <p>On-disk encryption can now be configured on a per table basis as well as for the entire instance |
| (all tables). See <a href="/docs/2.x/security/on-disk-encryption">on-disk-encryption </a> for more information.</p> |
| |
| <h3 id="new-jshell-entry-point">New jshell entry point</h3> |
| |
| <p>Created a new “jshell” convenience entry point. Run <code class="language-plaintext highlighter-rouge">bin/accumulo jshell</code> to start jshell |
| with common Accumulo classes already imported and an instance of AccumuloClient already created for |
| you to connect to Accumulo (assuming you have a client properties file on the class path). <a href="https://github.com/apache/accumulo/issues/1870">#1870</a> <a href="https://github.com/apache/accumulo/issues/1910">#1910</a></p> |
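| <p>As a brief sketch, a session might look like the following (the name of the preconfigured client variable is assumed here; check the startup output of your version):</p> |
| |
| <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ bin/accumulo jshell |
| jshell&gt; client.tableOperations().list() |
| jshell&gt; client.close() |
| </code></pre></div></div> |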
| |
| <h2 id="major-improvements">Major Improvements</h2> |
| |
| <h3 id="fixed-gc-metadata-hotspots">Fixed GC Metadata hotspots</h3> |
| |
| <p>Prior to this release, Accumulo stored GC file candidates in the metadata table using rows of the |
| form <code class="language-plaintext highlighter-rouge">~del&lt;URI&gt;</code>. This row schema led to uneven load on the metadata table and to metadata tablets |
| that eventually went unused. In <a href="https://github.com/apache/accumulo/issues/1043">#1043</a> / <a href="https://github.com/apache/accumulo/issues/1344">#1344</a>, the row format was changed to |
| <code class="language-plaintext highlighter-rouge">~del&lt;hash(URI)&gt;&lt;URI&gt;</code> resulting in even load on the metadata table and even data spread in the |
| tablets. After upgrading, there may still be splits in the metadata table using the old row format. |
| These splits can be merged away as shown in the example below which starts off with splits generated |
| from the old and new row schema. The old splits with the prefix <code class="language-plaintext highlighter-rouge">~delhdfs</code> are merged away.</p> |
| |
| <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@uno&gt; getsplits -t accumulo.metadata |
| 2&lt; |
| ~ |
| ~del55 |
| ~dela7 |
| ~delhdfs://localhost:8020/accumulo/tables/2/default_tablet/F00000a0.rf |
| ~delhdfs://localhost:8020/accumulo/tables/2/default_tablet/F00000kb.rf |
| root@uno&gt; merge -t accumulo.metadata -b ~delhdfs -e ~delhdfs~ |
| root@uno&gt; getsplits -t accumulo.metadata |
| 2&lt; |
| ~ |
| ~del55 |
| ~dela7 |
| </code></pre></div></div> |
| |
| <h3 id="master-renamed-to-manager">Master Renamed to Manager</h3> |
| |
| <p>In order to use more inclusive language in our code, the Accumulo team has renamed all references to |
| the word “master” to “manager” (with the exception of deprecated classes and packages retained for |
| compatibility). This change includes the master server process, configuration properties with master |
| in the name, utilities with master in the name, and packages/classes in the code base. Where these |
| changes affect the public API, the deprecated “master” name will still be supported until Accumulo |
| 3.0.</p> |
| |
| <blockquote> |
| <p><strong>Important</strong> |
| One particular change to be aware of is that certain state for the manager process is stored in |
| ZooKeeper, previously under a directory named <code class="language-plaintext highlighter-rouge">masters</code>. This directory has been renamed to |
| <code class="language-plaintext highlighter-rouge">managers</code>, and the upgrade will happen automatically if you launch Accumulo using the provided |
| scripts. However, if you do not use the built in scripts (e.g., accumulo-cluster or |
| accumulo-service), then you will need to perform a one-time upgrade of the ZooKeeper state by |
| executing the <code class="language-plaintext highlighter-rouge">RenameMasterDirInZK</code> utility:</p> |
| <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code> ${ACCUMULO_HOME}/bin/accumulo org.apache.accumulo.manager.upgrade.RenameMasterDirInZK |
| </code></pre></div> </div> |
| </blockquote> |
| |
| <p>Some other specific examples of these changes include:</p> |
| |
| <ul> |
| <li>All configuration properties starting with <code class="language-plaintext highlighter-rouge">master.</code> have been renamed to start with <code class="language-plaintext highlighter-rouge">manager.</code> |
| instead. The <code class="language-plaintext highlighter-rouge">master.*</code> property names in the site configuration file (or passed on the |
| command-line) are converted internally to the new name, and a warning is printed. However, the old |
| name can still be used until at least the 3.0 release of Accumulo. Any <code class="language-plaintext highlighter-rouge">master.*</code> properties that |
| have been set in ZooKeeper will be automatically converted to the new <code class="language-plaintext highlighter-rouge">manager.*</code> name when |
| Accumulo is upgraded. The old property names can still be used by the <code class="language-plaintext highlighter-rouge">config</code> shell command or |
| via the methods accessible via <code class="language-plaintext highlighter-rouge">AccumuloClient</code>, but a warning will be generated when the old |
| names are used. You are encouraged to update all references to <code class="language-plaintext highlighter-rouge">master</code> in your site configuration |
| files to <code class="language-plaintext highlighter-rouge">manager</code> when installing Accumulo 2.1.</li> |
| <li>The tablet balancers in the <code class="language-plaintext highlighter-rouge">org.apache.accumulo.server.master.balancer</code> package have all been |
| relocated to <code class="language-plaintext highlighter-rouge">org.apache.accumulo.server.manager.balancer</code>. DefaultLoadBalancer has also been |
| renamed to SimpleLoadBalancer along with the move. The default balancer has been updated from |
| <code class="language-plaintext highlighter-rouge">org.apache.accumulo.server.master.balancer.TableLoadBalancer</code> to |
| <code class="language-plaintext highlighter-rouge">org.apache.accumulo.server.manager.balancer.TableLoadBalancer</code>, and the default per-table |
| balancer has been updated from <code class="language-plaintext highlighter-rouge">org.apache.accumulo.server.master.balancer.DefaultLoadBalancer</code> to |
| <code class="language-plaintext highlighter-rouge">org.apache.accumulo.server.manager.balancer.SimpleLoadBalancer</code>. If you have customized the |
| tablet balancer configuration, you are strongly encouraged to update your configuration to |
| reference the updated balancer names. If you have written a custom tablet balancer, it should be |
| updated to implement the new interface |
| <code class="language-plaintext highlighter-rouge">org.apache.accumulo.server.manager.balancer.TabletBalancer</code> rather than extending the deprecated |
| abstract <code class="language-plaintext highlighter-rouge">org.apache.accumulo.server.master.balancer.TabletBalancer</code>.</li> |
| <li>The configuration file <code class="language-plaintext highlighter-rouge">masters</code> for identifying the manager host(s) has been deprecated. If this |
| file is found, a warning will be printed. The replacement file <code class="language-plaintext highlighter-rouge">managers</code> should be used (i.e., |
| rename your masters file to managers) instead.</li> |
| <li>The <code class="language-plaintext highlighter-rouge">master</code> argument to the <code class="language-plaintext highlighter-rouge">accumulo-service</code> script has been deprecated, and the replacement |
| <code class="language-plaintext highlighter-rouge">manager</code> argument should be used instead.</li> |
| <li>The <code class="language-plaintext highlighter-rouge">-master</code> argument to the <code class="language-plaintext highlighter-rouge">org.apache.accumulo.server.util.ZooZap</code> utility has been deprecated |
| and the replacement <code class="language-plaintext highlighter-rouge">-manager</code> argument should be used instead.</li> |
| <li>The <code class="language-plaintext highlighter-rouge">GetMasterStats</code> utility has been renamed to <code class="language-plaintext highlighter-rouge">GetManagerStats</code>.</li> |
| <li><code class="language-plaintext highlighter-rouge">org.apache.accumulo.master.state.SetGoalState</code> is deprecated, and any custom scripts that invoke |
| this utility should be updated to call <code class="language-plaintext highlighter-rouge">org.apache.accumulo.manager.state.SetGoalState</code> instead.</li> |
| <li><code class="language-plaintext highlighter-rouge">masterMemory</code> in <code class="language-plaintext highlighter-rouge">minicluster.properties</code> has been deprecated and <code class="language-plaintext highlighter-rouge">managerMemory</code> should be used |
| instead in any <code class="language-plaintext highlighter-rouge">minicluster.properties</code> files you have configured.</li> |
| <li>See also <a href="https://github.com/apache/accumulo/issues/1640">#1640</a> <a href="https://github.com/apache/accumulo/issues/1642">#1642</a> <a href="https://github.com/apache/accumulo/issues/1703">#1703</a> <a href="https://github.com/apache/accumulo/issues/1704">#1704</a> <a href="https://github.com/apache/accumulo/issues/1873">#1873</a> <a href="https://github.com/apache/accumulo/issues/1907">#1907</a></li> |
| </ul> |
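| <p>For example, the one-time rename of the deprecated <code class="language-plaintext highlighter-rouge">masters</code> configuration file mentioned above could be performed with (paths assume a standard tarball layout):</p> |
| |
| <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ mv ${ACCUMULO_HOME}/conf/masters ${ACCUMULO_HOME}/conf/managers |
| </code></pre></div></div> |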
| |
| <h3 id="new-tracing-facility">New Tracing Facility</h3> |
| |
| <p>HTrace support was removed in this release and has been replaced with <a href="https://opentelemetry.io/">OpenTelemetry</a>. Trace information will not be shown in the monitor. See comments in <a href="https://github.com/apache/accumulo/issues/2259">#2259</a> for an example of how to configure Accumulo to emit traces to supported OpenTelemetry sinks. |
| <a href="https://github.com/apache/accumulo/issues/2257">#2257</a></p> |
| |
| <h3 id="new-metrics-implementation">New Metrics Implementation</h3> |
| |
| <p>The Hadoop Metrics2 framework is no longer being used to emit metrics from Accumulo. Accumulo is now |
| using the <a href="https://micrometer.io/">Micrometer</a> framework. Metric name and type changes have been |
| documented in org.apache.accumulo.core.metrics.MetricsProducer, see the <a href="https://static.javadoc.io/org.apache.accumulo/accumulo-core/2.1.2/org/apache/accumulo/core/metrics/MetricsProducer.html">javadoc</a> for more information. See comments in <a href="https://github.com/apache/accumulo/issues/2305">#2305</a> for an example of how to configure Accumulo to emit metrics to supported Micrometer sinks. |
| <a href="https://github.com/apache/accumulo/issues/1134">#1134</a></p> |
| |
| <h3 id="new-spi-package">New SPI Package</h3> |
| |
| <p>A new Service Plugin Interface (SPI) package was created in the accumulo-core jar, at |
| <a href="https://static.javadoc.io/org.apache.accumulo/accumulo-core/2.1.2/org/apache/accumulo/core/spi/package-summary.html">org.apache.accumulo.core.spi</a>, under which exists interfaces for the various pluggable |
| components. See <a href="https://github.com/apache/accumulo/issues/1900">#1900</a> <a href="https://github.com/apache/accumulo/issues/1905">#1905</a> <a href="https://github.com/apache/accumulo/issues/1880">#1880</a> <a href="https://github.com/apache/accumulo/issues/1891">#1891</a> <a href="https://github.com/apache/accumulo/issues/1426">#1426</a></p> |
| |
| <h2 id="minor-improvements">Minor Improvements</h2> |
| |
| <h3 id="new-listtablets-shell-command">New listtablets Shell Command</h3> |
| |
| <p>A new debugging command, listtablets, was created that shows detailed tablet information |
| on a single line. This command aggregates data about a tablet such as status, location, size, number |
| of entries, and HDFS directory name. It even shows the start and end rows of tablets, displaying them |
| in the same sorted order in which they are stored in the metadata. See example command output below. <a href="https://github.com/apache/accumulo/issues/1317">#1317</a> <a href="https://github.com/apache/accumulo/issues/1821">#1821</a></p> |
| |
| <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@uno&gt; listtablets -t test_ingest -h |
| 2021-01-04T15:12:47,663 [Shell.audit] INFO : root@uno&gt; listtablets -t test_ingest -h |
| NUM TABLET_DIR FILES WALS ENTRIES SIZE STATUS LOCATION ID START (Exclusive) END |
| TABLE: test_ingest |
| 1 t-0000007 1 0 60 552 HOSTED CURRENT:ip-10-113-12-25:9997 2 -INF row_0000000005 |
| 2 t-0000006 1 0 500 2.71K HOSTED CURRENT:ip-10-113-12-25:9997 2 row_0000000005 row_0000000055 |
| 3 t-0000008 1 0 5.00K 24.74K HOSTED CURRENT:ip-10-113-12-25:9997 2 row_0000000055 row_0000000555 |
| 4 default_tablet 1 0 4.44K 22.01K HOSTED CURRENT:ip-10-113-12-25:9997 2 row_0000000555 +INF |
| root@uno&gt; listtablets -t accumulo.metadata |
| 2021-01-04T15:13:21,750 [Shell.audit] INFO : root@uno&gt; listtablets -t accumulo.metadata |
| NUM TABLET_DIR FILES WALS ENTRIES SIZE STATUS LOCATION ID START (Exclusive) END |
| TABLE: accumulo.metadata |
| 1 table_info 2 0 7 524 HOSTED CURRENT:ip-10-113-12-25:9997 !0 -INF ~ |
| 2 default_tablet 0 0 0 0 HOSTED CURRENT:ip-10-113-12-25:9997 !0 ~ +INF |
| </code></pre></div></div> |
| |
| <h3 id="new-utility-for-generating-splits">New Utility for Generating Splits</h3> |
| |
| <p>A new command line utility was created to generate split points from one or more rfiles. One or more |
| HDFS directories can be given as well. The utility will iterate over all the files provided and |
| determine the proper split points based on either the size or number given. It uses Apache |
| DataSketches to get the split points from the data. <a href="https://github.com/apache/accumulo/issues/2361">#2361</a> <a href="https://github.com/apache/accumulo/issues/2368">#2368</a></p> |
| |
| <h3 id="new-option-for-cloning-offline">New Option for Cloning Offline</h3> |
| |
| <p>Added option to leave cloned tables offline <a href="https://github.com/apache/accumulo/issues/1474">#1474</a> <a href="https://github.com/apache/accumulo/issues/1475">#1475</a></p> |
| |
| <h3 id="new-max-tablets-option-in-bulk-import">New Max Tablets Option in Bulk Import</h3> |
| |
| <p>The property <code class="language-plaintext highlighter-rouge">table.bulk.max.tablets</code> was created for the new bulk import technique. This property acts |
| as a cluster performance failsafe to prevent a single ingested file from being distributed across |
| too much of a cluster. The value is enforced by the new bulk import technique and is the maximum |
| number of tablets allowed for one bulk import file. When this property is set, an error will be |
| thrown when the value is exceeded during a bulk import. <a href="https://github.com/apache/accumulo/issues/1614">#1614</a></p> |
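| <p>For example (the table name is hypothetical), a limit of 100 tablets per bulk import file could be set from the shell:</p> |
| |
| <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@uno&gt; config -t mytable -s table.bulk.max.tablets=100 |
| </code></pre></div></div> |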
| |
| <h3 id="new-health-check-thread-in-tabletserver">New Health Check Thread in TabletServer</h3> |
| |
| <p>A new thread was added to the tablet server to periodically verify tablet metadata. <a href="https://github.com/apache/accumulo/issues/2320">#2320</a> |
| This thread also prints to the debug log how long it takes the tserver to scan the metadata table. |
| The property <code class="language-plaintext highlighter-rouge">tserver.health.check.interval</code> was added to control the frequency at which this health |
| check takes place. <a href="https://github.com/apache/accumulo/issues/2583">#2583</a></p> |
| |
| <h3 id="new-ability-for-user-to-define-context-classloaders">New ability for user to define context classloaders</h3> |
| |
| <p>Deprecated the existing VFS ClassLoader for eventual removal and created a new mechanism for users |
| to load their own classloader implementations. The new VFS classloader and VFS context classloaders |
| are in a new <a href="https://github.com/apache/accumulo-classloaders/tree/main/modules/vfs-class-loader">repo</a> and can now be specified using Java’s own system |
| properties. Alternatively, one can set their own classloader (this was always possible). <a href="https://github.com/apache/accumulo/issues/1747">#1747</a> <a href="https://github.com/apache/accumulo/issues/1715">#1715</a></p> |
| |
| <p>Please reference the Known Issues section of the 2.0.1 release notes for an issue affecting the |
| VFSClassLoader.</p> |
| |
| <h3 id="change-in-uncaught-exceptionerror-handling-in-server-side-threads">Change in uncaught Exception/Error handling in server-side threads</h3> |
| |
| <p>Consolidated and normalized thread pool and thread creation. All threads created through this code |
| path will have an UncaughtExceptionHandler attached that logs the fact that the thread |
| encountered an uncaught Exception and is now dead. When an Error is encountered in a server process, |
| it will attempt to print a message to stderr then terminate the VM using Runtime.halt. On the client |
| side, the default UncaughtExceptionHandler will only log the Exception/Error in the client and does |
| not terminate the VM. Additionally, the user has the ability to set their own |
| UncaughtExceptionHandler implementation on the client. <a href="https://github.com/apache/accumulo/issues/1808">#1808</a> <a href="https://github.com/apache/accumulo/issues/1818">#1818</a> <a href="https://github.com/apache/accumulo/issues/2554">#2554</a></p> |
| |
| <h3 id="updated-hash-algorithm">Updated hash algorithm</h3> |
| |
| <p>With the default password Authenticator, Accumulo used to store password hashes using SHA-256, with |
| custom code to add a salt. In this release, we now use Apache commons-codec to store password |
| hashes in the <code class="language-plaintext highlighter-rouge">crypt(3)</code> standard format. With this change, we’ve also defaulted to using the |
| stronger SHA-512. Existing stored password hashes (if upgrading from an earlier version of Accumulo) |
| will automatically be upgraded when users authenticate or change their passwords, and Accumulo will |
| log a warning if it detects any passwords have not been upgraded. <a href="https://github.com/apache/accumulo/issues/1787">#1787</a> <a href="https://github.com/apache/accumulo/issues/1788">#1788</a> <a href="https://github.com/apache/accumulo/issues/1798">#1798</a> <a href="https://github.com/apache/accumulo/issues/1810">#1810</a></p> |
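<p>For reference, the <code class="language-plaintext highlighter-rouge">crypt(3)</code> format stores the algorithm identifier, salt, and hash together in a single string; the <code class="language-plaintext highlighter-rouge">$6$</code> prefix identifies the SHA-512 variant. Schematically (placeholders, not a real stored hash):</p>

```
$6$<salt>$<hash>
```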
| |
| <h3 id="various-performance-improvements-when-deleting-tables">Various Performance improvements when deleting tables</h3> |
| |
| <ul> |
| <li>Make delete table operations cancel user compactions <a href="https://github.com/apache/accumulo/issues/2030">#2030</a> <a href="https://github.com/apache/accumulo/issues/2169">#2169</a>.</li> |
| <li>Prevent compactions from starting when delete table is called <a href="https://github.com/apache/accumulo/issues/2182">#2182</a> <a href="https://github.com/apache/accumulo/issues/2240">#2240</a>.</li> |
| <li>Added check to not flush when table is being deleted <a href="https://github.com/apache/accumulo/issues/1887">#1887</a>.</li> |
| <li>Added log message before waiting for deletes to finish <a href="https://github.com/apache/accumulo/issues/1881">#1881</a>.</li> |
| <li>Added code to stop user flush if table is being deleted <a href="https://github.com/apache/accumulo/issues/1931">#1931</a></li> |
| </ul> |
| |
| <h3 id="new-monitor-pages-improvements--features">New Monitor Pages, Improvements &amp; Features</h3> |
| |
| <ul> |
| <li>A page was added to the Monitor that lists the active compactions and the longest running active |
compaction. As an optimization, this page only fetches data when a user loads the page, and at
most once a minute. This optimization was also added for the Active Scans page,
| along with the addition of a “Fetched” column indicating when the data was retrieved.</li> |
| <li>A new feature was added to the TabletServer page to help users identify which tservers are in |
| recovery mode. When a tserver is recovering, its corresponding row in the TabletServer Status |
| table will be highlighted.</li> |
| <li>A new page was also created for External Compactions that allows users to see the progress of |
| compactions and other details about ongoing compactions (see below).</li> |
| </ul> |
| |
| <p><a href="https://github.com/apache/accumulo/issues/2283">#2283</a> <a href="https://github.com/apache/accumulo/issues/2294">#2294</a> <a href="https://github.com/apache/accumulo/issues/2358">#2358</a> <a href="https://github.com/apache/accumulo/issues/2663">#2663</a></p> |
| |
| <p><img src="/images/release/ec-running2.png" alt="External Compactions" style="width:85%" /></p> |
| |
| <p><img src="/images/release/ec-running-details.png" alt="External Compactions Details" style="width:85%" /></p> |
| |
| <h3 id="new-tserver-scan-timeout-property">New tserver scan timeout property</h3> |
| |
<p>The new property <code class="language-plaintext highlighter-rouge">tserver.scan.results.max.timeout</code> sets the maximum time the thrift
client handler will wait for scan results before timing out. A bug was discovered where tservers
were running out of memory, partly because this timeout was so short. The timeout still defaults to
1 second, but it can now be increased. <a href="https://github.com/apache/accumulo/issues/2599">#2599</a> <a href="https://github.com/apache/accumulo/issues/2598">#2598</a></p>
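<p>For example, to raise the timeout above its 1 second default, add something like the following to accumulo.properties (the 5s value is illustrative, not a recommendation):</p>

```properties
# Allow the thrift client handler to wait longer for scan results
tserver.scan.results.max.timeout=5s
```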
| |
| <h3 id="always-choose-volumes-for-new-tablet-files">Always choose volumes for new tablet files</h3> |
| |
| <p>In <a href="https://github.com/apache/accumulo/issues/1389">#1389</a>, we changed the behavior of the VolumeChooser. It now runs any time a new file is |
| created. This means VolumeChooser decisions are no longer “sticky” for tablets. This allows tablets |
to balance their files across multiple HDFS volumes, instead of always using the volume selected
first. Now, only the directory name is “sticky” for a tablet; the volume is not. New files will
appear in identically named directories on whichever volumes the VolumeChooser selects.</p>
| |
| <h3 id="iterators-package-is-now-public-api">Iterators package is now public API</h3> |
| |
| <p><a href="https://github.com/apache/accumulo/issues/1390">#1390</a> <a href="https://github.com/apache/accumulo/issues/1400">#1400</a> <a href="https://github.com/apache/accumulo/issues/1411">#1411</a> We declared that the core.iterators package is public |
| API, so it will now follow the semver rules for public API.</p> |
| |
| <h3 id="better-accumulo-gc-memory-usage">Better accumulo-gc memory usage</h3> |
| |
<p><a href="https://github.com/apache/accumulo/issues/1543">#1543</a> <a href="https://github.com/apache/accumulo/issues/1650">#1650</a> Switched from batching file deletion candidates based on the amount of
available memory to a fixed-size batching strategy. This allows the accumulo-gc to run
consistently using a batch size that is configurable by the user. The user is responsible for
ensuring the process is given enough memory to accommodate the configured batch size, but this
makes the process much more consistent and predictable.</p>
| |
| <h3 id="log4j2">Log4j2</h3> |
| |
| <p><a href="https://github.com/apache/accumulo/issues/1528">#1528</a> <a href="https://github.com/apache/accumulo/issues/1514">#1514</a> <a href="https://github.com/apache/accumulo/issues/1515">#1515</a> <a href="https://github.com/apache/accumulo/issues/1516">#1516</a> While we still use slf4j, we have |
| upgraded the default logger binding to log4j2, which comes with a bunch of features, such as dynamic |
| reconfiguration, colorized console logging, and more.</p> |
| |
| <h3 id="added-foreach-method-to-scanner">Added forEach method to Scanner</h3> |
| |
| <p><a href="https://github.com/apache/accumulo/issues/1742">#1742</a> <a href="https://github.com/apache/accumulo/issues/1765">#1765</a> We added a forEach method to Scanner objects, so you can easily |
| iterate over the results using a lambda / BiConsumer that accepts a key-value pair.</p> |
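<p>Conceptually, the new method is a small convenience layered over the existing entry iteration. The sketch below (plain JDK code; <code class="language-plaintext highlighter-rouge">EntryIterable</code> is a hypothetical stand-in for Accumulo's Scanner, not the real API) shows the shape of the pattern:</p>

```java
import java.util.AbstractMap;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.function.BiConsumer;

public class ForEachSketch {
    // Hypothetical stand-in for Accumulo's Scanner, which iterates key/value
    // entries. The default method is the convenience being illustrated:
    // callers pass a two-argument lambda instead of writing the entry loop.
    interface EntryIterable<K, V> extends Iterable<Map.Entry<K, V>> {
        default void forEach(BiConsumer<K, V> consumer) {
            for (Map.Entry<K, V> e : this) {
                consumer.accept(e.getKey(), e.getValue());
            }
        }
    }

    static List<String> demo() {
        List<Map.Entry<String, String>> data = List.of(
            new AbstractMap.SimpleEntry<>("row1", "v1"),
            new AbstractMap.SimpleEntry<>("row2", "v2"));
        EntryIterable<String, String> scanner = data::iterator;

        List<String> seen = new ArrayList<>();
        // With a real Accumulo Scanner this would read:
        //   scanner.forEach((key, value) -> ...);
        scanner.forEach((k, v) -> seen.add(k + "=" + v));
        return seen;
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```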
| |
| <h3 id="new-public-api-to-set-multiple-properties-atomically">New public API to set multiple properties atomically</h3> |
| |
<p><a href="https://github.com/apache/accumulo/issues/2692">#2692</a> We added a new public API to support atomically setting multiple properties at once
using a read-modify-write pattern. This is available for table, namespace, and system
properties, and is called <code class="language-plaintext highlighter-rouge">modifyProperties()</code>. This builds off a related change that allows us to
more efficiently store properties in ZooKeeper, which also results in fewer ZooKeeper watches.</p>
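<p>The read-modify-write pattern behind <code class="language-plaintext highlighter-rouge">modifyProperties()</code> can be sketched as a compare-and-set retry loop. This is a minimal JDK-only illustration of the pattern, not Accumulo's actual implementation; in Accumulo the versioned snapshot lives in ZooKeeper rather than in an in-memory reference:</p>

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Consumer;

public class ModifyPropsSketch {
    // Holds an immutable snapshot of the properties, standing in for the
    // versioned node that ZooKeeper holds in the real implementation.
    static final AtomicReference<Map<String, String>> store =
        new AtomicReference<>(Map.of());

    // Read-modify-write: re-read the snapshot and retry the mutation until
    // the compare-and-set succeeds, so concurrent callers never lose updates
    // and all of a caller's changes land together.
    static void modifyProperties(Consumer<Map<String, String>> mutator) {
        while (true) {
            Map<String, String> before = store.get();
            Map<String, String> after = new HashMap<>(before);
            mutator.accept(after);
            if (store.compareAndSet(before, Map.copyOf(after))) {
                return;
            }
        }
    }

    public static void main(String[] args) {
        // Both properties land in one atomic update.
        modifyProperties(props -> {
            props.put("table.compaction.major.ratio", "2");
            props.put("table.file.max", "20");
        });
        System.out.println(store.get().size());
    }
}
```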
| |
| <h3 id="simplified-cluster-configuration">Simplified cluster configuration</h3> |
| |
| <p><a href="https://github.com/apache/accumulo/issues/2138">#2138</a> <a href="https://github.com/apache/accumulo/issues/2903">#2903</a> Modified the accumulo-cluster script to read the server locations from a single |
| file, cluster.yaml, in the conf directory instead of multiple files (tserver, manager, gc, etc.). Starting the new scan server and compactor server types is supported using this new file. It also contains options for starting multiple Tablet and Scan Servers per host.</p> |
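<p>A cluster.yaml sketch might look like the following (host names and group names are placeholders, and the exact keys may differ between versions; the template generated by the accumulo-cluster script is authoritative):</p>

```yaml
manager:
  - host1.example.com
monitor:
  - host1.example.com
gc:
  - host1.example.com
tserver:
  - host2.example.com
  - host3.example.com
compactor:
  q1:                    # compaction queue name
    - host2.example.com
sserver:
  - host3.example.com
```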
| |
| <h3 id="other-notable-changes">Other notable changes</h3> |
| |
| <ul> |
| <li><a href="https://github.com/apache/accumulo/issues/1174">#1174</a> <a href="https://github.com/apache/accumulo/issues/816">#816</a> Abstract metadata and change root metadata schema</li> |
| <li><a href="https://github.com/apache/accumulo/issues/1309">#1309</a> Explicitly prevent cloning metadata table to prevent poor user experience</li> |
| <li><a href="https://github.com/apache/accumulo/issues/1313">#1313</a> <a href="https://github.com/apache/accumulo/issues/936">#936</a> Store Root Tablet list of files in Zookeeper</li> |
| <li><a href="https://github.com/apache/accumulo/issues/1294">#1294</a> <a href="https://github.com/apache/accumulo/issues/1299">#1299</a> Add optional -t tablename to importdirectory shell command.</li> |
| <li><a href="https://github.com/apache/accumulo/issues/1332">#1332</a> Disable FileSystemMonitor checks of /proc by default (to be removed in future)</li> |
| <li><a href="https://github.com/apache/accumulo/issues/1345">#1345</a> <a href="https://github.com/apache/accumulo/issues/1352">#1352</a> Optionally disable gc-initiated compactions/flushes</li> |
| <li><a href="https://github.com/apache/accumulo/issues/1397">#1397</a> <a href="https://github.com/apache/accumulo/issues/1461">#1461</a> Replace relative paths in the metadata tables on upgrade.</li> |
| <li><a href="https://github.com/apache/accumulo/issues/1456">#1456</a> <a href="https://github.com/apache/accumulo/issues/1457">#1457</a> Prevent catastrophic tserver shutdown by rate limiting the shutdown</li> |
| <li><a href="https://github.com/apache/accumulo/issues/1053">#1053</a> <a href="https://github.com/apache/accumulo/issues/1060">#1060</a> <a href="https://github.com/apache/accumulo/issues/1576">#1576</a> Support multiple volumes in import table</li> |
| <li><a href="https://github.com/apache/accumulo/issues/1568">#1568</a> Support multiple tservers / node in accumulo-service</li> |
| <li><a href="https://github.com/apache/accumulo/issues/1644">#1644</a> <a href="https://github.com/apache/accumulo/issues/1645">#1645</a> Fix issue with minor compaction not retrying</li> |
| <li><a href="https://github.com/apache/accumulo/issues/1660">#1660</a> Dropped unused MemoryManager property</li> |
| <li><a href="https://github.com/apache/accumulo/issues/1764">#1764</a> <a href="https://github.com/apache/accumulo/issues/1783">#1783</a> Parallelize listcompactions in shell</li> |
| <li><a href="https://github.com/apache/accumulo/issues/1797">#1797</a> Add table option to shell delete command.</li> |
| <li><a href="https://github.com/apache/accumulo/issues/2039">#2039</a> <a href="https://github.com/apache/accumulo/issues/2045">#2045</a> Add bulk import option to ignore empty dirs</li> |
| <li><a href="https://github.com/apache/accumulo/issues/2117">#2117</a> <a href="https://github.com/apache/accumulo/issues/2236">#2236</a> Make sorted recovery write to RFiles. New <code class="language-plaintext highlighter-rouge">tserver.wal.sort.file.</code> |
| property to configure</li> |
| <li><a href="https://github.com/apache/accumulo/issues/2076">#2076</a> Sorted recovery files can now be encrypted</li> |
| <li><a href="https://github.com/apache/accumulo/issues/2441">#2441</a> Upgraded to Junit 5</li> |
| <li><a href="https://github.com/apache/accumulo/issues/2462">#2462</a> Added SUBMITTED FaTE status to differentiate between things submitted vs. running</li> |
| <li><a href="https://github.com/apache/accumulo/issues/2467">#2467</a> Added fate shell command option to cancel FaTE operations that are NEW or SUBMITTED</li> |
| <li><a href="https://github.com/apache/accumulo/issues/2807">#2807</a> Added several troubleshooting utilities to the <code class="language-plaintext highlighter-rouge">accumulo admin</code> command.</li> |
| <li><a href="https://github.com/apache/accumulo/issues/2820">#2820</a> <a href="https://github.com/apache/accumulo/issues/2900">#2900</a> <code class="language-plaintext highlighter-rouge">du</code> command performance improved by using the metadata table for |
| computation instead of HDFS</li> |
| <li><a href="https://github.com/apache/accumulo/issues/2966">#2966</a> Upgrade Thrift to 0.17.0</li> |
| </ul> |
| |
| <h2 id="upgrading">Upgrading</h2> |
| |
| <p>View the <a href="/docs/2.x/administration/upgrading">Upgrading Accumulo documentation</a> for guidance.</p> |
| |
| <h2 id="210-github-project">2.1.0 GitHub Project</h2> |
| |
| <p><a href="https://github.com/apache/accumulo/projects/3">All tickets related to 2.1.0.</a></p> |
| |
| <h2 id="known-issues">Known Issues</h2> |
| |
| <p>At the time of release, the following issues were known:</p> |
| |
| <ul> |
| <li><a href="https://github.com/apache/accumulo/issues/3045">#3045</a> - External compactions may appear stuck until the coordinator is restarted</li> |
| <li><a href="https://github.com/apache/accumulo/issues/3048">#3048</a> - The monitor may not show times in the correct format for the user’s locale</li> |
| <li><a href="https://github.com/apache/accumulo/issues/3053">#3053</a> - ThreadPool creation is a bit spammy by default in the debug logs</li> |
| <li><a href="https://github.com/apache/accumulo/issues/3057">#3057</a> - The monitor may have an annoying popup on the external compactions page if the |
| coordinator is offline</li> |
| </ul> |
| |
| </description> |
| <pubDate>Tue, 01 Nov 2022 00:00:00 +0000</pubDate> |
| <link>https://accumulo.apache.org/release/accumulo-2.1.0/</link> |
| <guid isPermaLink="true">https://accumulo.apache.org/release/accumulo-2.1.0/</guid> |
| |
| |
| <category>release</category> |
| |
| </item> |
| |
| <item> |
| <title>2.1.0 Metrics and Tracing Changes</title> |
<description><p>Metrics and Tracing changed in 2.1.0. This post explains the new implementations and provides examples of how to configure them.</p>
| |
| <h1 id="metrics">Metrics</h1> |
| |
| <p>Accumulo was <a href="https://issues.apache.org/jira/browse/ACCUMULO-1817">modified</a> in version 1.7.0 (2015) to use the Hadoop Metrics2 framework for capturing and emitting internal Accumulo metrics. <a href="https://micrometer.io/">Micrometer</a>, a newer metrics framework, supports sending metrics to many popular <a href="https://micrometer.io/docs/concepts#_supported_monitoring_systems">monitoring systems</a>. In Accumulo 2.1.0 support for the Hadoop Metrics2 framework has been removed in favor of using Micrometer. Metrics are disabled by default.</p> |
| |
<p>Micrometer has the concept of a <a href="https://micrometer.io/docs/concepts#_registry">MeterRegistry</a>, which is used to create and emit metrics to the supported monitoring systems. Additionally, Micrometer supports sending metrics to multiple monitoring systems concurrently. Configuring Micrometer in Accumulo will require you to write a small piece of code to provide the MeterRegistry configuration. Specifically, you will need to create a class that implements <a href="https://github.com/apache/accumulo/blob/main/core/src/main/java/org/apache/accumulo/core/metrics/MeterRegistryFactory.java">MeterRegistryFactory</a>. Your implementation will need to create and configure the appropriate MeterRegistry. Additionally, you will need to add the MeterRegistry jar file and the jar file containing your MeterRegistryFactory implementation to Accumulo’s classpath. The page for each monitoring system that Micrometer supports contains instructions on how to configure the registry and which jar file is required.</p>
| |
| <p>Accumulo’s metrics integration test uses a <a href="https://github.com/apache/accumulo/blob/main/test/src/main/java/org/apache/accumulo/test/metrics/TestStatsDRegistryFactory.java">TestStatsDRegistryFactory</a> to create and configure a <a href="https://micrometer.io/docs/registry/statsD">StatsD Meter Registry</a>. The instructions below provide an example of how to use this class to emit Accumulo’s metrics to a Telegraf - InfluxDB - Grafana monitoring stack.</p> |
| |
| <h2 id="metrics-example">Metrics Example</h2> |
| |
<p>This example uses a Docker container that contains a Telegraf-InfluxDB-Grafana stack. We will configure Accumulo to send metrics to the <a href="https://www.influxdata.com/time-series-platform/telegraf/">Telegraf</a> component running in the Docker image. Telegraf will persist the metrics in <a href="https://www.influxdata.com/products/influxdb-overview/">InfluxDB</a> and then we will visualize the metrics using <a href="https://grafana.com/">Grafana</a>. This example assumes that you have installed Docker (or equivalent engine) and have an Accumulo database already installed and initialized. We will be installing some things, modifying the Accumulo configuration, and starting Accumulo.</p>
| |
| <ol> |
| <li>Download the Telegraf-Influx-Grafana (TIG) Docker image |
| <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker pull artlov/docker-telegraf-influxdb-grafana:latest |
| </code></pre></div> </div> |
| </li> |
| <li>Create directories for the Docker container |
| <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>mkdir -p /tmp/metrics/influxdb |
| chmod 777 /tmp/metrics/influxdb |
| mkdir /tmp/metrics/grafana |
| mkdir /tmp/metrics/grafana-dashboards |
| mkdir -p /tmp/metrics/telegraf/conf |
| </code></pre></div> </div> |
| </li> |
| <li>Download Telegraf configuration and Grafana dashboard |
| <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>cd /tmp/metrics/telegraf/conf |
| wget https://raw.githubusercontent.com/apache/accumulo-testing/main/contrib/terraform-testing-infrastructure/modules/config-files/templates/telegraf.conf.tftpl |
| cat telegraf.conf.tftpl | sed "s/\${manager_ip}/localhost/" &gt; telegraf.conf |
| cd /tmp/metrics/grafana-dashboards |
| wget https://raw.githubusercontent.com/apache/accumulo-testing/main/contrib/terraform-testing-infrastructure/modules/config-files/files/grafana_dashboards/accumulo-dashboard.json |
| wget https://raw.githubusercontent.com/apache/accumulo-testing/main/contrib/terraform-testing-infrastructure/modules/config-files/files/grafana_dashboards/accumulo-dashboard.yaml |
| </code></pre></div> </div> |
| </li> |
| <li>Start the TIG Docker container |
| <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker run --ulimit nofile=66000:66000 -d --rm \ |
| --name tig-stack \ |
| -p 3003:3003 \ |
| -p 3004:8888 \ |
| -p 8086:8086 \ |
| -p 22022:22 \ |
| -p 8125:8125/udp \ |
| -v /tmp/metrics/influxdb:/var/lib/influxdb \ |
| -v /tmp/metrics/grafana:/var/lib/grafana \ |
| -v /tmp/metrics/telegraf/conf:/etc/telegraf \ |
| -v /tmp/metrics/grafana-dashboards:/etc/grafana/provisioning/dashboards \ |
| artlov/docker-telegraf-influxdb-grafana:latest |
| </code></pre></div> </div> |
| </li> |
| <li>Download Micrometer StatsD Meter Registry jar |
| <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>wget -O micrometer-registry-statsd-1.9.1.jar https://search.maven.org/remotecontent?filepath=io/micrometer/micrometer-registry-statsd/1.9.1/micrometer-registry-statsd-1.9.1.jar |
| </code></pre></div> </div> |
| </li> |
<li>At a minimum you need to enable the metrics using the property <code class="language-plaintext highlighter-rouge">general.micrometer.enabled</code> and supply the name of the MeterRegistryFactory class using the property <code class="language-plaintext highlighter-rouge">general.micrometer.factory</code>. To enable <a href="https://micrometer.io/docs/ref/jvm">JVM</a> metrics, use the property <code class="language-plaintext highlighter-rouge">general.micrometer.jvm.metrics.enabled</code>. Modify the accumulo.properties configuration file by adding the properties below.
| <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Micrometer settings |
| general.micrometer.enabled=true |
| general.micrometer.jvm.metrics.enabled=true |
| general.micrometer.factory=org.apache.accumulo.test.metrics.TestStatsDRegistryFactory |
| </code></pre></div> </div> |
| </li> |
| <li> |
| <p>Copy the micrometer-registry-statsd-1.9.1.jar and accumulo-test.jar into the Accumulo lib directory</p> |
| </li> |
| <li>The TestStatsDRegistryFactory uses system properties to determine the host and port of the StatsD server. In this example the Telegraf component started in step 4 above contains a StatsD server listening on localhost:8125. Configure the TestStatsDRegistryFactory by adding the following system properties to the JAVA_OPTS variable in accumulo-env.sh. |
| <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>"-Dtest.meter.registry.host=127.0.0.1" |
| "-Dtest.meter.registry.port=8125" |
| </code></pre></div> </div> |
| </li> |
| <li>Start Accumulo. You should see the following statement in the server log files |
| <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>[metrics.MetricsUtil] INFO : initializing metrics, enabled:true, class:org.apache.accumulo.test.metrics.TestStatsDRegistryFactory |
| </code></pre></div> </div> |
| </li> |
| <li>Log into Grafana (http://localhost:3003/) using the default credentials (root/root). Click the <code class="language-plaintext highlighter-rouge">Home</code> icon at the top, then click the <code class="language-plaintext highlighter-rouge">Accumulo Micrometer Test Dashboard</code>. If everything is working correctly, then you should see something like the image below.</li> |
| </ol> |
| |
| <p><img src="/images/blog/202206_metrics_and_tracing/Grafana_Screenshot.png" alt="Grafana Screenshot" /></p> |
| |
| <h1 id="tracing">Tracing</h1> |
| |
<p>With the retirement of HTrace, Accumulo has chosen to replace its tracing functionality with <a href="https://opentelemetry.io/">OpenTelemetry</a> in version 2.1.0. Hadoop appears to be on the same <a href="https://issues.apache.org/jira/browse/HADOOP-15566">path</a> which, when finished, should provide better insight into Accumulo’s use of HDFS. OpenTelemetry supports exporting Trace information to several different systems, including <a href="https://www.jaegertracing.io/">Jaeger</a>, <a href="https://zipkin.io/">Zipkin</a>, and others. The HTrace trace spans in the Accumulo source code have been updated to use OpenTelemetry trace spans. If tracing is enabled, then Accumulo will use the OpenTelemetry implementation registered with the <a href="https://github.com/open-telemetry/opentelemetry-java/blob/main/api/all/src/main/java/io/opentelemetry/api/GlobalOpenTelemetry.java">GlobalOpenTelemetry</a> object. Tracing is disabled by default and a no-op OpenTelemetry implementation is used.</p>
| |
| <h2 id="tracing-example">Tracing Example</h2> |
| |
<p>This example uses the OpenTelemetry Java Agent jar file to configure and export trace information to Jaeger. The OpenTelemetry Java Agent jar file bundles together the supported Java exporters, provides a way to <a href="https://github.com/open-telemetry/opentelemetry-java/tree/main/sdk-extensions/autoconfigure">configure</a> them, and registers them with the GlobalOpenTelemetry singleton that is used by Accumulo. An alternate method of supplying the OpenTelemetry dependencies, without using the Java Agent jar file, is to create a shaded jar with the OpenTelemetry <a href="https://github.com/open-telemetry/opentelemetry-java/tree/main/sdk-extensions/autoconfigure">autoconfigure</a> module and its runtime dependencies and place the resulting shaded jar on the classpath. An example Maven pom.xml file to create the shaded jar is <a href="https://github.com/apache/accumulo/pull/2259#issuecomment-965571339">here</a>. When using this alternate method you can skip step 2 and the uncommenting of the java agent in step 5 below.</p>
| |
| <ol> |
| <li>Download Jaeger all-in-one Docker image |
| <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code> docker pull jaegertracing/all-in-one:1.35 |
| </code></pre></div> </div> |
| </li> |
| <li>Download OpenTelemetry Java Agent (https://github.com/open-telemetry/opentelemetry-java/tree/main/sdk-extensions/autoconfigure) |
| <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code> wget -O opentelemetry-javaagent-1.15.0.jar https://search.maven.org/remotecontent?filepath=io/opentelemetry/javaagent/opentelemetry-javaagent/1.15.0/opentelemetry-javaagent-1.15.0.jar |
| </code></pre></div> </div> |
| </li> |
| <li>To enable tracing, you need to set the <code class="language-plaintext highlighter-rouge">general.opentelemetry.enabled</code> property. Modify the accumulo.properties configuration file and add the following property. |
| <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># OpenTelemetry settings |
| general.opentelemetry.enabled=true |
| </code></pre></div> </div> |
| </li> |
| <li>To enable tracing in the shell, set the <code class="language-plaintext highlighter-rouge">general.opentelemetry.enabled</code> property in the accumulo-client.properties configuration file. |
| <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># OpenTelemetry settings |
| general.opentelemetry.enabled=true |
| </code></pre></div> </div> |
| </li> |
| <li>Configure the OpenTelemetry JavaAgent in accumulo-env.sh by uncommenting the following and updating the path to the java agent jar: |
| <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code> ## Optionally setup OpenTelemetry SDK AutoConfigure |
| ## See https://github.com/open-telemetry/opentelemetry-java/tree/main/sdk-extensions/autoconfigure |
| #JAVA_OPTS=('-Dotel.traces.exporter=jaeger' '-Dotel.metrics.exporter=none' '-Dotel.logs.exporter=none' "${JAVA_OPTS[@]}") |
| ## Optionally setup OpenTelemetry Java Agent |
| ## See https://github.com/open-telemetry/opentelemetry-java-instrumentation for more options |
| #JAVA_OPTS=('-javaagent:path/to/opentelemetry-javaagent.jar' "${JAVA_OPTS[@]}") |
| </code></pre></div> </div> |
| </li> |
| <li>Start Jaeger Docker container |
| <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker run -d --rm --name jaeger \ |
| -e COLLECTOR_ZIPKIN_HOST_PORT=:9411 \ |
| -p 5775:5775/udp \ |
| -p 6831:6831/udp \ |
| -p 6832:6832/udp \ |
| -p 5778:5778 \ |
| -p 16686:16686 \ |
| -p 14268:14268 \ |
| -p 14250:14250 \ |
| -p 9411:9411 jaegertracing/all-in-one:1.35 |
| </code></pre></div> </div> |
| </li> |
| <li>Start Accumulo. You should see the following statement in the server log files |
| <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>[trace.TraceUtil] INFO : Trace enabled in Accumulo: yes, OpenTelemetry instance: class io.opentelemetry.javaagent.instrumentation.opentelemetryapi.v1_10.ApplicationOpenTelemetry110, Tracer instance: class io.opentelemetry.javaagent.instrumentation.opentelemetryapi.trace.ApplicationTracer |
| </code></pre></div> </div> |
| </li> |
| <li>View traces in Jaeger UI at http://localhost:16686. You can select the service name on the left panel and click <code class="language-plaintext highlighter-rouge">Find Traces</code> to view the trace information. If everything is working correctly, then you should see something like the image below.</li> |
| </ol> |
| |
| <p><img src="/images/blog/202206_metrics_and_tracing/Jaeger_Screenshot.png" alt="Jaeger Screenshot" /></p> |
| </description> |
| <pubDate>Wed, 22 Jun 2022 00:00:00 +0000</pubDate> |
| <link>https://accumulo.apache.org/blog/2022/06/22/2.1.0-metrics-and-tracing.html</link> |
| <guid isPermaLink="true">https://accumulo.apache.org/blog/2022/06/22/2.1.0-metrics-and-tracing.html</guid> |
| |
| |
| <category>blog</category> |
| |
| </item> |
| |
| <item> |
| <title>Apache Accumulo 1.10.2</title> |
| <description><h2 id="about">About</h2> |
| |
| <p>Apache Accumulo 1.10.2 is a bug fix release of the 1.10 LTM release line.</p> |
| |
| <p>These release notes are highlights of the changes since 1.10.1. The full |
| detailed changes can be seen in the git history. If anything important is |
| missing from this list, please <a href="/contact-us">contact</a> us to have it included.</p> |
| |
| <p>Users of 1.10.1 or earlier are encouraged to upgrade to 1.10.2, as this is a |
| continuation of the 1.10 LTM release line with bug fixes and improvements, and |
| it supersedes any prior 1.x version. Users are also encouraged to consider |
| migrating to a 2.x version when one that is suitable for their needs becomes |
| available.</p> |
| |
| <h2 id="known-issues">Known Issues</h2> |
| |
| <p>Apache Commons VFS was upgraded in <a href="https://github.com/apache/accumulo/issues/1295">#1295</a> and some users have reported |
| issues similar to <a href="https://issues.apache.org/jira/projects/VFS/issues/VFS-683">VFS-683</a>. Possible solutions are discussed in <a href="https://github.com/apache/accumulo/issues/2775">#2775</a>.</p> |
| |
| <h2 id="major-improvements">Major Improvements</h2> |
| |
| <p>This release bundles <a href="https://reload4j.qos.ch/">reload4j</a> (<a href="https://github.com/apache/accumulo/issues/2458">#2458</a>) in |
| the convenience binary and uses that instead of log4j 1.2. This is to make it |
| easier for users to avoid the many CVEs that apply to log4j 1.2, which is no |
| longer being maintained. Accumulo 2.x versions will have already switched to |
| use the latest log4j 2. However, doing so required making some breaking API |
| changes and other substantial changes, so that can’t be done for Accumulo 1.10. |
Using reload4j instead was deemed a viable interim solution until users can migrate to
Accumulo 2.x.</p>
| |
| <h3 id="other-improvements">Other Improvements</h3> |
| |
| <ul> |
| <li><a href="https://github.com/apache/accumulo/issues/1808">#1808</a> Re-throw exceptions in threads instead of merely logging them</li> |
<li><a href="https://github.com/apache/accumulo/issues/1863">#1863</a> Avoid unnecessary redundant log sorting</li>
| <li><a href="https://github.com/apache/accumulo/issues/1917">#1917</a> Ensure RFileWriterBuilder API validates filenames</li> |
| <li><a href="https://github.com/apache/accumulo/issues/2006">#2006</a> Detect system config changes in HostRegexTableLoadBalancer without restarting master</li> |
| <li><a href="https://github.com/apache/accumulo/issues/2464">#2464</a> Apply timeout to socket.connect()</li> |
| </ul> |
| |
| <h3 id="other-bug-fixes">Other Bug Fixes</h3> |
| |
| <ul> |
| <li><a href="https://github.com/apache/accumulo/issues/1775">#1775</a> Ensure monitor reports a dead tserver when it is killed</li> |
| <li><a href="https://github.com/apache/accumulo/issues/1858">#1858</a> Fix a bug in the monitor graphs due to use of int instead of long</li> |
| <li><a href="https://github.com/apache/accumulo/issues/2370">#2370</a> Fix bug in getsplits command in the shell</li> |
| </ul> |
| |
| <h2 id="note-about-jdk-15">Note About JDK 15</h2> |
| |
| <p>See the note in the 1.10.1 release notes about the use of JDK 15 or later, as |
| the information pertaining to the use of the CMS garbage collector remains |
| applicable to this version.</p> |
| |
| <h2 id="useful-links">Useful Links</h2> |
| |
| <ul> |
| <li><a href="https://lists.apache.org/thread/bq424vnov27nwnkb471oxg5nd7m6xwn9">Release VOTE email thread</a></li> |
| <li><a href="https://github.com/apache/accumulo/compare/rel/1.10.1...apache:rel/1.10.2">All Changes since 1.10.1</a></li> |
| <li><a href="https://github.com/apache/accumulo/issues?q=project%3Aapache%2Faccumulo%2F18">GitHub</a> - List of issues tracked on GitHub corresponding to this release</li> |
| </ul> |
| |
| </description> |
| <pubDate>Sun, 13 Feb 2022 00:00:00 +0000</pubDate> |
| <link>https://accumulo.apache.org/release/accumulo-1.10.2/</link> |
| <guid isPermaLink="true">https://accumulo.apache.org/release/accumulo-1.10.2/</guid> |
| |
| |
| <category>release</category> |
| |
| </item> |
| |
| <item> |
| <title>External Compactions</title> |
| <description><p>External compactions are a new feature in Accumulo 2.1.0 which allows |
| compaction work to run outside of Tablet Servers.</p> |
| |
| <h2 id="overview">Overview</h2> |
| |
| <p>There are two types of <a href="https://storage.googleapis.com/pub-tools-public-publication-data/pdf/68a74a85e1662fe02ff3967497f31fda7f32225c.pdf">compactions</a> in Accumulo - Minor and Major. Minor |
| compactions flush recently written data from memory to a new file. Major |
compactions merge two or more Tablet files into one new file. Starting
in 2.1, Tablet Servers can run multiple major compactions for a Tablet
concurrently; there is no longer a single thread pool per Tablet Server that
runs compactions. Major compactions can be resource intensive and may run for a
long time depending on several factors, including the number and size of the
input files and the iterators configured to run during major compaction.
| Additionally, the Tablet Server does not currently have a mechanism in place to |
| stop a major compaction that is taking too long or using too many resources. |
| There is a mechanism to throttle the read and write speed of major compactions |
| as a way to reduce the resource contention on a Tablet Server where many |
| concurrent compactions are running. However, throttling compactions on a busy |
| system will just lead to an increasing amount of queued compactions. Finally, |
| major compaction work can be wasted in the event of an untimely death of the |
| Tablet Server or if a Tablet is migrated to another Tablet Server.</p> |
| |
<p>An external compaction is a major compaction that occurs outside of a Tablet
Server. The external compaction feature is an extension of the major compaction
service in the Tablet Server and is configured as part of the system’s
compaction service configuration. Thus, it is an optional feature. The goal of
the external compaction feature is to overcome some of the drawbacks of
major compactions that happen inside the Tablet Server. Specifically, external
compactions:</p>
| |
| <ul> |
| <li>Allow major compactions to continue when the originating TabletServer dies</li> |
| <li>Allow major compactions to occur while a Tablet migrates to a new Tablet Server</li> |
| <li>Reduce the load on the TabletServer, giving it more cycles to insert mutations and respond to scans (assuming it’s running on different hosts). MapReduce jobs and compactions can lower the effectiveness of processor and page caches for scans, so moving compactions off the host can be beneficial.</li> |
| <li>Allow major compactions to be scaled differently than the number of TabletServers, giving users more flexibility in allocating resources.</li> |
| <li>Even out hotspots where a few Tablet Servers have a lot of compaction work. External compactions allow this work to spread much wider than previously possible.</li> |
| </ul> |
| |
| <p>The external compaction feature in Apache Accumulo version 2.1.0 adds two new |
| system-level processes and new configuration properties. The new system-level |
| processes are the Compactor and the Compaction Coordinator.</p> |
| |
| <ul> |
<li>The Compactor is a process that is responsible for executing a major compaction. There can be many Compactors running on a system. The Compactor communicates with the Compaction Coordinator to get information about the next major compaction it will run and to report the completion state.</li>
| <li>The Compaction Coordinator is a single process like the Manager. It is responsible for communicating with the Tablet Servers to gather information about queued external compactions, to reserve a major compaction on the Compactor’s behalf, and to report the completion status of the reserved major compaction. For external compactions that complete when the Tablet is offline, the Compaction Coordinator buffers this information and reports it later.</li> |
| </ul> |
| |
| <h2 id="details">Details</h2> |
| |
| <p>Before we explain the implementation for external compactions, it’s probably |
| useful to explain the changes for major compactions that were made in the 2.1.0 |
| branch before external compactions were added. This is most apparent in the |
| <code class="language-plaintext highlighter-rouge">tserver.compaction.major.service</code> and <code class="language-plaintext highlighter-rouge">table.compaction.dispatcher</code> configuration |
| properties. The simplest way to explain this is that you can now define a |
| service for executing compactions and then assign that service to a table |
| (which implies you can have multiple services assigned to different tables). |
| This gives the flexibility to prevent one table’s compactions from impacting |
| another table. Each service has named thread pools with size thresholds.</p> |
| |
| <h3 id="configuration">Configuration</h3> |
| |
| <p>The configuration below defines a compaction service named cs1 using |
| the DefaultCompactionPlanner that is configured to have three named thread |
| pools (small, medium, and large). Each thread pool is configured with a number |
of threads to run compactions and a size threshold. For example, if the sum of
the input file sizes is less than 16MB, the major compaction will be assigned
to the small pool.</p>
| |
| <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>tserver.compaction.major.service.cs1.planner=org.apache.accumulo.core.spi.compaction.DefaultCompactionPlanner |
| tserver.compaction.major.service.cs1.planner.opts.executors=[ |
| {"name":"small","type":"internal","maxSize":"16M","numThreads":8}, |
| {"name":"medium","type":"internal","maxSize":"128M","numThreads":4}, |
| {"name":"large","type":"internal","numThreads":2}] |
| </code></pre></div></div> |
| |
| <p>To assign compaction service cs1 to the table ci, you would use the following properties:</p> |
| |
| <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>config -t ci -s table.compaction.dispatcher=org.apache.accumulo.core.spi.compaction.SimpleCompactionDispatcher |
| config -t ci -s table.compaction.dispatcher.opts.service=cs1 |
| </code></pre></div></div> |
| |
| <p>A small modification to the |
| tserver.compaction.major.service.cs1.planner.opts.executors property in the |
example above would enable it to use external compactions. For example, if we
wanted all of the large compactions to be done externally, we would use this
configuration:</p>
| |
| <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>tserver.compaction.major.service.cs1.planner.opts.executors=[ |
| {"name":"small","type":"internal","maxSize":"16M","numThreads":8}, |
| {"name":"medium","type":"internal","maxSize":"128M","numThreads":4}, |
{"name":"large","type":"external","queue":"DCQ1"}]
| </code></pre></div></div> |
| |
<p>In this example the queue name DCQ1 is arbitrary; using different queue
names lets you define multiple pools of Compactors.</p>
| |
| <p>Behind these new configurations in 2.1 lies a new algorithm for choosing which |
| files to compact. This algorithm attempts to find the smallest set of files |
| that meets the compaction ratio criteria. Prior to 2.1, Accumulo looked for the |
| largest set of files that met the criteria. Both algorithms do logarithmic |
amounts of work. The new algorithm better utilizes the multiple thread pools
available for running compactions of different sizes.</p>
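<p>To make the criterion concrete, here is a minimal shell sketch of the ratio check. It assumes the commonly described rule that a candidate set of files qualifies for compaction when the largest file’s size times the compaction ratio is at most the total size of the set; the exact check in Accumulo’s planner may differ.</p>

```shell
# Hedged sketch of the compaction ratio check. Assumed rule: a candidate set
# of files qualifies when largest_file * ratio is at most the set's total size.
meets_ratio() { # usage: meets_ratio RATIO SIZE [SIZE...]
  local ratio=$1; shift
  local total=0 largest=0 s
  for s in "$@"; do
    total=$(( total + s ))
    if [ "$s" -gt "$largest" ]; then largest=$s; fi
  done
  if [ $(( largest * ratio )) -le "$total" ]; then echo yes; else echo no; fi
}

meets_ratio 2 10 10 10   # three similar-sized files qualify: prints yes
meets_ratio 2 100 5 5    # one dominant file does not: prints no
```

<p>Under this sketch, lowering the ratio makes more candidate sets qualify, which matches the later observation that a lower ratio means fewer files per Tablet at the cost of more compaction work.</p>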
| |
| <h3 id="compactor">Compactor</h3> |
| |
| <p>A Compactor is started with the name of the queue for which it will complete |
| major compactions. You pass in the queue name when starting the Compactor, like |
| so:</p> |
| |
| <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>bin/accumulo compactor -q DCQ1 |
| </code></pre></div></div> |
| |
<p>Once started, the Compactor tries to find the location of the
| Compaction Coordinator in ZooKeeper and connect to it. Then, it asks the |
| Compaction Coordinator for the next compaction job for the queue. The |
| Compaction Coordinator will return to the Compactor the necessary information to |
| run the major compaction, assuming there is work to be done. Note that the |
| class performing the major compaction in the Compactor is the same one used in |
| the Tablet Server, so we are just transferring all of the input parameters from |
| the Tablet Server to the Compactor. The Compactor communicates information back |
| to the Compaction Coordinator when the compaction has started, finished |
| (successfully or not), and during the compaction (progress updates).</p> |
| |
| <h3 id="compaction-coordinator">Compaction Coordinator</h3> |
| |
| <p>The Compaction Coordinator is a singleton process in the system like the |
Manager. Also, like the Manager, it supports standby Compaction Coordinators
| using locks in ZooKeeper. The Compaction Coordinator is started using the |
| command:</p> |
| |
| <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>bin/accumulo compaction-coordinator |
| </code></pre></div></div> |
| |
| <p>When running, the Compaction Coordinator polls the TabletServers for summary |
| information about their external compaction queues. It keeps track of the major |
| compaction priorities for each Tablet Server and queue. When a Compactor |
| requests the next major compaction job the Compaction Coordinator finds the |
| Tablet Server with the highest priority major compaction for that queue and |
| communicates with that Tablet Server to reserve an external compaction. The |
| priority in this case is an integer value based on the number of input files |
for the compaction. For system compactions, the number is negative, starting at
-32768 and increasing to -1; for user compactions it’s a non-negative number
starting at 0 and capped at 32767. When the Tablet Server reserves the
external compaction, an entry is written into the metadata table row for the
| Tablet with the address of the Compactor running the compaction and all of the |
| configuration information passed back from the Tablet Server. Below is an |
| example of the ecomp metadata column:</p> |
| |
| <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>2;10ba2e8ba2e8ba5 ecomp:ECID:94db8374-8275-4f89-ba8b-4c6b3908bc50 [] {"inputs":["hdfs://accucluster/accumulo/tables/2/t-00000ur/A00001y9.rf","hdfs://accucluster/accumulo/tables/2/t-00000ur/C00005lp.rf","hdfs://accucluster/accumulo/tables/2/t-00000ur/F0000dqm.rf","hdfs://accucluster/accumulo/tables/2/t-00000ur/F0000dq1.rf"],"nextFiles":[],"tmp":"hdfs://accucluster/accumulo/tables/2/t-00000ur/C0000dqs.rf_tmp","compactor":"10.2.0.139:9133","kind":"SYSTEM","executorId":"DCQ1","priority":-32754,"propDels":true,"selectedAll":false} |
| </code></pre></div></div> |
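<p>The priority ranges described above can be sketched as follows. This is purely illustrative: the ranges come from the text, but the exact function Accumulo uses to map input-file counts into those ranges may differ.</p>

```shell
# Illustrative sketch: system compactions occupy -32768..-1 and user
# compactions occupy 0..32767, both increasing with the input file count.
priority() { # usage: priority KIND NUM_FILES  (KIND is SYSTEM or USER)
  local kind=$1 nfiles=$2 p
  if [ "$kind" = "USER" ]; then
    p=$nfiles
    if [ "$p" -gt 32767 ]; then p=32767; fi
  else
    p=$(( -32768 + nfiles ))
    if [ "$p" -gt -1 ]; then p=-1; fi
  fi
  echo "$p"
}

priority SYSTEM 4   # prints -32764 under this sketch
priority USER 40    # prints 40; any user compaction outranks any system one
```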
| |
| <p>When the Compactor notifies the Compaction Coordinator that it has finished the |
| major compaction, the Compaction Coordinator attempts to notify the Tablet |
| Server and inserts an external compaction final state marker into the metadata |
| table. Below is an example of the final state marker:</p> |
| |
| <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>~ecompECID:de6afc1d-64ae-4abf-8bce-02ec0a79aa6c : [] {"extent":{"tableId":"2"},"state":"FINISHED","fileSize":12354,"entries":100000} |
| </code></pre></div></div> |
| |
| <p>If the Compaction Coordinator is able to reach the Tablet Server and that Tablet |
| Server is still hosting the Tablet, then the compaction is committed and both |
| of the entries are removed from the metadata table. In the case that the Tablet |
| is offline when the compaction attempts to commit, there is a thread in the |
| Compaction Coordinator that looks for completed, but not yet committed, external |
| compactions and periodically attempts to contact the Tablet Server hosting the |
| Tablet to commit the compaction. The Compaction Coordinator periodically removes |
the final state markers related to Tablets that no longer exist. In the case of
an external compaction failure, the Compaction Coordinator notifies the Tablet,
and the Tablet cleans up file reservations and removes the metadata entry.</p>
| |
| <h3 id="edge-cases">Edge Cases</h3> |
| |
| <p>There are several situations involving external compactions that we tested as part of this feature. These are:</p> |
| |
| <ul> |
| <li>Tablet migration</li> |
<li>When a user-initiated compaction is canceled</li>
<li>When a Table is taken offline</li>
| <li>When a Tablet is split or merged</li> |
| <li>Coordinator restart</li> |
| <li>Tablet Server death</li> |
| <li>Table deletion</li> |
| </ul> |
| |
<p>Compactors periodically check if the compaction they are running is related to
a deleted table, a split or merged Tablet, or a canceled user-initiated compaction. If
any of these cases occurs, the Compactor interrupts the compaction and notifies
| the Compaction Coordinator. An external compaction continues in the case of |
| Tablet Server death, Tablet migration, Coordinator restart, and the Table being |
| taken offline.</p> |
| |
| <h2 id="cluster-test">Cluster Test</h2> |
| |
| <p>The following tests were run on a cluster to exercise this new feature.</p> |
| |
| <ol> |
<li>Ran continuous ingest for 24h with large compactions running externally in an autoscaled Kubernetes cluster.</li>
<li>After ingest completed, started a full table compaction with all compactions running externally.</li>
<li>Ran the continuous ingest verification process that looks for lost data.</li>
| </ol> |
| |
| <h3 id="setup">Setup</h3> |
| |
<p>For these tests Accumulo, ZooKeeper, and HDFS were run on a cluster in Azure
set up by Muchos, and external compactions were run in a separate Kubernetes
cluster running in Azure. The Accumulo cluster had the following
| configuration.</p> |
| |
| <ul> |
<li>CentOS 7</li>
<li>OpenJDK 11</li>
<li>ZooKeeper 3.6.2</li>
<li>Hadoop 3.3.0</li>
<li>Accumulo 2.1.0-SNAPSHOT <a href="https://github.com/apache/accumulo/commit/dad7e01ae7d450064cba5d60a1e0770311ebdb64">dad7e01</a></li>
<li>23 D16s_v4 VMs, each with 16x128G HDDs striped using LVM. 22 were workers.</li>
| </ul> |
| |
<p>The following diagram shows how the two clusters were set up. The Muchos and
| Kubernetes clusters were on the same private vnet, each with its own /16 subnet |
| in the 10.x.x.x IP address space. The Kubernetes cluster that ran external |
| compactions was backed by at least 3 D8s_v4 VMs, with VMs autoscaling with the |
| number of pods running.</p> |
| |
| <p><img src="/images/blog/202107_ecomp/clusters-layout.png" alt="Cluster Layout" /></p> |
| |
<p>One problem we ran into was communication between Compactors running inside
Kubernetes and processes like the Compaction Coordinator and DataNodes running
outside of Kubernetes in the Muchos cluster. For some insights into how these
problems were overcome, check out the comments in the <a href="/images/blog/202107_ecomp/accumulo-compactor-muchos.yaml">deployment
spec</a> used.</p>
| |
| <h3 id="configuration-1">Configuration</h3> |
| |
| <p>The following Accumulo shell commands set up a new compaction service named |
| cs1. This compaction service has an internal executor with 4 threads named |
| small for compactions less than 32M, an internal executor with 2 threads named |
| medium for compactions less than 128M, and an external compaction queue named |
| DCQ1 for all other compactions.</p> |
| |
| <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>config -s 'tserver.compaction.major.service.cs1.planner.opts.executors=[{"name":"small","type":"internal","maxSize":"32M","numThreads":4},{"name":"medium","type":"internal","maxSize":"128M","numThreads":2},{"name":"large","type":"external","queue":"DCQ1"}]' |
| config -s tserver.compaction.major.service.cs1.planner=org.apache.accumulo.core.spi.compaction.DefaultCompactionPlanner |
| </code></pre></div></div> |
| |
| <p>The continuous ingest table was configured to use the above compaction service. |
The table’s compaction ratio was also lowered from the default of 3 to 2. A
lower compaction ratio results in fewer files per Tablet and more compaction
work.</p>
| |
| <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>config -t ci -s table.compaction.dispatcher=org.apache.accumulo.core.spi.compaction.SimpleCompactionDispatcher |
| config -t ci -s table.compaction.dispatcher.opts.service=cs1 |
| config -t ci -s table.compaction.major.ratio=2 |
| </code></pre></div></div> |
| |
<p>The Compaction Coordinator was manually started on the Muchos VM where the
Accumulo Manager, ZooKeeper server, and the Namenode were running. The
| following command was used to do this.</p> |
| |
| <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>nohup accumulo compaction-coordinator &gt;/var/data/logs/accumulo/compaction-coordinator.out 2&gt;/var/data/logs/accumulo/compaction-coordinator.err &amp; |
| </code></pre></div></div> |
| |
| <p>To start Compactors, Accumulo’s |
| <a href="https://github.com/apache/accumulo-docker/tree/next-release">docker</a> image was |
| built from the <code class="language-plaintext highlighter-rouge">next-release</code> branch by checking out the Apache Accumulo git |
| repo at commit <a href="https://github.com/apache/accumulo/commit/dad7e01ae7d450064cba5d60a1e0770311ebdb64">dad7e01</a> and building the binary distribution using the |
| command <code class="language-plaintext highlighter-rouge">mvn clean package -DskipTests</code>. The resulting tar file was copied to |
| the accumulo-docker base directory and the image was built using the command:</p> |
| |
| <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker build --build-arg ACCUMULO_VERSION=2.1.0-SNAPSHOT --build-arg ACCUMULO_FILE=accumulo-2.1.0-SNAPSHOT-bin.tar.gz \ |
| --build-arg HADOOP_FILE=hadoop-3.3.0.tar.gz \ |
| --build-arg ZOOKEEPER_VERSION=3.6.2 --build-arg ZOOKEEPER_FILE=apache-zookeeper-3.6.2-bin.tar.gz \ |
| -t accumulo . |
| </code></pre></div></div> |
| |
| <p>The Docker image was tagged and then pushed to a container registry accessible by |
| Kubernetes. Then the following commands were run to start the Compactors using |
| <a href="/images/blog/202107_ecomp/accumulo-compactor-muchos.yaml">accumulo-compactor-muchos.yaml</a>. |
| The yaml file contains comments explaining issues related to IP addresses and DNS names.</p> |
| |
| <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>kubectl apply -f accumulo-compactor-muchos.yaml |
| kubectl autoscale deployment accumulo-compactor --cpu-percent=80 --min=10 --max=660 |
| </code></pre></div></div> |
| |
<p>The autoscale command causes Compactors to scale between 10
and 660 pods based on CPU usage. When the pods’ average CPU is above 80%,
pods are added to meet the 80% goal. When it’s below 80%, pods
are stopped to meet the 80% goal, with 5 minutes between scale-down
events. This can sometimes lead to running compactions being
stopped. During the test there were ~537 dead compactions that were probably
| caused by this (there were 44K successful external compactions). The max of 660 |
| was chosen based on the number of datanodes in the Muchos cluster. There were |
| 22 datanodes and 30x22=660, so this conceptually sets a limit of 30 external |
| compactions per datanode. This was well tolerated by the Muchos cluster. One |
| important lesson we learned is that external compactions can strain the HDFS |
| DataNodes, so it’s important to consider how many concurrent external |
| compactions will be running. The Muchos cluster had 22x16=352 cores on the |
| worker VMs, so the max of 660 exceeds what the Muchos cluster could run itself.</p> |
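<p>The sizing arithmetic above can be written out explicitly; the per-datanode budget of 30 is the assumption being made, and everything else follows from it:</p>

```shell
# Sizing heuristic from the text: cap Compactor pods at a per-datanode budget.
datanodes=22
per_datanode_budget=30       # assumed tolerable external compactions per datanode
max_pods=$(( datanodes * per_datanode_budget ))
worker_cores=$(( 22 * 16 ))  # 22 worker VMs with 16 cores each
echo "$max_pods"             # 660, the HPA --max used above
echo "$worker_cores"         # 352, so 660 exceeds the cluster's own core count
```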
| |
| <h3 id="ingesting-data">Ingesting data</h3> |
| |
| <p>After starting Compactors, 22 continuous ingest clients (from |
| accumulo-testing) were started. The following plot shows the number of |
| compactions running in the three different compaction queues |
| configured. The executor cs1_small is for compactions &lt;= 32M and it stayed |
pretty busy as minor compactions constantly produce new small files. In 2.1.0,
merging minor compactions were removed, so it’s important to ensure a
compaction queue is properly configured for new small files. The executor
| cs1_medium was for compactions &gt;32M and &lt;=128M and it was not as busy, but did |
| have steady work. The external compaction queue DCQ1 processed all compactions |
| over 128M and had some spikes of work. These spikes are to be expected with |
| continuous ingest as all Tablets are written to evenly and eventually all of |
| the Tablets need to run large compactions around the same time.</p> |
| |
| <p><img src="/images/blog/202107_ecomp/ci-running.png" alt="Compactions Running" /></p> |
| |
<p>The following plot shows the number of pods running in Kubernetes. As the
Compactors’ CPU usage rose and fell, the number of pods automatically scaled up
and down.</p>
| |
| <p><img src="/images/blog/202107_ecomp/ci-pods-running.png" alt="Pods Running" /></p> |
| |
<p>The following plot shows the number of compactions queued. When the
compactions queued for cs1_small spiked above 750, its thread pool was adjusted
from 4 threads per Tablet Server to 6. This configuration change was made while
| everything was running and the Tablet Servers saw it and reconfigured their thread |
| pools on the fly.</p> |
| |
<p><img src="/images/blog/202107_ecomp/ci-queued.png" alt="Compactions Queued" /></p>
| |
| <p>The metrics emitted by Accumulo for these plots had the following names.</p> |
| |
| <ul> |
| <li>TabletServer1.tserver.compactionExecutors.e_DCQ1_queued</li> |
| <li>TabletServer1.tserver.compactionExecutors.e_DCQ1_running</li> |
| <li>TabletServer1.tserver.compactionExecutors.i_cs1_medium_queued</li> |
| <li>TabletServer1.tserver.compactionExecutors.i_cs1_medium_running</li> |
| <li>TabletServer1.tserver.compactionExecutors.i_cs1_small_queued</li> |
| <li>TabletServer1.tserver.compactionExecutors.i_cs1_small_running</li> |
| </ul> |
| |
<p>Tablet Servers emit metrics about queued and running compactions for every
compaction executor configured. Users can observe these metrics and tune
the configuration based on what they see, as was done in this test.</p>
| |
| <p>The following plot shows the average files per Tablet during the |
| test. The numbers are what would be expected for a compaction ratio of 2 when |
the system is keeping up with compaction work. Also, animated GIFs were created to
show a few tablets’ <a href="/images/blog/202107_ecomp/files_over_time.html">files over time</a>.</p>
| |
| <p><img src="/images/blog/202107_ecomp/ci-files-per-tablet.png" alt="Files Per Tablet" /></p> |
| |
<p>The following is a plot of the number of Tablets during the test.
Eventually there were 11.28K Tablets, around 512 Tablets per Tablet Server. The
| Tablets were close to splitting again at the end of the test as each Tablet was |
| getting close to 1G.</p> |
| |
| <p><img src="/images/blog/202107_ecomp/ci-online-tablets.png" alt="Online Tablets" /></p> |
| |
<p>The following plot shows ingest rate over time. The rate goes down as the
number of Tablets per Tablet Server goes up, which is expected.</p>
| |
| <p><img src="/images/blog/202107_ecomp/ci-ingest-rate.png" alt="Ingest Rate" /></p> |
| |
| <p>The following plot shows the number of key/values in Accumulo during |
| the test. When ingest was stopped, there were 266 billion key values in the |
| continuous ingest table.</p> |
| |
| <p><img src="/images/blog/202107_ecomp/ci-entries.png" alt="Table Entries" /></p> |
| |
| <h3 id="full-table-compaction">Full table compaction</h3> |
| |
| <p>After stopping ingest and letting things settle, a full table compaction was |
| kicked off. Since all of these compactions would be over 128M, all of them were |
| scheduled on the external queue DCQ1. The two plots below show compactions |
| running and queued for the ~2 hours it took to do the compaction. When the |
compaction was initiated there were 10 Compactors running in pods. All 11K
Tablets were queued for compaction, and because the pods were always running at
high CPU, Kubernetes kept adding pods until the max was reached, resulting in 660
Compactors running until all the work was done.</p>
| |
<p><img src="/images/blog/202107_ecomp/full-table-compaction-queued.png" alt="Full Table Compactions Queued" /></p>
| |
<p><img src="/images/blog/202107_ecomp/full-table-compaction-running.png" alt="Full Table Compactions Running" /></p>
| |
| <h3 id="verification">Verification</h3> |
| |
| <p>After running everything mentioned above, the continuous ingest verification |
map reduce job was run. This job looks for holes in the linked list produced
by continuous ingest, which would indicate Accumulo lost data. No holes were found.
The counts below were emitted by the job. If there were holes, a non-zero
UNDEFINED count would be present.</p>
| |
| <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code> org.apache.accumulo.testing.continuous.ContinuousVerify$Counts |
| REFERENCED=266225036149 |
| UNREFERENCED=22010637 |
| </code></pre></div></div> |
| |
| <h2 id="hurdles">Hurdles</h2> |
| |
| <h3 id="how-to-scale-up">How to Scale Up</h3> |
| |
| <p>We ran into several issues running the Compactors in Kubernetes. First, we knew |
| that we could use Kubernetes <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/">Horizontal Pod Autoscaler</a> (HPA) to scale the |
| Compactors up and down based on load. But the question remained how to do that. |
| Probably the best metric to use for scaling the Compactors is the size of the |
| external compaction queue. Another possible solution is to take the DataNode |
| utilization into account somehow. We found that in scaling up the Compactors |
based on their CPU usage we could overload DataNodes. Once DataNodes were
overwhelmed, the Compactors’ CPU would drop and the number of pods would
naturally scale down.</p>
| |
| <p>To use custom metrics you would need to get the metrics from Accumulo into a |
| metrics store that has a <a href="https://github.com/kubernetes/metrics/blob/master/IMPLEMENTATIONS.md#custom-metrics-api">metrics adapter</a>. One possible solution, available |
| in Hadoop 3.3.0, is to use Prometheus, the <a href="https://github.com/kubernetes-sigs/prometheus-adapter">Prometheus Adapter</a>, and enable |
| the Hadoop PrometheusMetricsSink added in |
| <a href="https://issues.apache.org/jira/browse/HADOOP-16398">HADOOP-16398</a> to expose the custom queue |
| size metrics. This seemed like the right solution, but it also seemed like a |
| lot of work that was outside the scope of this blog post. Ultimately we decided |
| to take the simplest approach - use the native Kubernetes metrics-server and |
| scale off CPU usage of the Compactors. As you can see in the “Compactions Queued” |
| and “Compactions Running” graphs above from the full table compaction, it took about |
| 45 minutes for Kubernetes to scale up Compactors to the maximum configured (660). Compactors |
| likely would have been scaled up much faster if scaling was done off the queued compactions |
| instead of CPU usage.</p> |
| |
| <h3 id="gracefully-scaling-down">Gracefully Scaling Down</h3> |
| |
| <p>The Kubernetes Pod <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination">termination process</a> provides a mechanism for the user to |
| define a pre-stop hook that will be called before the Pod is terminated. |
| Without this hook Kubernetes sends a SIGTERM to the Pod, followed by a |
| user-defined grace period, then a SIGKILL. For the purposes of this test we did |
not define a pre-stop hook or a grace period. It’s likely possible to handle
this situation more gracefully, but for this test our Compactors were killed
and their compaction work was lost when the HPA decided to scale down the Compactors.
| It was a good test of how we handled failed Compactors. Investigation is |
| needed to determine if changes are needed in Accumulo to facilitate graceful |
| scale down.</p> |
| |
| <h3 id="how-to-connect">How to Connect</h3> |
| |
| <p>The other major issue we ran into was connectivity between the Compactors and |
| the other server processes. The Compactor communicates with ZooKeeper and the |
| Compaction Coordinator, both of which were running outside of Kubernetes. There |
is no common DNS between the Muchos and Kubernetes clusters, but IPs were
visible to both. The Compactor connects to ZooKeeper to find the address of the
| Compaction Coordinator so that it can connect to it and look for work. By |
default the Accumulo server processes use the hostname as their address, which
would not work because those names would not resolve inside the Kubernetes cluster.
We had to start the Accumulo processes using the <code class="language-plaintext highlighter-rouge">-a</code> argument and set the
hostname to the IP address. Solving connectivity issues between components
running in Kubernetes and components external to Kubernetes depends on the
capabilities available in the environment, and the <code class="language-plaintext highlighter-rouge">-a</code> option may be part of the solution.</p>
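<p>For example, the launch commands shown earlier in the post could be combined with the -a option as below. The IP addresses here are hypothetical placeholders; substitute addresses that are routable from inside the Kubernetes cluster.</p>

```
bin/accumulo compaction-coordinator -a 10.1.0.5
bin/accumulo compactor -a 10.2.0.17 -q DCQ1
```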
| |
| <h2 id="conclusion">Conclusion</h2> |
| |
<p>In this blog post we introduced the concept and benefits of external
compactions, the new server processes, and how to configure the compaction
service. We deployed a 23-node Accumulo cluster using Muchos alongside a
variable-sized Kubernetes cluster that dynamically scaled Compactors from 10 to
660 instances on 3 to 100 compute nodes. We ran continuous ingest on the
Accumulo cluster to create compactions that ran both internal and external to
the Tablet Server, and demonstrated external compactions completing
successfully even as Compactors were killed.</p>
| |
<p>We also discussed running the following tests, but did not have time:</p>
| |
| <ul> |
| <li>Agitating the Compaction Coordinator, Tablet Servers and Compactors while ingest was running.</li> |
| <li>Comparing the impact on queries for internal vs external compactions.</li> |
| <li>Having multiple external compaction queues, each with its own set of autoscaled Compactor pods.</li> |
| <li>Forcing full table compactions while ingest was running.</li> |
| </ul> |
| |
<p>The tests we ran show that the basic functionality works well; it would be
nice to stress the feature in other ways, though.</p>
| |
| </description> |
| <pubDate>Thu, 08 Jul 2021 00:00:00 +0000</pubDate> |
| <link>https://accumulo.apache.org/blog/2021/07/08/external-compactions.html</link> |
| <guid isPermaLink="true">https://accumulo.apache.org/blog/2021/07/08/external-compactions.html</guid> |
| |
| |
| <category>blog</category> |
| |
| </item> |
| |
| </channel> |
| </rss> |