Merge branch '2.0'
diff --git a/TESTING.md b/TESTING.md
index 322759e..51d4e16 100644
--- a/TESTING.md
+++ b/TESTING.md
@@ -17,42 +17,41 @@
 
 # Testing Apache Accumulo
 
-This document is meant to serve as a quick reference to the automated test suites included in Apache Accumulo for users
-to run which validate the product and developers to continue to iterate upon to ensure that the product is stable and as
-free of bugs as possible.
-
-The automated testing suite can be categorized as two sets of tests: unit tests and integration tests. These are the
-traditional unit and integrations tests as defined by the Apache Maven [lifecycle][3] phases.
+This document is meant to serve as a quick reference to the automated test suites of Accumulo.
 
 # Unit tests
 
 Unit tests can be run by invoking `mvn test` at the root of the Apache Accumulo source tree.  For more information see
-the [maven-surefire-plugin docs][4].
+the [maven-surefire-plugin docs][surefire].  This command will run just the unit tests:
 
-The unit tests should run rather quickly (order of minutes for the entire project) and, in nearly all cases, do not
-require any noticable amount of computer resources (the compilation of the files typically exceeds running the tests).
-Maven will automatically generate a report for each unit test run and will give a summary at the end of each Maven
-module for the total run/failed/errored/skipped tests.
+```bash
+mvn clean test -Dspotbugs.skip -DskipITs
+```
 
-The Apache Accumulo developers expect that these tests are always passing on every revision of the code. If this is not
-the case, it is almost certainly in error.
+# SpotBugs (formerly findbugs)
 
-# Integration tests
+[SpotBugs] will run by default when building Accumulo (unless "-Dspotbugs.skip" is used) and performs a thorough
+static analysis to find potential bugs.  There is also a security-focused SpotBugs plugin configured that can be run
+with this command:
 
-Integration tests can be run by invoking `mvn verify` at the root of the Apache Accumulo source tree.  For more
-information see the [maven-failsafe-plugin docs][5].
+```bash
+mvn clean verify -Psec-bugs -DskipTests
+```
 
-The integration tests are medium length tests (order minutes for each test class and order hours for the complete suite)
-but are checking for regressions that were previously seen in the codebase. These tests do require a noticable amount of
-resources, at least another gigabyte of memory over what Maven itself requires. As such, it's recommended to have at
-least 3-4GB of free memory and 10GB of free disk space.
+# Integration Tests
 
-## Test Categories
+The integration tests are medium-length tests that check for regressions. These tests require more memory than
+Maven itself requires. As such, it's recommended to have at least 3-4GB of free memory and 10GB of free disk space.
 
 Accumulo uses JUnit Category annotations to categorize certain integration tests based on their runtime requirements.
-Presently there are several different categories:
+The different categories are listed below.  To run a single IT, use the following command. This command will run just
+the WriteAheadLogIT (`-Dtest=foo` matches no unit test, so the unit tests are effectively skipped):
 
-### SunnyDay (`SunnyDayTests`)
+```bash
+mvn clean verify -Dit.test=WriteAheadLogIT -Dtest=foo -Dspotbugs.skip
+```
+
+## SunnyDay (`SunnyDayTests`)
 
 This test category represents a minimal set of tests chosen to verify the basic
 functionality of Accumulo. These would typically be run prior to submitting a
@@ -60,53 +59,42 @@
 were broken by the change.
 
 These tests will run by default during the `integration-test` lifecycle phase using `mvn verify`.
-To execute only these tests, use `mvn verify -Dfailsafe.groups=org.apache.accumulo.test.categories.SunnyDayTests`
-To execute everything except these tests, use `mvn verify -Dfailsafe.excludedGroups=org.apache.accumulo.test.categories.SunnyDayTests`
+To run all of the SunnyDay tests, run:
 
-### MiniAccumuloCluster (`MiniClusterOnlyTests`)
+```bash
+mvn clean verify -Psunny
+```
+
+## MiniAccumuloCluster (`MiniClusterOnlyTests`)
 
 These tests use MiniAccumuloCluster (MAC) which is a multi-process "implementation" of Accumulo, managed
 through Java APIs. This MiniAccumuloCluster has the ability to use the local filesystem or Apache Hadoop's
-MiniDFSCluster, as well as starting one to many tablet servers. MiniAccumuloCluster tends to be a very useful tool in
-that it can automatically provide a workable instance that mimics how an actual deployment functions.
+MiniDFSCluster, as well as starting one to many tablet servers. Most tests will be run in the local directory:
 
-The downside of using MiniAccumuloCluster is that a significant portion of each test is now devoted to starting and
-stopping the MiniAccumuloCluster.  While this is a surefire way to isolate tests from interferring with one another, it
-increases the actual runtime of the test by, on average, 10x. Some times the tests require the use of MAC because the
-test is being destructive or some special environment setup (e.g. Kerberos).
+```bash
+$ACCUMULO_HOME/test/target/mini-tests
+```
+
+The downside of using MiniAccumuloCluster is the extra time it takes to start and stop the MAC.
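
For orientation, a minimal sketch of starting a MAC directly (outside the test harness) might look like the
following; the directory, root password, and tablet server count here are illustrative assumptions:

```java
import java.io.File;

import org.apache.accumulo.minicluster.MiniAccumuloCluster;
import org.apache.accumulo.minicluster.MiniAccumuloConfig;

public class MacExample {
  public static void main(String[] args) throws Exception {
    // illustrative values: an empty scratch directory and a root password
    MiniAccumuloConfig cfg = new MiniAccumuloConfig(new File("/tmp/mac-example"), "rootPass");
    cfg.setNumTservers(2); // "one to many tablet servers", as described above
    MiniAccumuloCluster mac = new MiniAccumuloCluster(cfg);
    mac.start();
    try {
      System.out.println("instance=" + mac.getInstanceName() + " zk=" + mac.getZooKeepers());
    } finally {
      mac.stop(); // this start/stop cycle is the overhead mentioned above
    }
  }
}
```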
 
 These tests will run by default during the `integration-test` lifecycle phase using `mvn verify`.
-To execute only these tests, use `mvn verify -Dfailsafe.groups=org.apache.accumulo.test.categories.MiniClusterOnlyTests`
-To execute everything except these tests, use `mvn verify -Dfailsafe.excludedGroups=org.apache.accumulo.test.categories.MiniClusterOnlyTests`
+To run all the Mini tests, run:
+
+```bash
+mvn clean verify -Dspotbugs.skip
+```
 
-### Standalone Cluster (`StandaloneCapableClusterTests`)
+## Standalone Cluster (`StandaloneCapableClusterTests`)
 
-An alternative to the MiniAccumuloCluster for testing, a standalone Accumulo cluster can also be configured for use by
-most tests. This requires a manual step of building and deploying the Accumulo cluster by hand. The build can then be
-configured to use this cluster instead of always starting a MiniAccumuloCluster.  Not all of the integration tests are
-good candidates to run against a standalone Accumulo cluster, these tests will still launch a MiniAccumuloCluster for
-their use.
+A standalone Accumulo cluster can also be configured for use by most tests. Not all of the integration tests are good
+candidates to run against a standalone Accumulo cluster; those tests will still launch a MiniAccumuloCluster for their use.
 
-Use of a standalone cluster can be enabled using system properties on the Maven command line or, more concisely, by
-providing a Java properties file on the Maven command line. The use of a properties file is recommended since it is
-typically a fixed file per standalone cluster you want to run the tests against.
+These tests can be run by providing a system property.  This command will run all tests against a standalone cluster:
 
-These tests will run by default during the `integration-test` lifecycle phase using `mvn verify`.
-To execute only these tests, use `mvn verify -Dfailsafe.groups=org.apache.accumulo.test.categories.StandaloneCapableClusterTests`
-To execute everything except these tests, use `mvn verify -Dfailsafe.excludedGroups=org.apache.accumulo.test.categories.StandaloneCapableClusterTests`
+```bash
+mvn clean verify -Dtest=foo -Daccumulo.it.properties=/home/user/my_cluster.properties -Dfailsafe.groups=org.apache.accumulo.test.categories.StandaloneCapableClusterTests -Dspotbugs.skip
+```
 
-### Performance Tests (`PerformanceTests`)
-
-This category of tests refer to integration tests written specifically to
-exercise expected performance, which may be dependent on the available
-resources of the host machine. Normal integration tests should be capable of
-running anywhere with a lower-bound on available memory.
-
-These tests will run by default during the `integration-test` lifecycle phase using `mvn verify`.
-To execute only these tests, use `mvn verify -Dfailsafe.groups=org.apache.accumulo.test.categories.PerformanceTests`
-To execute everything except these tests, use `mvn verify -Dfailsafe.excludedGroups=org.apache.accumulo.test.categories.PerformanceTests`
-
-## Configuration for Standalone clusters
+### Configuration for Standalone clusters
 
 The following properties can be used to configure a standalone cluster:
 
@@ -130,7 +118,7 @@
 installations, these are just principal/username and password pairs. It is not required to create the users
 in Accumulo -- the provided admin user will be used to create the user accounts in Accumulo when necessary.
 
-Setting 5 users should be sufficient for all of the integration test's purposes. Each property is suffixed
+Setting 2 users should be sufficient for all of the integration tests' purposes. Each property is suffixed
 with an integer which groups the keytab or password with the username.
 
 - `accumulo.it.cluster.standalone.users.$x` The principal name
@@ -139,14 +127,11 @@
 
 Each of the above properties can be set on the command line (-Daccumulo.it.cluster.standalone.principal=root), or the
 collection can be placed into a properties file and referenced using "accumulo.it.properties". Properties
-specified on the command line override properties set in a file.  For example, the following might be similar to
-what is executed for a standalone cluster.
-
-  `mvn verify -Daccumulo.it.properties=/home/user/my_cluster.properties`
+specified on the command line override properties set in a file.
 
 ## MapReduce job for Integration tests
 
-[ACCUMULO-3871][6] (re)introduced the ability to parallelize the execution of the Integration Test suite by the use
+[ACCUMULO-3871][issue] (re)introduced the ability to parallelize the execution of the Integration Test suite by the use
 of MapReduce/YARN. When a YARN cluster is available, this can drastically reduce the amount of time to run all tests.
 
 To run the tests, you first need a list of the tests. A simple way to get a list is to scan the accumulo-test jar file for them.
@@ -168,10 +153,9 @@
 # Manual Distributed Testing
 
 Apache Accumulo has a number of tests which are suitable for running against large clusters for hours to days at a time.
-These test suites exist in the [accumulo-testing repo][2].
+These test suites exist in the [accumulo-testing repo][testing].
 
-[2]: https://github.com/apache/accumulo-testing
-[3]: https://maven.apache.org/guides/introduction/introduction-to-the-lifecycle.html
-[4]: http://maven.apache.org/surefire/maven-surefire-plugin/
-[5]: http://maven.apache.org/surefire/maven-failsafe-plugin/
-[6]: https://issues.apache.org/jira/browse/ACCUMULO-3871
+[testing]: https://github.com/apache/accumulo-testing
+[surefire]: http://maven.apache.org/surefire/maven-surefire-plugin/
+[issue]: https://issues.apache.org/jira/browse/ACCUMULO-3871
+[SpotBugs]: https://spotbugs.github.io/
diff --git a/assemble/pom.xml b/assemble/pom.xml
index ddbd95e..f679e57 100644
--- a/assemble/pom.xml
+++ b/assemble/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.accumulo</groupId>
     <artifactId>accumulo-project</artifactId>
-    <version>2.0.1-SNAPSHOT</version>
+    <version>2.1.0-SNAPSHOT</version>
   </parent>
   <artifactId>accumulo</artifactId>
   <packaging>pom</packaging>
diff --git a/core/pom.xml b/core/pom.xml
index 878ef0a..ccaf005 100644
--- a/core/pom.xml
+++ b/core/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.accumulo</groupId>
     <artifactId>accumulo-project</artifactId>
-    <version>2.0.1-SNAPSHOT</version>
+    <version>2.1.0-SNAPSHOT</version>
   </parent>
   <artifactId>accumulo-core</artifactId>
   <name>Apache Accumulo Core</name>
diff --git a/core/src/main/java/org/apache/accumulo/core/cli/ClientOpts.java b/core/src/main/java/org/apache/accumulo/core/cli/ClientOpts.java
index 7202d0b..59165f1 100644
--- a/core/src/main/java/org/apache/accumulo/core/cli/ClientOpts.java
+++ b/core/src/main/java/org/apache/accumulo/core/cli/ClientOpts.java
@@ -18,12 +18,15 @@
 
 import static java.nio.charset.StandardCharsets.UTF_8;
 
+import java.io.File;
+import java.io.IOException;
 import java.net.URL;
 import java.util.ArrayList;
 import java.util.Collections;
 import java.util.List;
 import java.util.Map;
 import java.util.Properties;
+import java.util.Scanner;
 
 import org.apache.accumulo.core.client.security.tokens.AuthenticationToken;
 import org.apache.accumulo.core.clientImpl.ClientInfoImpl;
@@ -37,8 +40,11 @@
 
 import com.beust.jcommander.IStringConverter;
 import com.beust.jcommander.Parameter;
+import com.beust.jcommander.ParameterException;
 import com.beust.jcommander.converters.IParameterSplitter;
 
+import edu.umd.cs.findbugs.annotations.SuppressFBWarnings;
+
 public class ClientOpts extends Help {
 
   public static class AuthConverter implements IStringConverter<Authorizations> {
@@ -48,26 +54,6 @@
     }
   }
 
-  public static class Password {
-    public byte[] value;
-
-    public Password(String dfault) {
-      value = dfault.getBytes(UTF_8);
-    }
-
-    @Override
-    public String toString() {
-      return new String(value, UTF_8);
-    }
-  }
-
-  public static class PasswordConverter implements IStringConverter<Password> {
-    @Override
-    public Password convert(String value) {
-      return new Password(value);
-    }
-  }
-
   public static class VisibilityConverter implements IStringConverter<ColumnVisibility> {
     @Override
     public ColumnVisibility convert(String value) {
@@ -82,6 +68,78 @@
     }
   }
 
+  public static class PasswordConverter implements IStringConverter<String> {
+    public static final String STDIN = "stdin";
+
+    private enum KeyType {
+      PASS("pass:"), ENV("env:") {
+        @Override
+        String process(String value) {
+          return System.getenv(value);
+        }
+      },
+      FILE("file:") {
+        @SuppressFBWarnings(value = "PATH_TRAVERSAL_IN",
+            justification = "app is run in same security context as user providing the filename")
+        @Override
+        String process(String value) {
+          Scanner scanner = null;
+          try {
+            scanner = new Scanner(new File(value), UTF_8);
+            return scanner.nextLine();
+          } catch (IOException e) {
+            throw new ParameterException(e);
+          } finally {
+            if (scanner != null) {
+              scanner.close();
+            }
+          }
+        }
+      },
+      STDIN(PasswordConverter.STDIN) {
+        @Override
+        public boolean matches(String value) {
+          return prefix.equals(value);
+        }
+
+        @Override
+        public String convert(String value) {
+          // the caller checks for the "stdin" sentinel later
+          return prefix;
+        }
+      };
+
+      String prefix;
+
+      private KeyType(String prefix) {
+        this.prefix = prefix;
+      }
+
+      public boolean matches(String value) {
+        return value.startsWith(prefix);
+      }
+
+      public String convert(String value) {
+        return process(value.substring(prefix.length()));
+      }
+
+      String process(String value) {
+        return value;
+      }
+    }
+
+    @Override
+    public String convert(String value) {
+      for (KeyType keyType : KeyType.values()) {
+        if (keyType.matches(value)) {
+          return keyType.convert(value);
+        }
+      }
+
+      return value;
+    }
+  }
+
   /**
    * A catch all for older legacy options that have been dropped. Most of them were replaced with
    * accumulo-client.properties in 2.0. Others have been dropped completely.
@@ -98,8 +156,11 @@
   public String principal = null;
 
   @Parameter(names = "--password", converter = PasswordConverter.class,
-      description = "Enter the connection password", password = true)
-  private Password securePassword = null;
+      description = "conection password (can be specified as '<password>', 'pass:<password>',"
+          + " 'file:<local file containing the password>' or 'env:<variable containing"
+          + " the pass>')",
+      password = true)
+  private String securePassword = null;
 
   public AuthenticationToken getToken() {
     return ClientProperty.getAuthenticationToken(getClientProps());
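
A hedged sketch of how the new converter resolves each supported prefix; the environment variable and file
path below are illustrative assumptions:

```java
import org.apache.accumulo.core.cli.ClientOpts;

public class PasswordConverterExample {
  public static void main(String[] args) {
    var conv = new ClientOpts.PasswordConverter();
    System.out.println(conv.convert("pass:secret"));      // -> "secret"
    System.out.println(conv.convert("env:ACCUMULO_PW"));  // -> value of $ACCUMULO_PW (assumed set)
    System.out.println(conv.convert("file:/tmp/pw.txt")); // -> first line of the file (assumed to exist)
    System.out.println(conv.convert("stdin"));            // -> the "stdin" sentinel, resolved later
    System.out.println(conv.convert("plain"));            // -> "plain" (no prefix matched)
  }
}
```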
diff --git a/core/src/main/java/org/apache/accumulo/core/cli/ConfigOpts.java b/core/src/main/java/org/apache/accumulo/core/cli/ConfigOpts.java
index b0d627e..ca0a234 100644
--- a/core/src/main/java/org/apache/accumulo/core/cli/ConfigOpts.java
+++ b/core/src/main/java/org/apache/accumulo/core/cli/ConfigOpts.java
@@ -42,12 +42,29 @@
   private String propsPath;
 
   public synchronized String getPropertiesPath() {
-    if (propsPath == null) {
-      propsPath = SiteConfiguration.getAccumuloPropsLocation().getFile();
-    }
     return propsPath;
   }
 
+  // catch-all for string-based dropped options, including those specific to subclassed extensions
+  // uncomment below if needed
+  // @Parameter(names = {}, hidden=true)
+  private String legacyOpts = null;
+
+  // catch-all for dropped boolean options, including those specific to subclassed extensions
+  @Parameter(names = {"-s", "--safemode"}, hidden = true)
+  private boolean legacyOptsBoolean = false;
+
+  // holds information on dealing with dropped options
+  // option -> Message describing replacement option or property
+  private static Map<String,String> LEGACY_OPTION_MSG = new HashMap<>();
+  static {
+    // garbage collector legacy options
+    LEGACY_OPTION_MSG.put("-s", "Replaced by configuration property " + Property.GC_SAFEMODE);
+    LEGACY_OPTION_MSG.put("--safemode",
+        "Replaced by configuration property " + Property.GC_SAFEMODE);
+
+  }
+
   public static class NullSplitter implements IParameterSplitter {
     @Override
     public List<String> split(String value) {
@@ -66,7 +83,9 @@
       justification = "process runs in same security context as admin who provided path")
   public synchronized SiteConfiguration getSiteConfiguration() {
     if (siteConfig == null) {
-      siteConfig = new SiteConfiguration(new File(getPropertiesPath()), getOverrides());
+      String propsPath = getPropertiesPath();
+      siteConfig = (propsPath == null ? SiteConfiguration.fromEnv()
+          : SiteConfiguration.fromFile(new File(propsPath))).withOverrides(getOverrides()).build();
     }
     return siteConfig;
   }
@@ -79,17 +98,17 @@
     Map<String,String> config = new HashMap<>();
     for (String prop : args) {
       String[] propArgs = prop.split("=", 2);
+      String key = propArgs[0].trim();
+      String value;
       if (propArgs.length == 2) {
-        String key = propArgs[0].trim();
-        String value = propArgs[1].trim();
-        if (key.isEmpty() || value.isEmpty()) {
-          throw new IllegalArgumentException("Invalid command line -o option: " + prop);
-        } else {
-          config.put(key, value);
-        }
-      } else {
+        value = propArgs[1].trim();
+      } else { // if a boolean property, then its mere existence implies true
+        value = Property.isValidBooleanPropertyKey(key) ? "true" : "";
+      }
+      if (key.isEmpty() || value.isEmpty()) {
         throw new IllegalArgumentException("Invalid command line -o option: " + prop);
       }
+      config.put(key, value);
     }
     return config;
   }
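
A self-contained sketch of the "-o key[=value]" parsing above; Property.isValidBooleanPropertyKey is
approximated with a fixed set, and the property keys are illustrative assumptions:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class OverrideParseSketch {
  // stand-in for Property.isValidBooleanPropertyKey (assumed key name)
  private static final Set<String> BOOLEAN_KEYS = Set.of("gc.safemode");

  static Map<String,String> parse(List<String> args) {
    Map<String,String> config = new HashMap<>();
    for (String prop : args) {
      String[] propArgs = prop.split("=", 2);
      String key = propArgs[0].trim();
      // a bare boolean key implies "true"; anything else bare is invalid
      String value = propArgs.length == 2 ? propArgs[1].trim()
          : (BOOLEAN_KEYS.contains(key) ? "true" : "");
      if (key.isEmpty() || value.isEmpty()) {
        throw new IllegalArgumentException("Invalid command line -o option: " + prop);
      }
      config.put(key, value);
    }
    return config;
  }

  public static void main(String[] args) {
    // prints both overrides, e.g. {gc.safemode=true, table.split.threshold=1G}
    System.out.println(parse(List.of("table.split.threshold=1G", "gc.safemode")));
  }
}
```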
@@ -97,6 +116,20 @@
   @Override
   public void parseArgs(String programName, String[] args, Object... others) {
     super.parseArgs(programName, args, others);
+    if (legacyOpts != null || legacyOptsBoolean) {
+      String errMsg = "";
+      for (String option : args) {
+        if (LEGACY_OPTION_MSG.containsKey(option)) {
+          errMsg +=
+              "Option " + option + " has been dropped - " + LEGACY_OPTION_MSG.get(option) + "\n";
+        }
+      }
+      errMsg += "See '-o' property override option";
+      // print the error to the console if run from the command line; otherwise there is
+      // no way to know that an error occurred
+      System.err.println(errMsg);
+      throw new IllegalArgumentException(errMsg);
+    }
     if (getOverrides().size() > 0) {
       log.info("The following configuration was set on the command line:");
       for (Map.Entry<String,String> entry : getOverrides().entrySet()) {
diff --git a/core/src/main/java/org/apache/accumulo/core/client/MultiTableBatchWriter.java b/core/src/main/java/org/apache/accumulo/core/client/MultiTableBatchWriter.java
index 8ea675d..0d9983e 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/MultiTableBatchWriter.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/MultiTableBatchWriter.java
@@ -21,7 +21,7 @@
  * each table, each has its own memory and network resources. Using this class these resources may
  * be shared among multiple tables.
  */
-public interface MultiTableBatchWriter {
+public interface MultiTableBatchWriter extends AutoCloseable {
 
   /**
    * Returns a BatchWriter for a particular table.
@@ -54,6 +54,7 @@
    *           when queued mutations are unable to be inserted
    *
    */
+  @Override
   void close() throws MutationsRejectedException;
 
   /**
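
Since the interface now extends AutoCloseable, callers can use try-with-resources. A minimal sketch,
assuming an existing AccumuloClient named "client" and a table named "table1":

```java
import org.apache.accumulo.core.client.AccumuloClient;
import org.apache.accumulo.core.client.BatchWriter;
import org.apache.accumulo.core.client.MultiTableBatchWriter;
import org.apache.accumulo.core.data.Mutation;

public class MtbwExample {
  static void write(AccumuloClient client) throws Exception {
    // close() (and thus a final flush) is now guaranteed on exit from the block
    try (MultiTableBatchWriter mtbw = client.createMultiTableBatchWriter()) {
      BatchWriter bw = mtbw.getBatchWriter("table1"); // "table1" is illustrative
      Mutation m = new Mutation("row1");
      m.put("cf", "cq", "value");
      bw.addMutation(m);
    } // a failed close() surfaces as MutationsRejectedException
  }
}
```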
diff --git a/core/src/main/java/org/apache/accumulo/core/client/ZooKeeperInstance.java b/core/src/main/java/org/apache/accumulo/core/client/ZooKeeperInstance.java
index 58ff546..9d6faa0 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/ZooKeeperInstance.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/ZooKeeperInstance.java
@@ -17,15 +17,12 @@
 package org.apache.accumulo.core.client;
 
 import static com.google.common.base.Preconditions.checkArgument;
-import static java.nio.charset.StandardCharsets.UTF_8;
 
-import java.util.Collections;
 import java.util.List;
 import java.util.Properties;
 import java.util.UUID;
 import java.util.concurrent.TimeUnit;
 
-import org.apache.accumulo.core.Constants;
 import org.apache.accumulo.core.client.security.tokens.AuthenticationToken;
 import org.apache.accumulo.core.clientImpl.ClientConfConverter;
 import org.apache.accumulo.core.clientImpl.ClientContext;
@@ -33,7 +30,9 @@
 import org.apache.accumulo.core.clientImpl.InstanceOperationsImpl;
 import org.apache.accumulo.core.conf.ClientProperty;
 import org.apache.accumulo.core.conf.ConfigurationTypeHelper;
-import org.apache.accumulo.core.metadata.RootTable;
+import org.apache.accumulo.core.metadata.schema.TabletMetadata.Location;
+import org.apache.accumulo.core.metadata.schema.TabletMetadata.LocationType;
+import org.apache.accumulo.core.metadata.schema.TabletsMetadata;
 import org.apache.accumulo.core.singletons.SingletonManager;
 import org.apache.accumulo.core.singletons.SingletonManager.Mode;
 import org.apache.accumulo.core.util.OpTimer;
@@ -123,59 +122,19 @@
   @Override
   public String getInstanceID() {
     if (instanceId == null) {
-      // want the instance id to be stable for the life of this instance object,
-      // so only get it once
-      String instanceNamePath = Constants.ZROOT + Constants.ZINSTANCES + "/" + instanceName;
-      byte[] iidb = zooCache.get(instanceNamePath);
-      if (iidb == null) {
-        throw new RuntimeException(
-            "Instance name " + instanceName + " does not exist in zookeeper. "
-                + "Run \"accumulo org.apache.accumulo.server.util.ListInstances\" to see a list.");
-      }
-      instanceId = new String(iidb, UTF_8);
+      instanceId = ZooUtil.getInstanceID(zooCache, instanceName);
     }
-
-    if (zooCache.get(Constants.ZROOT + "/" + instanceId) == null) {
-      if (instanceName == null)
-        throw new RuntimeException("Instance id " + instanceId + " does not exist in zookeeper");
-      throw new RuntimeException("Instance id " + instanceId + " pointed to by the name "
-          + instanceName + " does not exist in zookeeper");
-    }
-
+    ZooUtil.verifyInstanceId(zooCache, instanceId, instanceName);
     return instanceId;
   }
 
   @Override
   public List<String> getMasterLocations() {
-    String masterLocPath = ZooUtil.getRoot(getInstanceID()) + Constants.ZMASTER_LOCK;
-
-    OpTimer timer = null;
-
-    if (log.isTraceEnabled()) {
-      log.trace("tid={} Looking up master location in zookeeper.", Thread.currentThread().getId());
-      timer = new OpTimer().start();
-    }
-
-    byte[] loc = ZooUtil.getLockData(zooCache, masterLocPath);
-
-    if (timer != null) {
-      timer.stop();
-      log.trace("tid={} Found master at {} in {}", Thread.currentThread().getId(),
-          (loc == null ? "null" : new String(loc, UTF_8)),
-          String.format("%.3f secs", timer.scale(TimeUnit.SECONDS)));
-    }
-
-    if (loc == null) {
-      return Collections.emptyList();
-    }
-
-    return Collections.singletonList(new String(loc, UTF_8));
+    return ZooUtil.getMasterLocations(zooCache, getInstanceID());
   }
 
   @Override
   public String getRootTabletLocation() {
-    String zRootLocPath = ZooUtil.getRoot(getInstanceID()) + RootTable.ZROOT_TABLET_LOCATION;
-
     OpTimer timer = null;
 
     if (log.isTraceEnabled()) {
@@ -184,20 +143,20 @@
       timer = new OpTimer().start();
     }
 
-    byte[] loc = zooCache.get(zRootLocPath);
+    Location loc =
+        TabletsMetadata.getRootMetadata(ZooUtil.getRoot(getInstanceID()), zooCache).getLocation();
 
     if (timer != null) {
       timer.stop();
-      log.trace("tid={} Found root tablet at {} in {}", Thread.currentThread().getId(),
-          (loc == null ? "null" : new String(loc, UTF_8)),
+      log.trace("tid={} Found root tablet at {} in {}", Thread.currentThread().getId(), loc,
           String.format("%.3f secs", timer.scale(TimeUnit.SECONDS)));
     }
 
-    if (loc == null) {
+    if (loc == null || loc.getType() != LocationType.CURRENT) {
       return null;
     }
 
-    return new String(loc, UTF_8).split("\\|")[0];
+    return loc.getHostAndPort().toString();
   }
 
   @Override
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/partition/RangePartitioner.java b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/partition/RangePartitioner.java
index 1e11178..4eaf4e0 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/partition/RangePartitioner.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/partition/RangePartitioner.java
@@ -88,7 +88,7 @@
       try (
           InputStream inputStream =
               DistributedCacheHelper.openCachedFile(cutFileName, CUTFILE_KEY, conf);
-          Scanner in = new Scanner(inputStream, UTF_8.name())) {
+          Scanner in = new Scanner(inputStream, UTF_8)) {
         while (in.hasNextLine()) {
           cutPoints.add(new Text(Base64.getDecoder().decode(in.nextLine())));
         }
diff --git a/core/src/main/java/org/apache/accumulo/core/client/sample/AbstractHashSampler.java b/core/src/main/java/org/apache/accumulo/core/client/sample/AbstractHashSampler.java
index c4c5d63..6a15acc 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/sample/AbstractHashSampler.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/sample/AbstractHashSampler.java
@@ -27,7 +27,6 @@
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.sample.impl.DataoutputHasher;
 
-import com.google.common.collect.ImmutableSet;
 import com.google.common.hash.HashFunction;
 import com.google.common.hash.Hasher;
 import com.google.common.hash.Hashing;
@@ -56,7 +55,7 @@
   private HashFunction hashFunction;
   private int modulus;
 
-  private static final Set<String> VALID_OPTIONS = ImmutableSet.of("hasher", "modulus");
+  private static final Set<String> VALID_OPTIONS = Set.of("hasher", "modulus");
 
   /**
    * Subclasses with options should override this method and return true if the option is valid for
diff --git a/core/src/main/java/org/apache/accumulo/core/client/sample/RowColumnSampler.java b/core/src/main/java/org/apache/accumulo/core/client/sample/RowColumnSampler.java
index bcfe6af..3eabaea 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/sample/RowColumnSampler.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/sample/RowColumnSampler.java
@@ -24,8 +24,6 @@
 import org.apache.accumulo.core.data.ByteSequence;
 import org.apache.accumulo.core.data.Key;
 
-import com.google.common.collect.ImmutableSet;
-
 /**
  * This sampler can hash any subset of a Key's fields. The fields that hashed for the sample are
  * determined by the configuration options passed in {@link #init(SamplerConfiguration)}. The
@@ -69,7 +67,7 @@
   private boolean visibility = true;
 
   private static final Set<String> VALID_OPTIONS =
-      ImmutableSet.of("row", "family", "qualifier", "visibility");
+      Set.of("row", "family", "qualifier", "visibility");
 
   private boolean hashField(SamplerConfiguration config, String field) {
     String optValue = config.getOptions().get(field);
diff --git a/core/src/main/java/org/apache/accumulo/core/client/security/tokens/AuthenticationToken.java b/core/src/main/java/org/apache/accumulo/core/client/security/tokens/AuthenticationToken.java
index 12aade5..4de7ac7 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/security/tokens/AuthenticationToken.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/security/tokens/AuthenticationToken.java
@@ -63,7 +63,7 @@
         byte[] tokenBytes) {
       T type = null;
       try {
-        type = tokenType.newInstance();
+        type = tokenType.getDeclaredConstructor().newInstance();
       } catch (Exception e) {
         throw new IllegalArgumentException("Cannot instantiate " + tokenType.getName(), e);
       }
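
For context, the change above swaps the deprecated (since Java 9) Class.newInstance(), which propagated
checked constructor exceptions unwrapped, for the reflective pattern sketched here:

```java
public final class Instantiator {
  static <T> T instantiate(Class<T> type) {
    try {
      // unlike Class.newInstance(), this wraps constructor failures in InvocationTargetException
      return type.getDeclaredConstructor().newInstance();
    } catch (ReflectiveOperationException e) {
      throw new IllegalArgumentException("Cannot instantiate " + type.getName(), e);
    }
  }
}
```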
diff --git a/core/src/main/java/org/apache/accumulo/core/client/security/tokens/CredentialProviderToken.java b/core/src/main/java/org/apache/accumulo/core/client/security/tokens/CredentialProviderToken.java
index 65f82ca..0547aa4 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/security/tokens/CredentialProviderToken.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/security/tokens/CredentialProviderToken.java
@@ -23,7 +23,7 @@
 import java.util.LinkedHashSet;
 import java.util.Set;
 
-import org.apache.accumulo.core.conf.CredentialProviderFactoryShim;
+import org.apache.accumulo.core.conf.HadoopCredentialProvider;
 import org.apache.hadoop.conf.Configuration;
 
 /**
@@ -51,9 +51,9 @@
     this.name = name;
     this.credentialProviders = credentialProviders;
     final Configuration conf = new Configuration();
-    conf.set(CredentialProviderFactoryShim.CREDENTIAL_PROVIDER_PATH, credentialProviders);
+    HadoopCredentialProvider.setPath(conf, credentialProviders);
 
-    char[] password = CredentialProviderFactoryShim.getValueFromCredentialProvider(conf, name);
+    char[] password = HadoopCredentialProvider.getValue(conf, name);
 
     if (password == null) {
       throw new IOException(
diff --git a/core/src/main/java/org/apache/accumulo/core/client/summary/SummarizerConfiguration.java b/core/src/main/java/org/apache/accumulo/core/client/summary/SummarizerConfiguration.java
index acc7ab3..4e4c905 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/summary/SummarizerConfiguration.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/summary/SummarizerConfiguration.java
@@ -47,7 +47,7 @@
 
   private SummarizerConfiguration(String className, String configId, Map<String,String> options) {
     this.className = className;
-    this.options = ImmutableMap.copyOf(options);
+    this.options = Map.copyOf(options);
 
     if (configId == null) {
       ArrayList<String> keys = new ArrayList<>(this.options.keySet());
diff --git a/core/src/main/java/org/apache/accumulo/core/client/summary/Summary.java b/core/src/main/java/org/apache/accumulo/core/client/summary/Summary.java
index be8c771..a9893e6 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/summary/Summary.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/summary/Summary.java
@@ -19,8 +19,6 @@
 
 import java.util.Map;
 
-import com.google.common.collect.ImmutableMap;
-
 /**
  * This class encapsulates summary statistics, information about how those statistics were
  * generated, and information about files the statistics were obtained from.
@@ -117,13 +115,13 @@
     }
   }
 
-  private final ImmutableMap<String,Long> statistics;
+  private final Map<String,Long> statistics;
   private final SummarizerConfiguration config;
   private final FileStatistics fileStats;
 
   public Summary(Map<String,Long> summary, SummarizerConfiguration config, long totalFiles,
       long filesMissingSummary, long filesWithExtra, long filesWithLarge, long deletedFiles) {
-    this.statistics = ImmutableMap.copyOf(summary);
+    this.statistics = Map.copyOf(summary);
     this.config = config;
     this.fileStats = new FileStatistics(totalFiles, filesMissingSummary, filesWithExtra,
         filesWithLarge, deletedFiles);
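
Map.copyOf (JDK 10), like the Set.of (JDK 9) used elsewhere in this change, replaces Guava's immutable
collections: it returns an unmodifiable copy and rejects null keys and values. A small illustration:

```java
import java.util.HashMap;
import java.util.Map;

public class CopyOfExample {
  public static void main(String[] args) {
    Map<String,Long> stats = new HashMap<>();
    stats.put("min", 1L);
    Map<String,Long> frozen = Map.copyOf(stats);
    stats.put("max", 9L);        // later changes do not affect the copy
    System.out.println(frozen);  // {min=1}
    // frozen.put("max", 9L);    // would throw UnsupportedOperationException
  }
}
```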
diff --git a/core/src/main/java/org/apache/accumulo/core/clientImpl/ClientConfConverter.java b/core/src/main/java/org/apache/accumulo/core/clientImpl/ClientConfConverter.java
index 0550c72..ca012cb 100644
--- a/core/src/main/java/org/apache/accumulo/core/clientImpl/ClientConfConverter.java
+++ b/core/src/main/java/org/apache/accumulo/core/clientImpl/ClientConfConverter.java
@@ -25,8 +25,8 @@
 
 import org.apache.accumulo.core.conf.AccumuloConfiguration;
 import org.apache.accumulo.core.conf.ClientProperty;
-import org.apache.accumulo.core.conf.CredentialProviderFactoryShim;
 import org.apache.accumulo.core.conf.DefaultConfiguration;
+import org.apache.accumulo.core.conf.HadoopCredentialProvider;
 import org.apache.accumulo.core.conf.Property;
 import org.apache.accumulo.core.rpc.SaslConnectionParams;
 import org.apache.hadoop.security.authentication.util.KerberosName;
@@ -165,8 +165,7 @@
         if (property.isSensitive()) {
           org.apache.hadoop.conf.Configuration hadoopConf = getHadoopConfiguration();
           if (hadoopConf != null) {
-            char[] value =
-                CredentialProviderFactoryShim.getValueFromCredentialProvider(hadoopConf, key);
+            char[] value = HadoopCredentialProvider.getValue(hadoopConf, key);
             if (value != null) {
               log.trace("Loaded sensitive value for {} from CredentialProvider", key);
               return new String(value);
@@ -177,9 +176,9 @@
           }
         }
 
-        if (config.containsKey(key))
+        if (config.containsKey(key)) {
           return config.getString(key);
-        else {
+        } else {
           // Reconstitute the server kerberos property from the client config
           if (property == Property.GENERAL_KERBEROS_PRINCIPAL) {
             if (config.containsKey(
@@ -203,8 +202,9 @@
         Iterator<String> keyIter = config.getKeys();
         while (keyIter.hasNext()) {
           String key = keyIter.next();
-          if (filter.test(key))
+          if (filter.test(key)) {
             props.put(key, config.getString(key));
+          }
         }
 
         // Two client props that don't exist on the server config. Client doesn't need to know about
@@ -226,13 +226,12 @@
         // Attempt to load sensitive properties from a CredentialProvider, if configured
         org.apache.hadoop.conf.Configuration hadoopConf = getHadoopConfiguration();
         if (hadoopConf != null) {
-          for (String key : CredentialProviderFactoryShim.getKeys(hadoopConf)) {
+          for (String key : HadoopCredentialProvider.getKeys(hadoopConf)) {
             if (!Property.isValidPropertyKey(key) || !Property.isSensitive(key)) {
               continue;
             }
             if (filter.test(key)) {
-              char[] value =
-                  CredentialProviderFactoryShim.getValueFromCredentialProvider(hadoopConf, key);
+              char[] value = HadoopCredentialProvider.getValue(hadoopConf, key);
               if (value != null) {
                 props.put(key, new String(value));
               }
@@ -246,7 +245,7 @@
             config.getString(Property.GENERAL_SECURITY_CREDENTIAL_PROVIDER_PATHS.getKey());
         if (credProviderPaths != null && !credProviderPaths.isEmpty()) {
           org.apache.hadoop.conf.Configuration hConf = new org.apache.hadoop.conf.Configuration();
-          hConf.set(CredentialProviderFactoryShim.CREDENTIAL_PROVIDER_PATH, credProviderPaths);
+          HadoopCredentialProvider.setPath(hConf, credProviderPaths);
           return hConf;
         }
 
diff --git a/core/src/main/java/org/apache/accumulo/core/clientImpl/ClientContext.java b/core/src/main/java/org/apache/accumulo/core/clientImpl/ClientContext.java
index 01474b2..abaf282 100644
--- a/core/src/main/java/org/apache/accumulo/core/clientImpl/ClientContext.java
+++ b/core/src/main/java/org/apache/accumulo/core/clientImpl/ClientContext.java
@@ -17,10 +17,9 @@
 package org.apache.accumulo.core.clientImpl;
 
 import static com.google.common.base.Preconditions.checkArgument;
-import static java.nio.charset.StandardCharsets.UTF_8;
+import static org.apache.accumulo.core.metadata.schema.TabletMetadata.ColumnType.LOCATION;
 
 import java.nio.file.Path;
-import java.util.Collections;
 import java.util.List;
 import java.util.Objects;
 import java.util.Properties;
@@ -28,7 +27,6 @@
 import java.util.function.Function;
 import java.util.function.Supplier;
 
-import org.apache.accumulo.core.Constants;
 import org.apache.accumulo.core.client.AccumuloClient;
 import org.apache.accumulo.core.client.AccumuloException;
 import org.apache.accumulo.core.client.AccumuloSecurityException;
@@ -55,6 +53,10 @@
 import org.apache.accumulo.core.data.TableId;
 import org.apache.accumulo.core.master.state.tables.TableState;
 import org.apache.accumulo.core.metadata.RootTable;
+import org.apache.accumulo.core.metadata.schema.Ample;
+import org.apache.accumulo.core.metadata.schema.AmpleImpl;
+import org.apache.accumulo.core.metadata.schema.TabletMetadata.Location;
+import org.apache.accumulo.core.metadata.schema.TabletMetadata.LocationType;
 import org.apache.accumulo.core.rpc.SaslConnectionParams;
 import org.apache.accumulo.core.rpc.SslConnectionParams;
 import org.apache.accumulo.core.security.Authorizations;
@@ -200,6 +202,11 @@
     };
   }
 
+  public Ample getAmple() {
+    ensureOpen();
+    return new AmpleImpl(this);
+  }
+
   /**
    * Retrieve the credentials used to construct this context
    */
@@ -325,7 +332,6 @@
    */
   public String getRootTabletLocation() {
     ensureOpen();
-    String zRootLocPath = getZooKeeperRoot() + RootTable.ZROOT_TABLET_LOCATION;
 
     OpTimer timer = null;
 
@@ -335,20 +341,19 @@
       timer = new OpTimer().start();
     }
 
-    byte[] loc = zooCache.get(zRootLocPath);
+    Location loc = getAmple().readTablet(RootTable.EXTENT, LOCATION).getLocation();
 
     if (timer != null) {
       timer.stop();
-      log.trace("tid={} Found root tablet at {} in {}", Thread.currentThread().getId(),
-          (loc == null ? "null" : new String(loc, UTF_8)),
+      log.trace("tid={} Found root tablet at {} in {}", Thread.currentThread().getId(), loc,
           String.format("%.3f secs", timer.scale(TimeUnit.SECONDS)));
     }
 
-    if (loc == null) {
+    if (loc == null || loc.getType() != LocationType.CURRENT) {
       return null;
     }
 
-    return new String(loc, UTF_8).split("\\|")[0];
+    return loc.getHostAndPort().toString();
   }
 
   /**
@@ -358,29 +363,7 @@
    */
   public List<String> getMasterLocations() {
     ensureOpen();
-    String masterLocPath = getZooKeeperRoot() + Constants.ZMASTER_LOCK;
-
-    OpTimer timer = null;
-
-    if (log.isTraceEnabled()) {
-      log.trace("tid={} Looking up master location in zookeeper.", Thread.currentThread().getId());
-      timer = new OpTimer().start();
-    }
-
-    byte[] loc = ZooUtil.getLockData(zooCache, masterLocPath);
-
-    if (timer != null) {
-      timer.stop();
-      log.trace("tid={} Found master at {} in {}", Thread.currentThread().getId(),
-          (loc == null ? "null" : new String(loc, UTF_8)),
-          String.format("%.3f secs", timer.scale(TimeUnit.SECONDS)));
-    }
-
-    if (loc == null) {
-      return Collections.emptyList();
-    }
-
-    return Collections.singletonList(new String(loc, UTF_8));
+    return ZooUtil.getMasterLocations(zooCache, getInstanceID());
   }
 
   /**
@@ -392,25 +375,9 @@
     ensureOpen();
     final String instanceName = info.getInstanceName();
     if (instanceId == null) {
-      // want the instance id to be stable for the life of this instance object,
-      // so only get it once
-      String instanceNamePath = Constants.ZROOT + Constants.ZINSTANCES + "/" + instanceName;
-      byte[] iidb = zooCache.get(instanceNamePath);
-      if (iidb == null) {
-        throw new RuntimeException(
-            "Instance name " + instanceName + " does not exist in zookeeper. "
-                + "Run \"accumulo org.apache.accumulo.server.util.ListInstances\" to see a list.");
-      }
-      instanceId = new String(iidb, UTF_8);
+      instanceId = ZooUtil.getInstanceID(zooCache, instanceName);
     }
-
-    if (zooCache.get(Constants.ZROOT + "/" + instanceId) == null) {
-      if (instanceName == null)
-        throw new RuntimeException("Instance id " + instanceId + " does not exist in zookeeper");
-      throw new RuntimeException("Instance id " + instanceId + " pointed to by the name "
-          + instanceName + " does not exist in zookeeper");
-    }
-
+    ZooUtil.verifyInstanceId(zooCache, instanceId, instanceName);
     return instanceId;
   }
 
diff --git a/core/src/main/java/org/apache/accumulo/core/clientImpl/ConditionalWriterImpl.java b/core/src/main/java/org/apache/accumulo/core/clientImpl/ConditionalWriterImpl.java
index 4d2deb6..cd8d24c 100644
--- a/core/src/main/java/org/apache/accumulo/core/clientImpl/ConditionalWriterImpl.java
+++ b/core/src/main/java/org/apache/accumulo/core/clientImpl/ConditionalWriterImpl.java
@@ -30,7 +30,6 @@
 import java.util.Iterator;
 import java.util.List;
 import java.util.Map;
-import java.util.Map.Entry;
 import java.util.NoSuchElementException;
 import java.util.concurrent.BlockingQueue;
 import java.util.concurrent.DelayQueue;
@@ -283,8 +282,7 @@
         failedMutations.addAll(mutations2);
 
     } else {
-      for (QCMutation qcm : mutations)
-        qcm.resetDelay();
+      mutations.forEach(QCMutation::resetDelay);
       failedMutations.addAll(mutations);
     }
   }
@@ -303,8 +301,7 @@
           throw new TableOfflineException(Tables.getTableOfflineMsg(context, tableId));
 
     } catch (Exception e) {
-      for (QCMutation qcm : mutations)
-        qcm.queueResult(new Result(e, qcm, null));
+      mutations.forEach(qcm -> qcm.queueResult(new Result(e, qcm, null)));
 
       // do not want to queue anything that was put in before binMutations() failed
       failures.clear();
@@ -314,10 +311,7 @@
     if (failures.size() > 0)
       queueRetry(failures, null);
 
-    for (Entry<String,TabletServerMutations<QCMutation>> entry : binnedMutations.entrySet()) {
-      queue(entry.getKey(), entry.getValue());
-    }
-
+    binnedMutations.forEach(this::queue);
   }
 
   private void queue(String location, TabletServerMutations<QCMutation> mutations) {
@@ -342,8 +336,7 @@
     // this code reschedules the server for processing later... there may be other queues with
     // more data that need to be processed... also it will give the current server time to build
     // up more data... the thinking is that rescheduling instead of processing immediately will
-    // result
-    // in bigger batches and less RPC overhead
+    // result in bigger batches and less RPC overhead
 
     synchronized (serverQueue) {
       if (serverQueue.queue.size() > 0)
@@ -355,9 +348,9 @@
   }
 
   private TabletServerMutations<QCMutation> dequeue(String location) {
-    BlockingQueue<TabletServerMutations<QCMutation>> queue = getServerQueue(location).queue;
+    var queue = getServerQueue(location).queue;
 
-    ArrayList<TabletServerMutations<QCMutation>> mutations = new ArrayList<>();
+    var mutations = new ArrayList<TabletServerMutations<QCMutation>>();
     queue.drainTo(mutations);
 
     if (mutations.size() == 0)
@@ -370,10 +363,8 @@
       TabletServerMutations<QCMutation> tsm = mutations.get(0);
 
       for (int i = 1; i < mutations.size(); i++) {
-        for (Entry<KeyExtent,List<QCMutation>> entry : mutations.get(i).getMutations().entrySet()) {
-          tsm.getMutations().computeIfAbsent(entry.getKey(), k -> new ArrayList<>())
-              .addAll(entry.getValue());
-        }
+        mutations.get(i).getMutations().forEach((keyExtent, mutationList) -> tsm.getMutations()
+            .computeIfAbsent(keyExtent, k -> new ArrayList<>()).addAll(mutationList));
       }
 
       return tsm;
@@ -438,7 +429,6 @@
     queue(mutationList);
 
     return new RQIterator(resultQueue, count);
-
   }
 
   private class SendTask implements Runnable {
@@ -618,9 +608,6 @@
           new AccumuloSecurityException(context.getCredentials().getPrincipal(), tse.getCode(),
               Tables.getPrintableTableInfoFromId(context, tableId), tse);
       queueException(location, cmidToCm, ase);
-    } catch (TTransportException e) {
-      locator.invalidateCache(context, location.toString());
-      invalidateSession(location, cmidToCm, sessionId);
     } catch (TApplicationException tae) {
       queueException(location, cmidToCm, new AccumuloServerException(location.toString(), tae));
     } catch (TException e) {
@@ -741,25 +728,22 @@
       MutableLong cmid, Map<TKeyExtent,List<TConditionalMutation>> tmutations,
       CompressedIterators compressedIters) {
 
-    for (Entry<KeyExtent,List<QCMutation>> entry : mutations.getMutations().entrySet()) {
-      TKeyExtent tke = entry.getKey().toThrift();
-      ArrayList<TConditionalMutation> tcondMutaions = new ArrayList<>();
+    mutations.getMutations().forEach((keyExtent, mutationList) -> {
+      var tcondMutaions = new ArrayList<TConditionalMutation>();
 
-      List<QCMutation> condMutations = entry.getValue();
-
-      for (QCMutation cm : condMutations) {
+      for (var cm : mutationList) {
         TMutation tm = cm.toThrift();
 
         List<TCondition> conditions = convertConditions(cm, compressedIters);
 
-        cmidToCm.put(cmid.longValue(), new CMK(entry.getKey(), cm));
+        cmidToCm.put(cmid.longValue(), new CMK(keyExtent, cm));
         TConditionalMutation tcm = new TConditionalMutation(conditions, tm, cmid.longValue());
         cmid.increment();
         tcondMutaions.add(tcm);
       }
 
-      tmutations.put(tke, tcondMutaions);
-    }
+      tmutations.put(keyExtent.toThrift(), tcondMutaions);
+    });
   }
 
   private static final Comparator<Long> TIMESTAMP_COMPARATOR =
diff --git a/core/src/main/java/org/apache/accumulo/core/clientImpl/InstanceOperationsImpl.java b/core/src/main/java/org/apache/accumulo/core/clientImpl/InstanceOperationsImpl.java
index a452ce4..4456aee 100644
--- a/core/src/main/java/org/apache/accumulo/core/clientImpl/InstanceOperationsImpl.java
+++ b/core/src/main/java/org/apache/accumulo/core/clientImpl/InstanceOperationsImpl.java
@@ -18,6 +18,10 @@
 
 import static com.google.common.base.Preconditions.checkArgument;
 import static java.nio.charset.StandardCharsets.UTF_8;
+import static org.apache.accumulo.core.rpc.ThriftUtil.createClient;
+import static org.apache.accumulo.core.rpc.ThriftUtil.createTransport;
+import static org.apache.accumulo.core.rpc.ThriftUtil.getTServerClient;
+import static org.apache.accumulo.core.rpc.ThriftUtil.returnClient;
 
 import java.util.ArrayList;
 import java.util.Collections;
@@ -34,7 +38,6 @@
 import org.apache.accumulo.core.client.admin.InstanceOperations;
 import org.apache.accumulo.core.clientImpl.thrift.ConfigurationType;
 import org.apache.accumulo.core.clientImpl.thrift.ThriftSecurityException;
-import org.apache.accumulo.core.rpc.ThriftUtil;
 import org.apache.accumulo.core.tabletserver.thrift.TabletClientService;
 import org.apache.accumulo.core.tabletserver.thrift.TabletClientService.Client;
 import org.apache.accumulo.core.trace.TraceUtil;
@@ -45,7 +48,6 @@
 import org.apache.accumulo.fate.zookeeper.ZooCache;
 import org.apache.thrift.TException;
 import org.apache.thrift.transport.TTransport;
-import org.apache.thrift.transport.TTransportException;
 import org.slf4j.LoggerFactory;
 
 /**
@@ -112,11 +114,11 @@
     String path = context.getZooKeeperRoot() + Constants.ZTSERVERS;
     List<String> results = new ArrayList<>();
     for (String candidate : cache.getChildren(path)) {
-      List<String> children = cache.getChildren(path + "/" + candidate);
+      var children = cache.getChildren(path + "/" + candidate);
       if (children != null && children.size() > 0) {
-        List<String> copy = new ArrayList<>(children);
+        var copy = new ArrayList<>(children);
         Collections.sort(copy);
-        byte[] data = cache.get(path + "/" + candidate + "/" + copy.get(0));
+        var data = cache.get(path + "/" + candidate + "/" + copy.get(0));
         if (data != null && !"master".equals(new String(data, UTF_8))) {
           results.add(candidate);
         }
@@ -128,14 +130,13 @@
   @Override
   public List<ActiveScan> getActiveScans(String tserver)
       throws AccumuloException, AccumuloSecurityException {
-    final HostAndPort parsedTserver = HostAndPort.fromString(tserver);
+    final var parsedTserver = HostAndPort.fromString(tserver);
     Client client = null;
     try {
-      client = ThriftUtil.getTServerClient(parsedTserver, context);
+      client = getTServerClient(parsedTserver, context);
 
       List<ActiveScan> as = new ArrayList<>();
-      for (org.apache.accumulo.core.tabletserver.thrift.ActiveScan activeScan : client
-          .getActiveScans(TraceUtil.traceInfo(), context.rpcCreds())) {
+      for (var activeScan : client.getActiveScans(TraceUtil.traceInfo(), context.rpcCreds())) {
         try {
           as.add(new ActiveScanImpl(context, activeScan));
         } catch (TableNotFoundException e) {
@@ -143,15 +144,13 @@
         }
       }
       return as;
-    } catch (TTransportException e) {
-      throw new AccumuloException(e);
     } catch (ThriftSecurityException e) {
       throw new AccumuloSecurityException(e.user, e.code, e);
     } catch (TException e) {
       throw new AccumuloException(e);
     } finally {
       if (client != null)
-        ThriftUtil.returnClient(client);
+        returnClient(client);
     }
   }
 
@@ -165,26 +164,23 @@
   @Override
   public List<ActiveCompaction> getActiveCompactions(String tserver)
       throws AccumuloException, AccumuloSecurityException {
-    final HostAndPort parsedTserver = HostAndPort.fromString(tserver);
+    final var parsedTserver = HostAndPort.fromString(tserver);
     Client client = null;
     try {
-      client = ThriftUtil.getTServerClient(parsedTserver, context);
+      client = getTServerClient(parsedTserver, context);
 
       List<ActiveCompaction> as = new ArrayList<>();
-      for (org.apache.accumulo.core.tabletserver.thrift.ActiveCompaction activeCompaction : client
-          .getActiveCompactions(TraceUtil.traceInfo(), context.rpcCreds())) {
-        as.add(new ActiveCompactionImpl(context, activeCompaction));
+      for (var tac : client.getActiveCompactions(TraceUtil.traceInfo(), context.rpcCreds())) {
+        as.add(new ActiveCompactionImpl(context, tac));
       }
       return as;
-    } catch (TTransportException e) {
-      throw new AccumuloException(e);
     } catch (ThriftSecurityException e) {
       throw new AccumuloSecurityException(e.user, e.code, e);
     } catch (TException e) {
       throw new AccumuloException(e);
     } finally {
       if (client != null)
-        ThriftUtil.returnClient(client);
+        returnClient(client);
     }
   }
 
@@ -192,9 +188,8 @@
   public void ping(String tserver) throws AccumuloException {
     TTransport transport = null;
     try {
-      transport = ThriftUtil.createTransport(AddressUtil.parseAddress(tserver, false), context);
-      TabletClientService.Client client =
-          ThriftUtil.createClient(new TabletClientService.Client.Factory(), transport);
+      transport = createTransport(AddressUtil.parseAddress(tserver, false), context);
+      var client = createClient(new TabletClientService.Client.Factory(), transport);
       client.getTabletServerStatus(TraceUtil.traceInfo(), context.rpcCreds());
     } catch (TException e) {
       throw new AccumuloException(e);
@@ -223,9 +218,8 @@
     checkArgument(zooCache != null, "zooCache is null");
     checkArgument(instanceId != null, "instanceId is null");
     for (String name : zooCache.getChildren(Constants.ZROOT + Constants.ZINSTANCES)) {
-      String instanceNamePath = Constants.ZROOT + Constants.ZINSTANCES + "/" + name;
-      byte[] bytes = zooCache.get(instanceNamePath);
-      UUID iid = UUID.fromString(new String(bytes, UTF_8));
+      var bytes = zooCache.get(Constants.ZROOT + Constants.ZINSTANCES + "/" + name);
+      var iid = UUID.fromString(new String(bytes, UTF_8));
       if (iid.equals(instanceId)) {
         return name;
       }
@@ -235,7 +229,6 @@
 
   @Override
   public String getInstanceID() {
-
     return context.getInstanceID();
   }
 }
diff --git a/core/src/main/java/org/apache/accumulo/core/clientImpl/MultiTableBatchWriterImpl.java b/core/src/main/java/org/apache/accumulo/core/clientImpl/MultiTableBatchWriterImpl.java
index 0f5b973..068b41e 100644
--- a/core/src/main/java/org/apache/accumulo/core/clientImpl/MultiTableBatchWriterImpl.java
+++ b/core/src/main/java/org/apache/accumulo/core/clientImpl/MultiTableBatchWriterImpl.java
@@ -18,6 +18,7 @@
 
 import static com.google.common.base.Preconditions.checkArgument;
 
+import java.lang.ref.Cleaner.Cleanable;
 import java.util.concurrent.ConcurrentHashMap;
 import java.util.concurrent.atomic.AtomicBoolean;
 
@@ -29,6 +30,7 @@
 import org.apache.accumulo.core.client.TableOfflineException;
 import org.apache.accumulo.core.data.Mutation;
 import org.apache.accumulo.core.data.TableId;
+import org.apache.accumulo.core.util.cleaner.CleanerUtil;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -37,13 +39,12 @@
 public class MultiTableBatchWriterImpl implements MultiTableBatchWriter {
 
   private static final Logger log = LoggerFactory.getLogger(MultiTableBatchWriterImpl.class);
-  private AtomicBoolean closed;
 
   private class TableBatchWriter implements BatchWriter {
 
-    private TableId tableId;
+    private final TableId tableId;
 
-    TableBatchWriter(TableId tableId) {
+    private TableBatchWriter(TableId tableId) {
       this.tableId = tableId;
     }
 
@@ -69,47 +70,34 @@
       throw new UnsupportedOperationException(
           "Must flush all tables, can not flush an individual table");
     }
-
   }
 
-  private TabletServerBatchWriter bw;
-  private ConcurrentHashMap<TableId,BatchWriter> tableWriters;
+  private final ConcurrentHashMap<TableId,BatchWriter> tableWriters = new ConcurrentHashMap<>();
+  private final AtomicBoolean closed = new AtomicBoolean(false);
   private final ClientContext context;
+  private final TabletServerBatchWriter bw;
+  private final Cleanable cleanable;
 
-  public MultiTableBatchWriterImpl(ClientContext context, BatchWriterConfig config) {
+  MultiTableBatchWriterImpl(ClientContext context, BatchWriterConfig config) {
     checkArgument(context != null, "context is null");
     checkArgument(config != null, "config is null");
     this.context = context;
     this.bw = new TabletServerBatchWriter(context, config);
-    tableWriters = new ConcurrentHashMap<>();
-    this.closed = new AtomicBoolean(false);
+    this.cleanable = CleanerUtil.unclosed(this, MultiTableBatchWriter.class, closed, log, bw);
   }
 
   @Override
   public boolean isClosed() {
-    return this.closed.get();
+    return closed.get();
   }
 
   @Override
   public void close() throws MutationsRejectedException {
-    this.closed.set(true);
-    bw.close();
-  }
-
-  // WARNING: do not rely upon finalize to close this class. Finalize is not guaranteed to be
-  // called.
-  @Override
-  protected void finalize() {
-    if (!closed.get()) {
-      log.warn("{} not shutdown; did you forget to call close()?",
-          MultiTableBatchWriterImpl.class.getSimpleName());
-      try {
-        close();
-      } catch (MutationsRejectedException mre) {
-        log.error(MultiTableBatchWriterImpl.class.getSimpleName() + " internal error.", mre);
-        throw new RuntimeException(
-            "Exception when closing " + MultiTableBatchWriterImpl.class.getSimpleName(), mre);
-      }
+    if (closed.compareAndSet(false, true)) {
+      // deregister cleanable, but it won't run because it checks
+      // the value of closed first, which is now true
+      cleanable.clean();
+      bw.close();
     }
   }
 
diff --git a/core/src/main/java/org/apache/accumulo/core/clientImpl/OfflineIterator.java b/core/src/main/java/org/apache/accumulo/core/clientImpl/OfflineIterator.java
index 9abe346..38394da 100644
--- a/core/src/main/java/org/apache/accumulo/core/clientImpl/OfflineIterator.java
+++ b/core/src/main/java/org/apache/accumulo/core/clientImpl/OfflineIterator.java
@@ -16,6 +16,9 @@
  */
 package org.apache.accumulo.core.clientImpl;
 
+import static org.apache.accumulo.core.metadata.schema.TabletMetadata.ColumnType.FILES;
+import static org.apache.accumulo.core.metadata.schema.TabletMetadata.ColumnType.LOCATION;
+import static org.apache.accumulo.core.metadata.schema.TabletMetadata.ColumnType.PREV_ROW;
 import static org.apache.accumulo.fate.util.UtilWaitThread.sleepUninterruptibly;
 
 import java.io.IOException;
@@ -286,7 +289,7 @@
 
   private TabletMetadata getTabletFiles(Range nextRange) {
     try (TabletsMetadata tablets = TabletsMetadata.builder().scanMetadataTable()
-        .overRange(nextRange).fetchFiles().fetchLocation().fetchPrev().build(context)) {
+        .overRange(nextRange).fetch(FILES, LOCATION, PREV_ROW).build(context)) {
       return tablets.iterator().next();
     }
   }
diff --git a/core/src/main/java/org/apache/accumulo/core/clientImpl/RootTabletLocator.java b/core/src/main/java/org/apache/accumulo/core/clientImpl/RootTabletLocator.java
index e4e9d2e..1797ec3 100644
--- a/core/src/main/java/org/apache/accumulo/core/clientImpl/RootTabletLocator.java
+++ b/core/src/main/java/org/apache/accumulo/core/clientImpl/RootTabletLocator.java
@@ -16,6 +16,7 @@
  */
 package org.apache.accumulo.core.clientImpl;
 
+import static org.apache.accumulo.core.metadata.schema.TabletMetadata.ColumnType.LOCATION;
 import static org.apache.accumulo.fate.util.UtilWaitThread.sleepUninterruptibly;
 
 import java.util.Collection;
@@ -30,6 +31,8 @@
 import org.apache.accumulo.core.data.Range;
 import org.apache.accumulo.core.dataImpl.KeyExtent;
 import org.apache.accumulo.core.metadata.RootTable;
+import org.apache.accumulo.core.metadata.schema.TabletMetadata.Location;
+import org.apache.accumulo.core.metadata.schema.TabletMetadata.LocationType;
 import org.apache.accumulo.core.util.OpTimer;
 import org.apache.accumulo.fate.zookeeper.ZooCache;
 import org.apache.hadoop.io.Text;
@@ -91,9 +94,6 @@
   public void invalidateCache() {}
 
   protected TabletLocation getRootTabletLocation(ClientContext context) {
-    String zRootLocPath = context.getZooKeeperRoot() + RootTable.ZROOT_TABLET_LOCATION;
-    ZooCache zooCache = context.getZooCache();
-
     Logger log = LoggerFactory.getLogger(this.getClass());
 
     OpTimer timer = null;
@@ -104,23 +104,22 @@
       timer = new OpTimer().start();
     }
 
-    byte[] loc = zooCache.get(zRootLocPath);
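+    // read the root tablet location through Ample rather than parsing raw bytes from ZooKeeper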
+    Location loc = context.getAmple().readTablet(RootTable.EXTENT, LOCATION).getLocation();
 
     if (timer != null) {
       timer.stop();
-      log.trace("tid={} Found root tablet at {} in {}", Thread.currentThread().getId(),
-          (loc == null ? "null" : new String(loc)),
+      log.trace("tid={} Found root tablet at {} in {}", Thread.currentThread().getId(), loc,
           String.format("%.3f secs", timer.scale(TimeUnit.SECONDS)));
     }
 
-    if (loc == null) {
+    if (loc == null || loc.getType() != LocationType.CURRENT) {
       return null;
     }
 
-    String[] tokens = new String(loc).split("\\|");
+    String server = loc.getHostAndPort().toString();
 
-    if (lockChecker.isLockHeld(tokens[0], tokens[1]))
-      return new TabletLocation(RootTable.EXTENT, tokens[0], tokens[1]);
+    if (lockChecker.isLockHeld(server, loc.getSession()))
+      return new TabletLocation(RootTable.EXTENT, server, loc.getSession());
     else
       return null;
   }
diff --git a/core/src/main/java/org/apache/accumulo/core/clientImpl/ScannerOptions.java b/core/src/main/java/org/apache/accumulo/core/clientImpl/ScannerOptions.java
index d578bdc..f2e6af8 100644
--- a/core/src/main/java/org/apache/accumulo/core/clientImpl/ScannerOptions.java
+++ b/core/src/main/java/org/apache/accumulo/core/clientImpl/ScannerOptions.java
@@ -43,8 +43,6 @@
 import org.apache.accumulo.core.util.TextUtil;
 import org.apache.hadoop.io.Text;
 
-import com.google.common.collect.ImmutableMap;
-
 public class ScannerOptions implements ScannerBase {
 
   protected List<IterInfo> serverSideIteratorList = Collections.emptyList();
@@ -73,21 +71,25 @@
   @Override
   public synchronized void addScanIterator(IteratorSetting si) {
     checkArgument(si != null, "si is null");
-    if (serverSideIteratorList.size() == 0)
+    if (serverSideIteratorList.size() == 0) {
       serverSideIteratorList = new ArrayList<>();
+    }
 
     for (IterInfo ii : serverSideIteratorList) {
-      if (ii.iterName.equals(si.getName()))
+      if (ii.iterName.equals(si.getName())) {
         throw new IllegalArgumentException("Iterator name is already in use " + si.getName());
-      if (ii.getPriority() == si.getPriority())
+      }
+      if (ii.getPriority() == si.getPriority()) {
         throw new IllegalArgumentException(
             "Iterator priority is already in use " + si.getPriority());
+      }
     }
 
     serverSideIteratorList.add(new IterInfo(si.getPriority(), si.getIteratorClass(), si.getName()));
 
-    if (serverSideIteratorOptions.size() == 0)
+    if (serverSideIteratorOptions.size() == 0) {
       serverSideIteratorOptions = new HashMap<>();
+    }
 
     Map<String,String> opts = serverSideIteratorOptions.get(si.getName());
 
@@ -102,8 +104,9 @@
   public synchronized void removeScanIterator(String iteratorName) {
     checkArgument(iteratorName != null, "iteratorName is null");
     // if no iterators are set, we don't have it, so it is already removed
-    if (serverSideIteratorList.size() == 0)
+    if (serverSideIteratorList.size() == 0) {
       return;
+    }
 
     for (IterInfo ii : serverSideIteratorList) {
       if (ii.iterName.equals(iteratorName)) {
@@ -120,8 +123,9 @@
     checkArgument(iteratorName != null, "iteratorName is null");
     checkArgument(key != null, "key is null");
     checkArgument(value != null, "value is null");
-    if (serverSideIteratorOptions.size() == 0)
+    if (serverSideIteratorOptions.size() == 0) {
       serverSideIteratorOptions = new HashMap<>();
+    }
 
     Map<String,String> opts = serverSideIteratorOptions.get(iteratorName);
 
@@ -179,8 +183,9 @@
 
         dst.serverSideIteratorOptions = new HashMap<>();
         Set<Entry<String,Map<String,String>>> es = src.serverSideIteratorOptions.entrySet();
-        for (Entry<String,Map<String,String>> entry : es)
+        for (Entry<String,Map<String,String>> entry : es) {
           dst.serverSideIteratorOptions.put(entry.getKey(), new HashMap<>(entry.getValue()));
+        }
 
         dst.samplerConfig = src.samplerConfig;
         dst.batchTimeOut = src.batchTimeOut;
@@ -202,10 +207,11 @@
       throw new IllegalArgumentException("TimeOut must be positive : " + timeOut);
     }
 
-    if (timeout == 0)
+    if (timeout == 0) {
       this.timeOut = Long.MAX_VALUE;
-    else
+    } else {
       this.timeOut = timeUnit.toMillis(timeout);
+    }
   }
 
   @Override
@@ -274,7 +280,7 @@
 
   @Override
   public synchronized void setExecutionHints(Map<String,String> hints) {
-    this.executionHints = ImmutableMap.copyOf(Objects.requireNonNull(hints));
+    this.executionHints = Map.copyOf(Objects.requireNonNull(hints));
   }
 
 }
diff --git a/core/src/main/java/org/apache/accumulo/core/clientImpl/TableMap.java b/core/src/main/java/org/apache/accumulo/core/clientImpl/TableMap.java
index 0d1850b..43f665b 100644
--- a/core/src/main/java/org/apache/accumulo/core/clientImpl/TableMap.java
+++ b/core/src/main/java/org/apache/accumulo/core/clientImpl/TableMap.java
@@ -53,8 +53,8 @@
 
     List<String> tableIds = zooCache.getChildren(context.getZooKeeperRoot() + Constants.ZTABLES);
     Map<NamespaceId,String> namespaceIdToNameMap = new HashMap<>();
-    ImmutableMap.Builder<String,TableId> tableNameToIdBuilder = new ImmutableMap.Builder<>();
-    ImmutableMap.Builder<TableId,String> tableIdToNameBuilder = new ImmutableMap.Builder<>();
+    var tableNameToIdBuilder = ImmutableMap.<String,TableId>builder();
+    var tableIdToNameBuilder = ImmutableMap.<TableId,String>builder();
 
     // use StringBuilder to construct zPath string efficiently across many tables
     StringBuilder zPathBuilder = new StringBuilder();
diff --git a/core/src/main/java/org/apache/accumulo/core/clientImpl/TableOperationsImpl.java b/core/src/main/java/org/apache/accumulo/core/clientImpl/TableOperationsImpl.java
index 762b85c..a72e5da 100644
--- a/core/src/main/java/org/apache/accumulo/core/clientImpl/TableOperationsImpl.java
+++ b/core/src/main/java/org/apache/accumulo/core/clientImpl/TableOperationsImpl.java
@@ -22,6 +22,8 @@
 import static java.util.concurrent.TimeUnit.MILLISECONDS;
 import static java.util.concurrent.TimeUnit.SECONDS;
 import static java.util.stream.Collectors.toSet;
+import static org.apache.accumulo.core.metadata.schema.TabletMetadata.ColumnType.LOCATION;
+import static org.apache.accumulo.core.metadata.schema.TabletMetadata.ColumnType.PREV_ROW;
 import static org.apache.accumulo.fate.util.UtilWaitThread.sleepUninterruptibly;
 
 import java.io.BufferedReader;
@@ -1243,7 +1245,7 @@
         range = new Range(startRow, lastRow);
 
       TabletsMetadata tablets = TabletsMetadata.builder().scanMetadataTable().overRange(range)
-          .fetchLocation().fetchPrev().build(context);
+          .fetch(LOCATION, PREV_ROW).build(context);
 
       KeyExtent lastExtent = null;
 
diff --git a/core/src/main/java/org/apache/accumulo/core/clientImpl/TabletServerBatchDeleter.java b/core/src/main/java/org/apache/accumulo/core/clientImpl/TabletServerBatchDeleter.java
index ce8092e..9d5555f 100644
--- a/core/src/main/java/org/apache/accumulo/core/clientImpl/TabletServerBatchDeleter.java
+++ b/core/src/main/java/org/apache/accumulo/core/clientImpl/TabletServerBatchDeleter.java
@@ -40,7 +40,7 @@
 
   public TabletServerBatchDeleter(ClientContext context, TableId tableId,
       Authorizations authorizations, int numQueryThreads, BatchWriterConfig bwConfig) {
-    super(context, tableId, authorizations, numQueryThreads);
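+    // pass BatchDeleter.class as the scope class so the cleaner's unclosed-resource warning
+    // names the public interface rather than this implementation class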
+    super(context, BatchDeleter.class, tableId, authorizations, numQueryThreads);
     this.context = context;
     this.tableId = tableId;
     this.bwConfig = bwConfig;
diff --git a/core/src/main/java/org/apache/accumulo/core/clientImpl/TabletServerBatchReader.java b/core/src/main/java/org/apache/accumulo/core/clientImpl/TabletServerBatchReader.java
index 0a6777e..3f56668 100644
--- a/core/src/main/java/org/apache/accumulo/core/clientImpl/TabletServerBatchReader.java
+++ b/core/src/main/java/org/apache/accumulo/core/clientImpl/TabletServerBatchReader.java
@@ -18,11 +18,13 @@
 
 import static com.google.common.base.Preconditions.checkArgument;
 
+import java.lang.ref.Cleaner.Cleanable;
 import java.util.ArrayList;
 import java.util.Collection;
 import java.util.Iterator;
 import java.util.Map.Entry;
-import java.util.concurrent.ExecutorService;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.concurrent.atomic.AtomicInteger;
 
 import org.apache.accumulo.core.client.BatchScanner;
 import org.apache.accumulo.core.data.Key;
@@ -31,32 +33,32 @@
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.security.Authorizations;
 import org.apache.accumulo.core.util.SimpleThreadPool;
+import org.apache.accumulo.core.util.cleaner.CleanerUtil;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 public class TabletServerBatchReader extends ScannerOptions implements BatchScanner {
   private static final Logger log = LoggerFactory.getLogger(TabletServerBatchReader.class);
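+  // AtomicInteger replaces a synchronized static counter for handing out unique instance ids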
+  private static final AtomicInteger nextBatchReaderInstance = new AtomicInteger(1);
 
-  private TableId tableId;
-  private int numThreads;
-  private ExecutorService queryThreadPool;
-
+  private final int batchReaderInstance = nextBatchReaderInstance.getAndIncrement();
+  private final TableId tableId;
+  private final int numThreads;
+  private final SimpleThreadPool queryThreadPool;
   private final ClientContext context;
-  private ArrayList<Range> ranges;
+  private final Authorizations authorizations;
+  private final AtomicBoolean closed = new AtomicBoolean(false);
+  private final Cleanable cleanable;
 
-  private Authorizations authorizations = Authorizations.EMPTY;
-  private Throwable ex = null;
-
-  private static int nextBatchReaderInstance = 1;
-
-  private static synchronized int getNextBatchReaderInstance() {
-    return nextBatchReaderInstance++;
-  }
-
-  private final int batchReaderInstance = getNextBatchReaderInstance();
+  private ArrayList<Range> ranges = null;
 
   public TabletServerBatchReader(ClientContext context, TableId tableId,
       Authorizations authorizations, int numQueryThreads) {
+    this(context, BatchScanner.class, tableId, authorizations, numQueryThreads);
+  }
+
+  protected TabletServerBatchReader(ClientContext context, Class<?> scopeClass, TableId tableId,
+      Authorizations authorizations, int numQueryThreads) {
     checkArgument(context != null, "context is null");
     checkArgument(tableId != null, "tableId is null");
     checkArgument(authorizations != null, "authorizations is null");
@@ -67,14 +69,17 @@
 
     queryThreadPool =
         new SimpleThreadPool(numQueryThreads, "batch scanner " + batchReaderInstance + "-");
-
-    ranges = null;
-    ex = new Throwable();
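+    // register a cleaner as a safety net: it warns and shuts down the query thread pool if
+    // this scanner becomes unreachable without close() having been called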
+    cleanable = CleanerUtil.unclosed(this, scopeClass, closed, log, queryThreadPool.asCloseable());
   }
 
   @Override
   public void close() {
-    queryThreadPool.shutdownNow();
+    if (closed.compareAndSet(false, true)) {
+      // deregister cleanable, but it won't run because it checks
+      // the value of closed first, which is now true
+      cleanable.clean();
+      queryThreadPool.shutdownNow();
+    }
   }
 
   @Override
@@ -82,29 +87,17 @@
     return authorizations;
   }
 
-  // WARNING: do not rely upon finalize to close this class. Finalize is not guaranteed to be
-  // called.
-  @Override
-  protected void finalize() {
-    if (!queryThreadPool.isShutdown()) {
-      log.warn(TabletServerBatchReader.class.getSimpleName()
-          + " not shutdown; did you forget to call close()?", ex);
-      close();
-    }
-  }
-
   @Override
   public void setRanges(Collection<Range> ranges) {
     if (ranges == null || ranges.size() == 0) {
       throw new IllegalArgumentException("ranges must be non null and contain at least 1 range");
     }
 
-    if (queryThreadPool.isShutdown()) {
+    if (closed.get()) {
       throw new IllegalStateException("batch reader closed");
     }
 
     this.ranges = new ArrayList<>(ranges);
-
   }
 
   @Override
@@ -113,7 +106,7 @@
       throw new IllegalStateException("ranges not set");
     }
 
-    if (queryThreadPool.isShutdown()) {
+    if (closed.get()) {
       throw new IllegalStateException("batch reader closed");
     }
 
diff --git a/core/src/main/java/org/apache/accumulo/core/clientImpl/TabletServerBatchWriter.java b/core/src/main/java/org/apache/accumulo/core/clientImpl/TabletServerBatchWriter.java
index 8b766c6..97fe2a4 100644
--- a/core/src/main/java/org/apache/accumulo/core/clientImpl/TabletServerBatchWriter.java
+++ b/core/src/main/java/org/apache/accumulo/core/clientImpl/TabletServerBatchWriter.java
@@ -100,7 +100,7 @@
  *   + when a mutation enters the system memory is incremented
  *   + when a mutation successfully leaves the system memory is decremented
  */
-public class TabletServerBatchWriter {
+public class TabletServerBatchWriter implements AutoCloseable {
 
   private static final Logger log = LoggerFactory.getLogger(TabletServerBatchWriter.class);
 
@@ -324,6 +324,7 @@
     }
   }
 
+  @Override
   public synchronized void close() throws MutationsRejectedException {
 
     if (closed)
diff --git a/core/src/main/java/org/apache/accumulo/core/clientImpl/ThriftTransportPool.java b/core/src/main/java/org/apache/accumulo/core/clientImpl/ThriftTransportPool.java
index faf61ca..866ea7c 100644
--- a/core/src/main/java/org/apache/accumulo/core/clientImpl/ThriftTransportPool.java
+++ b/core/src/main/java/org/apache/accumulo/core/clientImpl/ThriftTransportPool.java
@@ -17,12 +17,13 @@
 package org.apache.accumulo.core.clientImpl;
 
 import java.security.SecureRandom;
+import java.util.ArrayDeque;
 import java.util.ArrayList;
 import java.util.Collections;
+import java.util.Deque;
 import java.util.HashMap;
 import java.util.HashSet;
 import java.util.Iterator;
-import java.util.LinkedList;
 import java.util.List;
 import java.util.Map;
 import java.util.Map.Entry;
@@ -49,21 +50,20 @@
   private long killTime = 1000 * 3;
 
   private static class CachedConnections {
-    LinkedList<CachedConnection> unreserved = new LinkedList<>();
+    Deque<CachedConnection> unreserved = new ArrayDeque<>(); // stack - LIFO
     Map<CachedTTransport,CachedConnection> reserved = new HashMap<>();
 
     public CachedConnection reserveAny() {
-      if (unreserved.size() > 0) {
-        CachedConnection cachedConnection = unreserved.removeFirst();
+
+      CachedConnection cachedConnection = unreserved.poll(); // pop that returns null when empty
+      if (cachedConnection != null) {
         cachedConnection.reserve();
         reserved.put(cachedConnection.transport, cachedConnection);
         if (log.isTraceEnabled()) {
           log.trace("Using existing connection to {}", cachedConnection.transport.cacheKey);
         }
-        return cachedConnection;
       }
-
-      return null;
+      return cachedConnection;
     }
   }
 
@@ -122,14 +122,14 @@
 
         synchronized (pool) {
           for (CachedConnections cachedConns : pool.getCache().values()) {
-            Iterator<CachedConnection> iter = cachedConns.unreserved.iterator();
-            while (iter.hasNext()) {
-              CachedConnection cachedConnection = iter.next();
+            Deque<CachedConnection> unres = cachedConns.unreserved;
 
-              if (System.currentTimeMillis() - cachedConnection.lastReturnTime > pool.killTime) {
-                connectionsToClose.add(cachedConnection);
-                iter.remove();
-              }
+            long currTime = System.currentTimeMillis();
+
+            // The following code is structured to avoid removing from the middle of the array
+            // deque, which would be costly. It also assumes the oldest connections are at the end.
+            while (!unres.isEmpty() && currTime - unres.peekLast().lastReturnTime > pool.killTime) {
+              connectionsToClose.add(unres.removeLast());
             }
 
             for (CachedConnection cachedConnection : cachedConns.reserved.values()) {
@@ -415,14 +415,8 @@
     cacheKey.precomputeHashCode();
     synchronized (this) {
       // atomically reserve location if it exists in cache
-      CachedConnections ccl = getCache().get(cacheKey);
-
-      if (ccl == null) {
-        ccl = new CachedConnections();
-        getCache().put(cacheKey, ccl);
-      }
-
-      CachedConnection cachedConnection = ccl.reserveAny();
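+      // computeIfAbsent collapses the get/null-check/put sequence into a single map call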
+      CachedConnection cachedConnection =
+          getCache().computeIfAbsent(cacheKey, ck -> new CachedConnections()).reserveAny();
       if (cachedConnection != null) {
         log.trace("Using existing connection to {}", cacheKey.getServer());
         return cachedConnection.transport;
@@ -505,13 +499,8 @@
 
     try {
       synchronized (this) {
-        CachedConnections cachedConns = getCache().get(cacheKey);
-
-        if (cachedConns == null) {
-          cachedConns = new CachedConnections();
-          getCache().put(cacheKey, cachedConns);
-        }
-
+        CachedConnections cachedConns =
+            getCache().computeIfAbsent(cacheKey, ck -> new CachedConnections());
         cachedConns.reserved.put(cc.transport, cc);
       }
     } catch (TransportPoolShutdownException e) {
@@ -541,22 +530,15 @@
 
             log.trace("Returned connection had error {}", ctsc.getCacheKey());
 
-            Long ecount = errorCount.get(ctsc.getCacheKey());
-            if (ecount == null)
-              ecount = 0L;
-            ecount++;
-            errorCount.put(ctsc.getCacheKey(), ecount);
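+            // merge increments the per-server error count, starting it at 1 on the first error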
+            Long ecount = errorCount.merge(ctsc.getCacheKey(), 1L, Long::sum);
 
-            Long etime = errorTime.get(ctsc.getCacheKey());
-            if (etime == null) {
-              errorTime.put(ctsc.getCacheKey(), System.currentTimeMillis());
-            }
+            // records the time of the first error seen for this server
+            errorTime.computeIfAbsent(ctsc.getCacheKey(), k -> System.currentTimeMillis());
 
-            if (ecount >= ERROR_THRESHOLD && !serversWarnedAbout.contains(ctsc.getCacheKey())) {
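+            // Set.add returns true only on first insertion, so each server is warned at most once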
+            if (ecount >= ERROR_THRESHOLD && serversWarnedAbout.add(ctsc.getCacheKey())) {
               log.warn(
                   "Server {} had {} failures in a short time period, will not complain anymore",
                   ctsc.getCacheKey(), ecount);
-              serversWarnedAbout.add(ctsc.getCacheKey());
             }
 
             cachedConnection.unreserve();
@@ -572,12 +554,12 @@
 
             cachedConnection.lastReturnTime = System.currentTimeMillis();
             cachedConnection.unreserve();
-            // Calling addFirst to use unreserved as LIFO queue. Using LIFO ensures that when the #
+            // Using LIFO ensures that when the #
             // of pooled connections exceeds the working set size that the
             // idle times at the end of the list grow. The connections with large idle times will be
             // cleaned up. Using a FIFO could continually reset the idle
             // times of all connections, even when there are more than the working set size.
-            cachedConns.unreserved.addFirst(cachedConnection);
+            cachedConns.unreserved.push(cachedConnection);
           }
           existInCache = true;
         }
diff --git a/core/src/main/java/org/apache/accumulo/core/clientImpl/bulk/ConcurrentKeyExtentCache.java b/core/src/main/java/org/apache/accumulo/core/clientImpl/bulk/ConcurrentKeyExtentCache.java
index 6c3ed42..88d4bc1 100644
--- a/core/src/main/java/org/apache/accumulo/core/clientImpl/bulk/ConcurrentKeyExtentCache.java
+++ b/core/src/main/java/org/apache/accumulo/core/clientImpl/bulk/ConcurrentKeyExtentCache.java
@@ -16,6 +16,8 @@
  */
 package org.apache.accumulo.core.clientImpl.bulk;
 
+import static org.apache.accumulo.core.metadata.schema.TabletMetadata.ColumnType.PREV_ROW;
+
 import java.util.ArrayList;
 import java.util.Collections;
 import java.util.HashSet;
@@ -84,7 +86,7 @@
   protected Stream<KeyExtent> lookupExtents(Text row)
       throws TableNotFoundException, AccumuloException, AccumuloSecurityException {
     return TabletsMetadata.builder().forTable(tableId).overlapping(row, null).checkConsistency()
-        .fetchPrev().build(ctx).stream().limit(100).map(TabletMetadata::getExtent);
+        .fetch(PREV_ROW).build(ctx).stream().limit(100).map(TabletMetadata::getExtent);
   }
 
   @Override
diff --git a/core/src/main/java/org/apache/accumulo/core/clientImpl/mapreduce/lib/ConfiguratorBase.java b/core/src/main/java/org/apache/accumulo/core/clientImpl/mapreduce/lib/ConfiguratorBase.java
index 82ab4f9..e747951 100644
--- a/core/src/main/java/org/apache/accumulo/core/clientImpl/mapreduce/lib/ConfiguratorBase.java
+++ b/core/src/main/java/org/apache/accumulo/core/clientImpl/mapreduce/lib/ConfiguratorBase.java
@@ -288,7 +288,7 @@
     try (InputStream inputStream = DistributedCacheHelper.openCachedFile(tokenFile,
         cachedTokenFileName(implementingClass), conf)) {
 
-      try (Scanner fileScanner = new Scanner(inputStream, UTF_8.name())) {
+      try (Scanner fileScanner = new Scanner(inputStream, UTF_8)) {
         while (fileScanner.hasNextLine()) {
           Credentials creds = Credentials.deserialize(fileScanner.nextLine());
           if (principal.equals(creds.getPrincipal())) {
diff --git a/core/src/main/java/org/apache/accumulo/core/conf/AccumuloConfiguration.java b/core/src/main/java/org/apache/accumulo/core/conf/AccumuloConfiguration.java
index 450b2f2..b3a784d 100644
--- a/core/src/main/java/org/apache/accumulo/core/conf/AccumuloConfiguration.java
+++ b/core/src/main/java/org/apache/accumulo/core/conf/AccumuloConfiguration.java
@@ -28,8 +28,10 @@
 import java.util.Optional;
 import java.util.OptionalInt;
 import java.util.TreeMap;
+import java.util.concurrent.atomic.AtomicReference;
 import java.util.concurrent.locks.Lock;
 import java.util.concurrent.locks.ReentrantLock;
+import java.util.function.Function;
 import java.util.function.Predicate;
 
 import org.apache.accumulo.core.conf.PropertyType.PortRange;
@@ -39,7 +41,6 @@
 import org.slf4j.LoggerFactory;
 
 import com.google.common.base.Preconditions;
-import com.google.common.collect.ImmutableMap;
 
 /**
  * A configuration object.
@@ -170,7 +171,7 @@
           Map<String,String> propMap = new HashMap<>();
           // The reason this caching exists is to avoid repeatedly making this expensive call.
           getProperties(propMap, key -> key.startsWith(property.getKey()));
-          propMap = ImmutableMap.copyOf(propMap);
+          propMap = Map.copyOf(propMap);
 
           // So that locking is not needed when reading from enum map, always create a new one.
           // Construct and populate map using a local var so its not visible
@@ -435,6 +436,90 @@
     return null;
   }
 
+  private static class RefCount<T> {
+    T obj;
+    long count;
+
+    RefCount(long c, T r) {
+      this.count = c;
+      this.obj = r;
+    }
+  }
+
+  private class DeriverImpl<T> implements Deriver<T> {
+
+    private final AtomicReference<RefCount<T>> refref = new AtomicReference<>();
+    private final Function<AccumuloConfiguration,T> converter;
+
+    DeriverImpl(Function<AccumuloConfiguration,T> converter) {
+      this.converter = converter;
+    }
+
+    /**
+     * This method was written with the goal of avoiding thread contention and minimizing
+     * recomputation. Configuration can be accessed frequently by many threads. Ideally, threads
+     * working on unrelated tasks would not impede each other when accessing config.
+     *
+     * To that end, synchronization and needless compare-and-set calls are avoided. For example,
+     * if 100 threads all called compare-and-set in a loop, that could cause significant
+     * contention.
+     */
+    @Override
+    public T derive() {
+
+      // very important to obtain this before possibly recomputing object
+      long uc = getUpdateCount();
+
+      RefCount<T> rc = refref.get();
+
+      if (rc == null || rc.count != uc) {
+        T newObj = converter.apply(AccumuloConfiguration.this);
+
+        // very important to record the update count that was obtained before recomputing.
+        RefCount<T> nrc = new RefCount<>(uc, newObj);
+
+        /*
+         * The return value of compareAndSet is intentionally ignored here. This code could loop,
+         * calling compareAndSet in order to avoid returning a stale object. However, after this
+         * function returns, the object could immediately become stale anyway, so stale objects
+         * cannot be prevented in the big picture. Looping here could cause thread contention
+         * without solving the overall staleness problem, which is why the return value is
+         * ignored. The following line is a best-effort attempt to make the result of this
+         * recomputation available to the next caller.
+         */
+        refref.compareAndSet(rc, nrc);
+
+        return nrc.obj;
+      }
+
+      return rc.obj;
+    }
+  }
+
+  /**
+   * Automatically regenerates an object whenever configuration changes. When configuration is not
+   * changing, keeps returning the same object. Implementations should be thread safe and eventually
+   * consistent. See {@link AccumuloConfiguration#newDeriver(Function)}
+   */
+  public interface Deriver<T> {
+    T derive();
+  }
+
+  /**
+   * Enables deriving an object from configuration and automatically deriving a new object any time
+   * configuration changes.
+   *
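+   * <p>
+   * A minimal sketch (with a hypothetical {@code computeLimit} helper): {@code Deriver<Long>
+   * limit = config.newDeriver(c -> computeLimit(c));} each call to {@code limit.derive()} then
+   * returns the same object until this configuration changes.
+   *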
+   * @param converter
+   *          This function is used to create an object from configuration. A reference to this
+   *          function will be kept and called by the returned deriver.
+   * @return A deriver that will automatically re-derive the object any time this
+   *         configuration changes. When configuration is not changing, the same object is returned.
+   *
+   */
+  public <T> Deriver<T> newDeriver(Function<AccumuloConfiguration,T> converter) {
+    return new DeriverImpl<>(converter);
+  }
+
   private static final String SCAN_EXEC_THREADS = "threads";
   private static final String SCAN_EXEC_PRIORITY = "priority";
   private static final String SCAN_EXEC_PRIORITIZER = "prioritizer";
diff --git a/core/src/main/java/org/apache/accumulo/core/conf/ClientConfigGenerate.java b/core/src/main/java/org/apache/accumulo/core/conf/ClientConfigGenerate.java
index 55e996f..189c21b 100644
--- a/core/src/main/java/org/apache/accumulo/core/conf/ClientConfigGenerate.java
+++ b/core/src/main/java/org/apache/accumulo/core/conf/ClientConfigGenerate.java
@@ -18,9 +18,8 @@
 
 import static java.nio.charset.StandardCharsets.UTF_8;
 
-import java.io.FileNotFoundException;
+import java.io.IOException;
 import java.io.PrintStream;
-import java.io.UnsupportedEncodingException;
 import java.util.Objects;
 import java.util.Set;
 import java.util.TreeMap;
@@ -187,10 +186,9 @@
    * @throws IllegalArgumentException
    *           if args is invalid
    */
-  public static void main(String[] args)
-      throws FileNotFoundException, UnsupportedEncodingException {
+  public static void main(String[] args) throws IOException {
     if (args.length == 2) {
-      try (PrintStream stream = new PrintStream(args[1], UTF_8.name())) {
+      try (PrintStream stream = new PrintStream(args[1], UTF_8)) {
         ClientConfigGenerate clientConfigGenerate = new ClientConfigGenerate(stream);
         if (args[0].equals("--generate-markdown")) {
           clientConfigGenerate.generateMarkdown();
diff --git a/core/src/main/java/org/apache/accumulo/core/conf/ConfigSanityCheck.java b/core/src/main/java/org/apache/accumulo/core/conf/ConfigSanityCheck.java
index e606057..a0d6fa8 100644
--- a/core/src/main/java/org/apache/accumulo/core/conf/ConfigSanityCheck.java
+++ b/core/src/main/java/org/apache/accumulo/core/conf/ConfigSanityCheck.java
@@ -158,8 +158,7 @@
       Class<?> requiredBaseClass) {
     try {
       ConfigurationTypeHelper.getClassInstance(null, className, requiredBaseClass);
-    } catch (ClassNotFoundException | InstantiationException | IllegalAccessException
-        | IOException e) {
+    } catch (IOException | ReflectiveOperationException e) {
       fatal(confOption + " has an invalid class name: " + className);
     } catch (ClassCastException e) {
       fatal(confOption + " must implement " + requiredBaseClass
diff --git a/core/src/main/java/org/apache/accumulo/core/conf/ConfigurationDocGen.java b/core/src/main/java/org/apache/accumulo/core/conf/ConfigurationDocGen.java
index 12420df..5ea19d9 100644
--- a/core/src/main/java/org/apache/accumulo/core/conf/ConfigurationDocGen.java
+++ b/core/src/main/java/org/apache/accumulo/core/conf/ConfigurationDocGen.java
@@ -18,9 +18,8 @@
 
 import static java.nio.charset.StandardCharsets.UTF_8;
 
-import java.io.FileNotFoundException;
+import java.io.IOException;
 import java.io.PrintStream;
-import java.io.UnsupportedEncodingException;
 import java.util.TreeMap;
 
 /**
@@ -152,10 +151,9 @@
    * @throws IllegalArgumentException
    *           if args is invalid
    */
-  public static void main(String[] args)
-      throws FileNotFoundException, UnsupportedEncodingException {
+  public static void main(String[] args) throws IOException {
     if (args.length == 2 && args[0].equals("--generate-markdown")) {
-      new ConfigurationDocGen(new PrintStream(args[1], UTF_8.name())).generate();
+      new ConfigurationDocGen(new PrintStream(args[1], UTF_8)).generate();
     } else {
       throw new IllegalArgumentException(
           "Usage: " + ConfigurationDocGen.class.getName() + " --generate-markdown <filename>");
diff --git a/core/src/main/java/org/apache/accumulo/core/conf/ConfigurationTypeHelper.java b/core/src/main/java/org/apache/accumulo/core/conf/ConfigurationTypeHelper.java
index b4686ed..4ad8b19 100644
--- a/core/src/main/java/org/apache/accumulo/core/conf/ConfigurationTypeHelper.java
+++ b/core/src/main/java/org/apache/accumulo/core/conf/ConfigurationTypeHelper.java
@@ -172,8 +172,7 @@
 
     try {
       instance = getClassInstance(context, clazzName, base);
-    } catch (RuntimeException | ClassNotFoundException | IOException | InstantiationException
-        | IllegalAccessException e) {
+    } catch (RuntimeException | IOException | ReflectiveOperationException e) {
       log.warn("Failed to load class {}", clazzName, e);
     }
 
@@ -196,7 +195,7 @@
    * @return a new instance of the class
    */
   public static <T> T getClassInstance(String context, String clazzName, Class<T> base)
-      throws ClassNotFoundException, IOException, InstantiationException, IllegalAccessException {
+      throws IOException, ReflectiveOperationException {
     T instance;
 
     Class<? extends T> clazz;
@@ -206,7 +205,7 @@
       clazz = AccumuloVFSClassLoader.loadClass(clazzName, base);
     }
 
-    instance = clazz.newInstance();
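+    // Class.newInstance is deprecated since Java 9; invoke the no-arg constructor reflectively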
+    instance = clazz.getDeclaredConstructor().newInstance();
     if (loaded.put(clazzName, clazz) != clazz)
       log.debug("Loaded class : {}", clazzName);
 
diff --git a/core/src/main/java/org/apache/accumulo/core/conf/CredentialProviderFactoryShim.java b/core/src/main/java/org/apache/accumulo/core/conf/CredentialProviderFactoryShim.java
deleted file mode 100644
index 36374eb..0000000
--- a/core/src/main/java/org/apache/accumulo/core/conf/CredentialProviderFactoryShim.java
+++ /dev/null
@@ -1,441 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.core.conf;
-
-import static java.util.Objects.requireNonNull;
-
-import java.io.IOException;
-import java.lang.reflect.InvocationTargetException;
-import java.lang.reflect.Method;
-import java.util.ArrayList;
-import java.util.Collections;
-import java.util.List;
-import java.util.concurrent.ConcurrentHashMap;
-
-import org.apache.hadoop.conf.Configuration;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-/**
- * Shim around Hadoop: tries to use the CredentialProviderFactory provided by hadoop-common, falling
- * back to a copy inside accumulo-core.
- * <p>
- * The CredentialProvider classes only exist in 2.6.0, so, to use them, we have to do a bunch of
- * reflection. This will also help us to continue to support [2.2.0,2.6.0) when 2.6.0 is officially
- * released.
- */
-public class CredentialProviderFactoryShim {
-  private static final Logger log = LoggerFactory.getLogger(CredentialProviderFactoryShim.class);
-
-  public static final String HADOOP_CRED_PROVIDER_FACTORY_CLASS_NAME =
-      "org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory";
-  public static final String HADOOP_CRED_PROVIDER_FACTORY_GET_PROVIDERS_METHOD_NAME =
-      "getProviders";
-
-  public static final String HADOOP_CRED_PROVIDER_CLASS_NAME =
-      "org.apache.hadoop.security.alias.CredentialProvider";
-  public static final String HADOOP_CRED_PROVIDER_GET_CREDENTIAL_ENTRY_METHOD_NAME =
-      "getCredentialEntry";
-  public static final String HADOOP_CRED_PROVIDER_GET_ALIASES_METHOD_NAME = "getAliases";
-  public static final String HADOOP_CRED_PROVIDER_CREATE_CREDENTIAL_ENTRY_METHOD_NAME =
-      "createCredentialEntry";
-  public static final String HADOOP_CRED_PROVIDER_FLUSH_METHOD_NAME = "flush";
-
-  public static final String HADOOP_CRED_ENTRY_CLASS_NAME =
-      "org.apache.hadoop.security.alias.CredentialProvider$CredentialEntry";
-  public static final String HADOOP_CRED_ENTRY_GET_CREDENTIAL_METHOD_NAME = "getCredential";
-
-  public static final String CREDENTIAL_PROVIDER_PATH = "hadoop.security.credential.provider.path";
-
-  private static Object hadoopCredProviderFactory = null;
-  private static Method getProvidersMethod = null;
-  private static Method getAliasesMethod = null;
-  private static Method getCredentialEntryMethod = null;
-  private static Method getCredentialMethod = null;
-  private static Method createCredentialEntryMethod = null;
-  private static Method flushMethod = null;
-  private static Boolean hadoopClassesAvailable = null;
-
-  // access to cachedProviders should be synchronized when necessary (for example see
-  // getCredentialProviders)
-  private static final ConcurrentHashMap<String,List<Object>> cachedProviders =
-      new ConcurrentHashMap<>();
-
-  /**
-   * Determine if we can load the necessary CredentialProvider classes. Only loaded the first time,
-   * so subsequent invocations of this method should return fast.
-   *
-   * @return True if the CredentialProvider classes/methods are available, false otherwise.
-   */
-  public static synchronized boolean isHadoopCredentialProviderAvailable() {
-    // If we already found the class
-    if (hadoopClassesAvailable != null) {
-      // Make sure everything is initialized as expected
-      // Otherwise we failed to load it
-      return hadoopClassesAvailable && getProvidersMethod != null
-          && hadoopCredProviderFactory != null && getCredentialEntryMethod != null
-          && getCredentialMethod != null;
-    }
-
-    hadoopClassesAvailable = false;
-
-    // Load Hadoop CredentialProviderFactory
-    Class<?> hadoopCredProviderFactoryClz = null;
-    try {
-      hadoopCredProviderFactoryClz = Class.forName(HADOOP_CRED_PROVIDER_FACTORY_CLASS_NAME);
-    } catch (ClassNotFoundException e) {
-      log.trace("Could not load class {}", HADOOP_CRED_PROVIDER_FACTORY_CLASS_NAME, e);
-      return false;
-    }
-
-    // Load Hadoop CredentialProviderFactory.getProviders(Configuration)
-    try {
-      getProvidersMethod = hadoopCredProviderFactoryClz
-          .getMethod(HADOOP_CRED_PROVIDER_FACTORY_GET_PROVIDERS_METHOD_NAME, Configuration.class);
-    } catch (SecurityException | NoSuchMethodException e) {
-      log.trace("Could not find {} method on {}",
-          HADOOP_CRED_PROVIDER_FACTORY_GET_PROVIDERS_METHOD_NAME,
-          HADOOP_CRED_PROVIDER_FACTORY_CLASS_NAME, e);
-      return false;
-    }
-
-    // Instantiate Hadoop CredentialProviderFactory
-    try {
-      hadoopCredProviderFactory = hadoopCredProviderFactoryClz.newInstance();
-    } catch (InstantiationException | IllegalAccessException e) {
-      log.trace("Could not instantiate class {}", HADOOP_CRED_PROVIDER_FACTORY_CLASS_NAME, e);
-      return false;
-    }
-
-    // Load Hadoop CredentialProvider
-    Class<?> hadoopCredProviderClz = null;
-    try {
-      hadoopCredProviderClz = Class.forName(HADOOP_CRED_PROVIDER_CLASS_NAME);
-    } catch (ClassNotFoundException e) {
-      log.trace("Could not load class {}", HADOOP_CRED_PROVIDER_CLASS_NAME, e);
-      return false;
-    }
-
-    // Load Hadoop CredentialProvider.getCredentialEntry(String)
-    try {
-      getCredentialEntryMethod = hadoopCredProviderClz
-          .getMethod(HADOOP_CRED_PROVIDER_GET_CREDENTIAL_ENTRY_METHOD_NAME, String.class);
-    } catch (SecurityException | NoSuchMethodException e) {
-      log.trace("Could not find {} method on {}",
-          HADOOP_CRED_PROVIDER_GET_CREDENTIAL_ENTRY_METHOD_NAME, HADOOP_CRED_PROVIDER_CLASS_NAME,
-          e);
-      return false;
-    }
-
-    // Load Hadoop CredentialProvider.getAliases()
-    try {
-      getAliasesMethod =
-          hadoopCredProviderClz.getMethod(HADOOP_CRED_PROVIDER_GET_ALIASES_METHOD_NAME);
-    } catch (SecurityException | NoSuchMethodException e) {
-      log.trace("Could not find {} method on {}", HADOOP_CRED_PROVIDER_GET_ALIASES_METHOD_NAME,
-          HADOOP_CRED_PROVIDER_CLASS_NAME, e);
-      return false;
-    }
-
-    // Load Hadoop CredentialProvider.createCredentialEntry(String, char[])
-    try {
-      createCredentialEntryMethod = hadoopCredProviderClz.getMethod(
-          HADOOP_CRED_PROVIDER_CREATE_CREDENTIAL_ENTRY_METHOD_NAME, String.class, char[].class);
-    } catch (SecurityException | NoSuchMethodException e) {
-      log.trace("Could not find {} method on {}",
-          HADOOP_CRED_PROVIDER_CREATE_CREDENTIAL_ENTRY_METHOD_NAME, HADOOP_CRED_PROVIDER_CLASS_NAME,
-          e);
-      return false;
-    }
-
-    // Load Hadoop CredentialProvider.flush()
-    try {
-      flushMethod = hadoopCredProviderClz.getMethod(HADOOP_CRED_PROVIDER_FLUSH_METHOD_NAME);
-    } catch (SecurityException | NoSuchMethodException e) {
-      log.trace("Could not find {} method on {}", HADOOP_CRED_PROVIDER_FLUSH_METHOD_NAME,
-          HADOOP_CRED_PROVIDER_CLASS_NAME, e);
-      return false;
-    }
-
-    // Load Hadoop CredentialEntry
-    Class<?> hadoopCredentialEntryClz = null;
-    try {
-      hadoopCredentialEntryClz = Class.forName(HADOOP_CRED_ENTRY_CLASS_NAME);
-    } catch (ClassNotFoundException e) {
-      log.trace("Could not load class {}", HADOOP_CRED_ENTRY_CLASS_NAME);
-      return false;
-    }
-
-    // Load Hadoop CredentialEntry.getCredential()
-    try {
-      getCredentialMethod =
-          hadoopCredentialEntryClz.getMethod(HADOOP_CRED_ENTRY_GET_CREDENTIAL_METHOD_NAME);
-    } catch (SecurityException | NoSuchMethodException e) {
-      log.trace("Could not find {} method on {}", HADOOP_CRED_ENTRY_GET_CREDENTIAL_METHOD_NAME,
-          HADOOP_CRED_ENTRY_CLASS_NAME, e);
-      return false;
-    }
-
-    hadoopClassesAvailable = true;
-
-    return true;
-  }
-
-  /**
-   * Wrapper to fetch the configured {@code List<CredentialProvider>}s.
-   *
-   * @param conf
-   *          Configuration with Property#GENERAL_SECURITY_CREDENTIAL_PROVIDER_PATHS defined
-   * @return The List of CredentialProviders, or null if they could not be loaded
-   */
-  @SuppressWarnings("unchecked")
-  protected static List<Object> getCredentialProviders(Configuration conf) {
-    String path = conf.get(CREDENTIAL_PROVIDER_PATH);
-    if (path == null || path.isEmpty()) {
-      return null;
-    }
-
-    List<Object> providersList = cachedProviders.get(path);
-    if (providersList != null) {
-      return providersList;
-    }
-
-    // Call CredentialProviderFactory.getProviders(Configuration)
-    Object providersObj = null;
-    try {
-      providersObj = getProvidersMethod.invoke(hadoopCredProviderFactory, conf);
-    } catch (IllegalArgumentException | InvocationTargetException | IllegalAccessException e) {
-      log.warn("Could not invoke {}.{}", HADOOP_CRED_PROVIDER_FACTORY_CLASS_NAME,
-          HADOOP_CRED_PROVIDER_FACTORY_GET_PROVIDERS_METHOD_NAME, e);
-      return null;
-    }
-
-    // Cast the Object to List<Object> (actually List<CredentialProvider>)
-    try {
-      providersList = (List<Object>) providersObj;
-      List<Object> previousValue = cachedProviders.putIfAbsent(path, providersList);
-      if (previousValue != null) {
-        return previousValue;
-      } else {
-        return providersList;
-      }
-    } catch (ClassCastException e) {
-      log.error("Expected a List from {} method",
-          HADOOP_CRED_PROVIDER_FACTORY_GET_PROVIDERS_METHOD_NAME, e);
-      return null;
-    }
-  }
-
-  protected static char[] getFromHadoopCredentialProvider(Configuration conf, String alias) {
-    List<Object> providerObjList = getCredentialProviders(conf);
-
-    if (providerObjList == null) {
-      return null;
-    }
-
-    for (Object providerObj : providerObjList) {
-      try {
-        // Invoke CredentialProvider.getCredentialEntry(String)
-        Object credEntryObj = getCredentialEntryMethod.invoke(providerObj, alias);
-
-        if (credEntryObj == null) {
-          continue;
-        }
-
-        // Then, CredentialEntry.getCredential()
-        Object credential = getCredentialMethod.invoke(credEntryObj);
-
-        return (char[]) credential;
-      } catch (IllegalArgumentException | InvocationTargetException | IllegalAccessException e) {
-        log.warn("Failed to get credential for {} from {}", alias, providerObj, e);
-        continue;
-      }
-    }
-
-    // If we didn't find it, this isn't an error, it just wasn't set in the CredentialProvider
-    log.trace("Could not extract credential for {} from providers", alias);
-
-    return null;
-  }
-
-  @SuppressWarnings("unchecked")
-  protected static List<String> getAliasesFromHadoopCredentialProvider(Configuration conf) {
-    List<Object> providerObjList = getCredentialProviders(conf);
-
-    if (providerObjList == null) {
-      log.debug("Failed to get CredProviders");
-      return Collections.emptyList();
-    }
-
-    ArrayList<String> aliases = new ArrayList<>();
-    for (Object providerObj : providerObjList) {
-      if (providerObj != null) {
-        Object aliasesObj;
-        try {
-          aliasesObj = getAliasesMethod.invoke(providerObj);
-
-          if (aliasesObj != null && aliasesObj instanceof List) {
-            try {
-              aliases.addAll((List<String>) aliasesObj);
-            } catch (ClassCastException e) {
-              log.warn("Could not cast aliases ({}) from {} to a List<String>", aliasesObj,
-                  providerObj, e);
-              continue;
-            }
-          }
-
-        } catch (IllegalArgumentException | InvocationTargetException | IllegalAccessException e) {
-          log.warn("Failed to invoke {} on {}", HADOOP_CRED_PROVIDER_GET_ALIASES_METHOD_NAME,
-              providerObj, e);
-          continue;
-        }
-      }
-    }
-
-    return aliases;
-  }
-
-  /**
-   * Adds the Credential Provider configuration elements to the provided {@link Configuration}.
-   *
-   * @param conf
-   *          Existing Hadoop Configuration
-   * @param credentialProviders
-   *          Comma-separated list of CredentialProvider URLs
-   */
-  public static Configuration getConfiguration(Configuration conf, String credentialProviders) {
-    requireNonNull(conf);
-    requireNonNull(credentialProviders);
-    conf.set(CredentialProviderFactoryShim.CREDENTIAL_PROVIDER_PATH, credentialProviders);
-    return conf;
-  }
-
-  /**
-   * Attempt to extract the password from any configured CredentialsProviders for the given alias.
-   * If no providers or credential is found, null is returned.
-   *
-   * @param conf
-   *          Configuration for CredentialProvider
-   * @param alias
-   *          Name of CredentialEntry key
-   * @return The credential if found, null otherwise
-   */
-  public static char[] getValueFromCredentialProvider(Configuration conf, String alias) {
-    requireNonNull(conf);
-    requireNonNull(alias);
-    if (isHadoopCredentialProviderAvailable()) {
-      log.trace("Hadoop CredentialProvider is available, attempting to extract value for {}",
-          alias);
-      return getFromHadoopCredentialProvider(conf, alias);
-    }
-    return null;
-  }
-
-  /**
-   * Attempt to extract all aliases from any configured CredentialsProviders.
-   *
-   * @param conf
-   *          Configuration for the CredentialProvider
-   * @return A list of aliases. An empty list if no CredentialProviders are configured, or the
-   *         providers are empty.
-   */
-  public static List<String> getKeys(Configuration conf) {
-    requireNonNull(conf);
-
-    if (isHadoopCredentialProviderAvailable()) {
-      log.trace("Hadoop CredentialProvider is available, attempting to extract all aliases");
-      return getAliasesFromHadoopCredentialProvider(conf);
-    }
-
-    return Collections.emptyList();
-  }
-
-  /**
-   * Create a CredentialEntry using the configured Providers. If multiple CredentialProviders are
-   * configured, the first will be used.
-   *
-   * @param conf
-   *          Configuration for the CredentialProvider
-   * @param name
-   *          CredentialEntry name (alias)
-   * @param credential
-   *          The credential
-   */
-  public static void createEntry(Configuration conf, String name, char[] credential)
-      throws IOException {
-    requireNonNull(conf);
-    requireNonNull(name);
-    requireNonNull(credential);
-
-    if (!isHadoopCredentialProviderAvailable()) {
-      log.warn("Hadoop CredentialProvider is not available");
-      return;
-    }
-
-    List<Object> providers = getCredentialProviders(conf);
-    if (providers == null) {
-      throw new IOException(
-          "Could not fetch any CredentialProviders, is the implementation available?");
-    }
-
-    if (providers.size() != 1) {
-      log.warn("Found more than one CredentialProvider. Using first provider found");
-    }
-
-    Object provider = providers.get(0);
-    createEntryInProvider(provider, name, credential);
-  }
-
-  /**
-   * Create a CredentialEntry with the give name and credential in the credentialProvider. The
-   * credentialProvider argument must be an instance of Hadoop CredentialProvider.
-   *
-   * @param credentialProvider
-   *          Instance of CredentialProvider
-   * @param name
-   *          CredentialEntry name (alias)
-   * @param credential
-   *          The credential to store
-   */
-  public static void createEntryInProvider(Object credentialProvider, String name,
-      char[] credential) {
-    requireNonNull(credentialProvider);
-    requireNonNull(name);
-    requireNonNull(credential);
-
-    if (!isHadoopCredentialProviderAvailable()) {
-      log.warn("Hadoop CredentialProvider is not available");
-      return;
-    }
-
-    try {
-      createCredentialEntryMethod.invoke(credentialProvider, name, credential);
-    } catch (IllegalArgumentException e) {
-      log.warn("Failed to invoke createCredentialEntry method on CredentialProvider", e);
-      return;
-    } catch (IllegalAccessException | InvocationTargetException e) {
-      log.warn("Failed to invoke createCredentialEntry method", e);
-      return;
-    }
-
-    try {
-      flushMethod.invoke(credentialProvider);
-    } catch (IllegalArgumentException | InvocationTargetException | IllegalAccessException e) {
-      log.warn("Failed to invoke flush method on CredentialProvider", e);
-    }
-  }
-}
diff --git a/core/src/main/java/org/apache/accumulo/core/conf/HadoopCredentialProvider.java b/core/src/main/java/org/apache/accumulo/core/conf/HadoopCredentialProvider.java
new file mode 100644
index 0000000..58f4bfc
--- /dev/null
+++ b/core/src/main/java/org/apache/accumulo/core/conf/HadoopCredentialProvider.java
@@ -0,0 +1,159 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.core.conf;
+
+import static java.util.Objects.requireNonNull;
+
+import java.io.IOException;
+import java.util.Collections;
+import java.util.List;
+import java.util.Objects;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.stream.Collectors;
+import java.util.stream.Stream;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.security.alias.CredentialProvider;
+import org.apache.hadoop.security.alias.CredentialProviderFactory;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Shim around Hadoop's CredentialProviderFactory provided by hadoop-common.
+ */
+public class HadoopCredentialProvider {
+  private static final Logger log = LoggerFactory.getLogger(HadoopCredentialProvider.class);
+
+  private static final String CREDENTIAL_PROVIDER_PATH = "hadoop.security.credential.provider.path";
+
+  // access to cachedProviders should be synchronized when necessary
+  private static final ConcurrentHashMap<String,List<CredentialProvider>> cachedProviders =
+      new ConcurrentHashMap<>();
+
+  /**
+   * Set the Hadoop Credential Provider path in the provided Hadoop Configuration.
+   *
+   * @param conf
+   *          the Hadoop Configuration object
+   * @param path
+   *          the credential provider paths to set
+   */
+  public static void setPath(Configuration conf, String path) {
+    conf.set(CREDENTIAL_PROVIDER_PATH, path);
+  }
+
+  /**
+   * Fetch/cache the configured providers.
+   *
+   * @return The List of CredentialProviders, or null if they could not be loaded
+   */
+  private static List<CredentialProvider> getProviders(Configuration conf) {
+    String path = conf.get(CREDENTIAL_PROVIDER_PATH);
+    if (path == null || path.isEmpty()) {
+      log.debug("Failed to get CredentialProviders; no provider path specified");
+      return null;
+    }
+    final List<CredentialProvider> providers;
+    try {
+      providers = CredentialProviderFactory.getProviders(conf);
+    } catch (IOException e) {
+      log.warn("Exception invoking CredentialProviderFactory.getProviders(conf)", e);
+      return null;
+    }
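+    // cache by provider path; if another thread already cached a list for this path, that
+    // earlier list is returned and the freshly loaded one is discarded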
+    return cachedProviders.computeIfAbsent(path, p -> providers);
+  }
+
+  /**
+   * Attempt to extract the password from any configured CredentialProviders for the given alias. If
+   * no providers or credential is found, null is returned.
+   *
+   * @param conf
+   *          Configuration for CredentialProvider
+   * @param alias
+   *          Name of CredentialEntry key
+   * @return The credential if found, null otherwise
+   */
+  public static char[] getValue(Configuration conf, String alias) {
+    requireNonNull(alias);
+    List<CredentialProvider> providerList = getProviders(requireNonNull(conf));
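+    // search the providers in order and return the first credential found for the alias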
+    return providerList == null ? null : providerList.stream().map(provider -> {
+      try {
+        return provider.getCredentialEntry(alias);
+      } catch (IOException e) {
+        log.warn("Failed to call getCredentialEntry(alias) for provider {}", provider, e);
+        return null;
+      }
+    }).filter(Objects::nonNull).map(entry -> entry.getCredential()).findFirst().orElseGet(() -> {
+      // If we didn't find it, this isn't an error, it just wasn't set in the CredentialProvider
+      log.trace("Could not extract credential for {} from providers", alias);
+      return null;
+    });
+  }
+
+  /**
+   * Attempt to extract all aliases from any configured CredentialProviders.
+   *
+   * @param conf
+   *          Configuration for the CredentialProvider
+   * @return A list of aliases. An empty list if no CredentialProviders are configured, or the
+   *         providers are empty.
+   */
+  public static List<String> getKeys(Configuration conf) {
+    List<CredentialProvider> providerList = getProviders(requireNonNull(conf));
+    return providerList == null ? Collections.emptyList()
+        : providerList.stream().flatMap(provider -> {
+          List<String> aliases = null;
+          try {
+            aliases = provider.getAliases();
+          } catch (IOException e) {
+            log.warn("Problem getting aliases from provider {}", provider, e);
+          }
+          return aliases == null ? Stream.empty() : aliases.stream();
+        }).collect(Collectors.toList());
+  }
+
+  /**
+   * Create a CredentialEntry using the configured Providers. If multiple CredentialProviders are
+   * configured, the first will be used.
+   *
+   * @param conf
+   *          Configuration for the CredentialProvider
+   * @param name
+   *          CredentialEntry name (alias)
+   * @param credential
+   *          The credential
+   */
+  public static void createEntry(Configuration conf, String name, char[] credential)
+      throws IOException {
+    requireNonNull(conf);
+    requireNonNull(name);
+    requireNonNull(credential);
+
+    List<CredentialProvider> providers = getProviders(conf);
+    if (providers == null || providers.isEmpty()) {
+      throw new IOException("Could not fetch any CredentialProviders");
+    }
+
+    CredentialProvider provider = providers.get(0);
+    if (providers.size() != 1) {
+      log.warn("Found more than one CredentialProvider. Using first provider found ({})", provider);
+    }
+    provider.createCredentialEntry(name, credential);
+    provider.flush();
+  }
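+
+  // And the write path (illustrative; the alias and value are hypothetical):
+  //
+  //   HadoopCredentialProvider.createEntry(conf, "instance.secret", "mysecret".toCharArray());
+  //   // stores the entry via the first configured provider and flushes it to backing storage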
+
+}
diff --git a/core/src/main/java/org/apache/accumulo/core/conf/IterConfigUtil.java b/core/src/main/java/org/apache/accumulo/core/conf/IterConfigUtil.java
index 7a2d341..c8b0004 100644
--- a/core/src/main/java/org/apache/accumulo/core/conf/IterConfigUtil.java
+++ b/core/src/main/java/org/apache/accumulo/core/conf/IterConfigUtil.java
@@ -217,7 +217,7 @@
           clazz = loadClass(iterLoad.useAccumuloClassLoader, iterLoad.context, iterInfo);
         }
 
-        SortedKeyValueIterator<Key,Value> skvi = clazz.newInstance();
+        SortedKeyValueIterator<Key,Value> skvi = clazz.getDeclaredConstructor().newInstance();
 
         Map<String,String> options = iterLoad.iterOpts.get(iterInfo.iterName);
 
@@ -227,7 +227,7 @@
         skvi.init(prev, options, iterLoad.iteratorEnvironment);
         prev = skvi;
       }
-    } catch (ClassNotFoundException | IllegalAccessException | InstantiationException e) {
+    } catch (ReflectiveOperationException e) {
       log.error(e.toString());
       throw new RuntimeException(e);
     }
diff --git a/core/src/main/java/org/apache/accumulo/core/conf/ObservableConfiguration.java b/core/src/main/java/org/apache/accumulo/core/conf/ObservableConfiguration.java
deleted file mode 100644
index fb49230..0000000
--- a/core/src/main/java/org/apache/accumulo/core/conf/ObservableConfiguration.java
+++ /dev/null
@@ -1,116 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.core.conf;
-
-import static java.util.Objects.requireNonNull;
-
-import java.util.Collection;
-import java.util.Collections;
-import java.util.Set;
-
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-/**
- * A configuration that can be observed. Handling of observers is thread-safe.
- */
-public abstract class ObservableConfiguration extends AccumuloConfiguration {
-
-  private static final Logger log = LoggerFactory.getLogger(ObservableConfiguration.class);
-
-  private Set<ConfigurationObserver> observers;
-
-  /**
-   * Creates a new observable configuration.
-   */
-  public ObservableConfiguration() {
-    observers = Collections.synchronizedSet(new java.util.HashSet<>());
-  }
-
-  /**
-   * Adds an observer.
-   *
-   * @param co
-   *          observer
-   * @throws NullPointerException
-   *           if co is null
-   */
-  public void addObserver(ConfigurationObserver co) {
-    requireNonNull(co);
-    observers.add(co);
-  }
-
-  /**
-   * Removes an observer.
-   *
-   * @param co
-   *          observer
-   */
-  public void removeObserver(ConfigurationObserver co) {
-    observers.remove(co);
-  }
-
-  /**
-   * Gets the current set of observers. The returned collection is a snapshot, and changes to it do
-   * not reflect back to the configuration.
-   *
-   * @return observers
-   */
-  public Collection<ConfigurationObserver> getObservers() {
-    return snapshot(observers);
-  }
-
-  private static Collection<ConfigurationObserver>
-      snapshot(Collection<ConfigurationObserver> observers) {
-    Collection<ConfigurationObserver> c = new java.util.ArrayList<>();
-    synchronized (observers) {
-      c.addAll(observers);
-    }
-    return c;
-  }
-
-  /**
-   * Expires all observers.
-   */
-  public void expireAllObservers() {
-    Collection<ConfigurationObserver> copy = snapshot(observers);
-    log.info("Expiring {} observers", copy.size());
-    for (ConfigurationObserver co : copy)
-      co.sessionExpired();
-  }
-
-  /**
-   * Notifies all observers that a property changed.
-   *
-   * @param key
-   *          configuration property key
-   */
-  public void propertyChanged(String key) {
-    Collection<ConfigurationObserver> copy = snapshot(observers);
-    for (ConfigurationObserver co : copy)
-      co.propertyChanged(key);
-  }
-
-  /**
-   * Notifies all observers that properties changed.
-   */
-  public void propertiesChanged() {
-    Collection<ConfigurationObserver> copy = snapshot(observers);
-    for (ConfigurationObserver co : copy)
-      co.propertiesChanged();
-  }
-}
diff --git a/core/src/main/java/org/apache/accumulo/core/conf/Property.java b/core/src/main/java/org/apache/accumulo/core/conf/Property.java
index 9f401b7..8243134 100644
--- a/core/src/main/java/org/apache/accumulo/core/conf/Property.java
+++ b/core/src/main/java/org/apache/accumulo/core/conf/Property.java
@@ -260,7 +260,7 @@
   MASTER_WALOG_CLOSER_IMPLEMETATION("master.walog.closer.implementation",
       "org.apache.accumulo.server.master.recovery.HadoopLogCloser", PropertyType.CLASSNAME,
       "A class that implements a mechanism to steal write access to a write-ahead log"),
-  MASTER_FATE_METRICS_ENABLED("master.fate.metrics.enabled", "false", PropertyType.BOOLEAN,
+  MASTER_FATE_METRICS_ENABLED("master.fate.metrics.enabled", "true", PropertyType.BOOLEAN,
       "Enable reporting of FATE metrics in JMX (and logging with Hadoop Metrics2"),
   MASTER_FATE_METRICS_MIN_UPDATE_INTERVAL("master.fate.metrics.min.update.interval", "60s",
       PropertyType.TIMEDURATION, "Limit calls from metric sinks to zookeeper to update interval"),
@@ -523,12 +523,15 @@
       "Do not use the Trash, even if it is configured."),
   GC_TRACE_PERCENT("gc.trace.percent", "0.01", PropertyType.FRACTION,
       "Percent of gc cycles to trace"),
+  GC_SAFEMODE("gc.safemode", "false", PropertyType.BOOLEAN,
+      "Provides listing of files to be deleted but does not delete any files"),
   GC_USE_FULL_COMPACTION("gc.post.metadata.action", "flush", PropertyType.GC_POST_ACTION,
       "When the gc runs it can make a lot of changes to the metadata, on completion, "
           + " to force the changes to be written to disk, the metadata and root tables can be flushed"
           + " and possibly compacted. Legal values are: compact - which both flushes and compacts the"
-          + " metadata; flush - which flushes only (compactions may be triggered if required); or none."
-          + " Since 2.0, the default is flush. Previously the default action was a full compaction."),
+          + " metadata; flush - which flushes only (compactions may be triggered if required); or none"),
+  GC_METRICS_ENABLED("gc.metrics.enabled", "true", PropertyType.BOOLEAN,
+      "Enable detailed gc metrics reporting with hadoop metrics."),
 
   // properties that are specific to the monitor server behavior
   MONITOR_PREFIX("monitor.", null, PropertyType.PREFIX,
@@ -1159,6 +1162,17 @@
   }
 
   /**
+   * Checks if the given property key is a valid property and is of type boolean.
+   *
+   * @param key
+   *          property key
+   * @return true if key is valid and is of type boolean, false otherwise
+   */
+  public static boolean isValidBooleanPropertyKey(String key) {
+    return validProperties.contains(key) && getPropertyByKey(key).getType() == PropertyType.BOOLEAN;
+  }
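+
+  // Illustrative behavior (assuming "table.bloom.enabled" remains a BOOLEAN property and
+  // "table.split.threshold" a BYTES property):
+  //
+  //   isValidBooleanPropertyKey("table.bloom.enabled")   -> true
+  //   isValidBooleanPropertyKey("table.split.threshold") -> false (valid key, not BOOLEAN)
+  //   isValidBooleanPropertyKey("not.a.property")        -> false (unknown key)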
+
+  /**
    * Checks if the given property key is for a valid table property. A valid table property key is
    * either equal to the key of some defined table property (which each start with
    * {@link #TABLE_PREFIX}) or has a prefix matching {@link #TABLE_CONSTRAINT_PREFIX},
diff --git a/core/src/main/java/org/apache/accumulo/core/conf/SiteConfiguration.java b/core/src/main/java/org/apache/accumulo/core/conf/SiteConfiguration.java
index d735622..b33f76c 100644
--- a/core/src/main/java/org/apache/accumulo/core/conf/SiteConfiguration.java
+++ b/core/src/main/java/org/apache/accumulo/core/conf/SiteConfiguration.java
@@ -16,6 +16,8 @@
  */
 package org.apache.accumulo.core.conf;
 
+import static java.util.Objects.requireNonNull;
+
 import java.io.File;
 import java.net.MalformedURLException;
 import java.net.URI;
@@ -25,8 +27,11 @@
 import java.util.HashMap;
 import java.util.Map;
 import java.util.function.Predicate;
+import java.util.stream.Stream;
 
+import org.apache.commons.configuration2.AbstractConfiguration;
 import org.apache.commons.configuration2.CompositeConfiguration;
+import org.apache.commons.configuration2.MapConfiguration;
 import org.apache.commons.configuration2.PropertiesConfiguration;
 import org.apache.commons.configuration2.builder.FileBasedConfigurationBuilder;
 import org.apache.commons.configuration2.builder.fluent.Parameters;
@@ -34,8 +39,6 @@
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-import com.google.common.collect.ImmutableMap;
-
 import edu.umd.cs.findbugs.annotations.SuppressFBWarnings;
 
 /**
@@ -56,117 +59,171 @@
 
   private static final AccumuloConfiguration parent = DefaultConfiguration.getInstance();
 
-  private final ImmutableMap<String,String> config;
-
-  public SiteConfiguration() {
-    this(getAccumuloPropsLocation());
+  public interface Buildable {
+    SiteConfiguration build();
   }
 
-  public SiteConfiguration(Map<String,String> overrides) {
-    this(getAccumuloPropsLocation(), overrides);
+  public interface OverridesOption extends Buildable {
+    Buildable withOverrides(Map<String,String> overrides);
   }
 
-  public SiteConfiguration(File accumuloPropsFile) {
-    this(accumuloPropsFile, Collections.emptyMap());
-  }
+  static class Builder implements OverridesOption, Buildable {
+    private URL url = null;
+    private Map<String,String> overrides = Collections.emptyMap();
 
-  public SiteConfiguration(File accumuloPropsFile, Map<String,String> overrides) {
-    this(toURL(accumuloPropsFile), overrides);
-  }
+    // package-private visibility, for testing only
+    Builder() {}
 
-  public SiteConfiguration(URL accumuloPropsLocation) {
-    this(accumuloPropsLocation, Collections.emptyMap());
-  }
-
-  public SiteConfiguration(URL accumuloPropsLocation, Map<String,String> overrides) {
-    config = createMap(accumuloPropsLocation, overrides);
-    ConfigSanityCheck.validate(config.entrySet());
-  }
-
-  @SuppressFBWarnings(value = "URLCONNECTION_SSRF_FD",
-      justification = "location of props is specified by an admin")
-  private static ImmutableMap<String,String> createMap(URL accumuloPropsLocation,
-      Map<String,String> overrides) {
-    CompositeConfiguration config = new CompositeConfiguration();
-    if (accumuloPropsLocation != null) {
-      FileBasedConfigurationBuilder<PropertiesConfiguration> propsBuilder =
-          new FileBasedConfigurationBuilder<>(PropertiesConfiguration.class)
-              .configure(new Parameters().properties().setURL(accumuloPropsLocation));
-      try {
-        config.addConfiguration(propsBuilder.getConfiguration());
-      } catch (ConfigurationException e) {
-        throw new IllegalArgumentException(e);
-      }
+    // exists for testing only
+    OverridesOption noFile() {
+      return this;
     }
 
-    // Add all properties in config file
-    Map<String,String> result = new HashMap<>();
-    config.getKeys().forEachRemaining(key -> result.put(key, config.getString(key)));
+    // exists for testing only
+    OverridesOption fromUrl(URL propertiesFileUrl) {
+      url = requireNonNull(propertiesFileUrl);
+      return this;
+    }
 
-    // Add all overrides
-    overrides.forEach(result::put);
+    public OverridesOption fromEnv() {
+      URL siteUrl = SiteConfiguration.class.getClassLoader().getResource("accumulo-site.xml");
+      if (siteUrl != null) {
+        throw new IllegalArgumentException("Found deprecated config file 'accumulo-site.xml' on "
+            + "classpath. Since 2.0.0, this file was replaced by 'accumulo.properties'. Run the "
+            + "following command to convert an old 'accumulo-site.xml' file to the new format: "
+            + "accumulo convert-config -x /old/accumulo-site.xml -p /new/accumulo.properties");
+      }
 
-    // Add sensitive properties from credential provider (if set)
-    String credProvider = result.get(Property.GENERAL_SECURITY_CREDENTIAL_PROVIDER_PATHS.getKey());
-    if (credProvider != null) {
-      org.apache.hadoop.conf.Configuration hadoopConf = new org.apache.hadoop.conf.Configuration();
-      hadoopConf.set(CredentialProviderFactoryShim.CREDENTIAL_PROVIDER_PATH, credProvider);
-      for (Property property : Property.values()) {
-        if (property.isSensitive()) {
-          char[] value = CredentialProviderFactoryShim.getValueFromCredentialProvider(hadoopConf,
-              property.getKey());
-          if (value != null) {
-            result.put(property.getKey(), new String(value));
-          }
+      String configFile = System.getProperty("accumulo.properties", "accumulo.properties");
+      if (configFile.startsWith("file://")) {
+        File f;
+        try {
+          f = new File(new URI(configFile));
+        } catch (URISyntaxException e) {
+          throw new IllegalArgumentException(
+              "Failed to load Accumulo configuration from " + configFile, e);
         }
-      }
-    }
-    return ImmutableMap.copyOf(result);
-  }
-
-  private static URL toURL(File f) {
-    try {
-      return f.toURI().toURL();
-    } catch (MalformedURLException e) {
-      throw new IllegalArgumentException(e);
-    }
-  }
-
-  public static URL getAccumuloPropsLocation() {
-
-    URL siteUrl = SiteConfiguration.class.getClassLoader().getResource("accumulo-site.xml");
-    if (siteUrl != null) {
-      throw new IllegalArgumentException("Found deprecated config file 'accumulo-site.xml' on "
-          + "classpath. Since 2.0.0, this file was replaced by 'accumulo.properties'. Run the "
-          + "following command to convert an old 'accumulo-site.xml' file to the new format: "
-          + "accumulo convert-config -x /old/accumulo-site.xml -p /new/accumulo.properties");
-    }
-
-    String configFile = System.getProperty("accumulo.properties", "accumulo.properties");
-    if (configFile.startsWith("file://")) {
-      try {
-        File f = new File(new URI(configFile));
         if (f.exists() && !f.isDirectory()) {
           log.info("Found Accumulo configuration at {}", configFile);
-          return f.toURI().toURL();
+          return fromFile(f);
         } else {
           throw new IllegalArgumentException(
               "Failed to load Accumulo configuration at " + configFile);
         }
-      } catch (MalformedURLException | URISyntaxException e) {
-        throw new IllegalArgumentException(
-            "Failed to load Accumulo configuration from " + configFile, e);
-      }
-    } else {
-      URL accumuloConfigUrl = SiteConfiguration.class.getClassLoader().getResource(configFile);
-      if (accumuloConfigUrl == null) {
-        throw new IllegalArgumentException(
-            "Failed to load Accumulo configuration '" + configFile + "' from classpath");
       } else {
-        log.info("Found Accumulo configuration on classpath at {}", accumuloConfigUrl.getFile());
-        return accumuloConfigUrl;
+        URL accumuloConfigUrl = SiteConfiguration.class.getClassLoader().getResource(configFile);
+        if (accumuloConfigUrl == null) {
+          throw new IllegalArgumentException(
+              "Failed to load Accumulo configuration '" + configFile + "' from classpath");
+        } else {
+          log.info("Found Accumulo configuration on classpath at {}", accumuloConfigUrl.getFile());
+          url = accumuloConfigUrl;
+          return this;
+        }
       }
     }
+
+    public OverridesOption fromFile(File propertiesFileLocation) {
+      try {
+        url = requireNonNull(propertiesFileLocation).toURI().toURL();
+      } catch (MalformedURLException e) {
+        throw new IllegalArgumentException(e);
+      }
+      return this;
+    }
+
+    @Override
+    public Buildable withOverrides(Map<String,String> overrides) {
+      this.overrides = requireNonNull(overrides);
+      return this;
+    }
+
+    @SuppressFBWarnings(value = "URLCONNECTION_SSRF_FD",
+        justification = "location of props is specified by an admin")
+    @Override
+    public SiteConfiguration build() {
+      // load properties from configuration file
+      var propsFileConfig = getPropsFileConfig(url);
+
+      // load properties from command-line overrides
+      var overrideConfig = new MapConfiguration(overrides);
+
+      // load credential provider property
+      var credProviderProps = new HashMap<String,String>();
+      for (var c : new AbstractConfiguration[] {propsFileConfig, overrideConfig}) {
+        var credProvider =
+            c.getString(Property.GENERAL_SECURITY_CREDENTIAL_PROVIDER_PATHS.getKey());
+        if (credProvider != null && !credProvider.isEmpty()) {
+          loadCredProviderProps(credProvider, credProviderProps);
+          break;
+        }
+      }
+      var credProviderConfig = new MapConfiguration(credProviderProps);
+
+      var config = new CompositeConfiguration();
+      // add in specific order; use credential provider first, then overrides, then properties file
+      config.addConfiguration(credProviderConfig);
+      config.addConfiguration(overrideConfig);
+      config.addConfiguration(propsFileConfig);
+
+      var result = new HashMap<String,String>();
+      config.getKeys().forEachRemaining(k -> result.put(k, config.getString(k)));
+      return new SiteConfiguration(Collections.unmodifiableMap(result));
+    }
+  }
+
+  /**
+   * Build a SiteConfiguration from the environmental configuration with the option to override.
+   */
+  public static SiteConfiguration.OverridesOption fromEnv() {
+    return new SiteConfiguration.Builder().fromEnv();
+  }
+
+  /**
+   * Build a SiteConfiguration from the provided properties file with the option to override.
+   */
+  public static SiteConfiguration.OverridesOption fromFile(File propertiesFileLocation) {
+    return new SiteConfiguration.Builder().fromFile(propertiesFileLocation);
+  }
+
+  /**
+   * Build a SiteConfiguration from the environmental configuration and no overrides.
+   */
+  public static SiteConfiguration auto() {
+    return new SiteConfiguration.Builder().fromEnv().build();
+  }
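+
+  // A usage sketch of the entry points above (illustrative only; the file path and override
+  // value are hypothetical):
+  //
+  //   SiteConfiguration conf = SiteConfiguration
+  //       .fromFile(new File("/etc/accumulo/accumulo.properties"))
+  //       .withOverrides(Map.of("instance.zookeeper.host", "localhost:2181"))
+  //       .build();
+  //
+  // build() layers the sources so credential provider values take precedence over overrides,
+  // which take precedence over the properties file.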
+
+  private final Map<String,String> config;
+
+  private SiteConfiguration(Map<String,String> config) {
+    ConfigSanityCheck.validate(config.entrySet());
+    this.config = config;
+  }
+
+  // load properties from config file
+  private static AbstractConfiguration getPropsFileConfig(URL accumuloPropsLocation) {
+    if (accumuloPropsLocation != null) {
+      var propsBuilder = new FileBasedConfigurationBuilder<>(PropertiesConfiguration.class)
+          .configure(new Parameters().properties().setURL(accumuloPropsLocation));
+      try {
+        return propsBuilder.getConfiguration();
+      } catch (ConfigurationException e) {
+        throw new IllegalArgumentException(e);
+      }
+    }
+    return new PropertiesConfiguration();
+  }
+
+  // load sensitive properties from Hadoop credential provider
+  private static void loadCredProviderProps(String provider, Map<String,String> props) {
+    var hadoopConf = new org.apache.hadoop.conf.Configuration();
+    HadoopCredentialProvider.setPath(hadoopConf, provider);
+    Stream.of(Property.values()).filter(Property::isSensitive).forEach(p -> {
+      char[] value = HadoopCredentialProvider.getValue(hadoopConf, p.getKey());
+      if (value != null) {
+        props.put(p.getKey(), new String(value));
+      }
+    });
   }
 
   @Override
@@ -198,8 +255,9 @@
       parent.getProperties(props, filter);
     }
     config.keySet().forEach(k -> {
-      if (filter.test(k))
+      if (filter.test(k)) {
         props.put(k, config.get(k));
+      }
     });
   }
 }
diff --git a/core/src/main/java/org/apache/accumulo/core/constraints/Violations.java b/core/src/main/java/org/apache/accumulo/core/constraints/Violations.java
index 65c372f..ea03338 100644
--- a/core/src/main/java/org/apache/accumulo/core/constraints/Violations.java
+++ b/core/src/main/java/org/apache/accumulo/core/constraints/Violations.java
@@ -17,8 +17,10 @@
 package org.apache.accumulo.core.constraints;
 
 import java.util.ArrayList;
+import java.util.Collections;
 import java.util.HashMap;
 import java.util.List;
+import java.util.Map;
 import java.util.Map.Entry;
 import java.util.Set;
 
@@ -55,7 +57,9 @@
     }
   }
 
-  private HashMap<CVSKey,ConstraintViolationSummary> cvsmap;
+  public static final Violations EMPTY = new Violations(Collections.emptyMap());
+
+  private Map<CVSKey,ConstraintViolationSummary> cvsmap;
 
   /**
    * Creates a new empty object.
@@ -64,6 +68,10 @@
     cvsmap = new HashMap<>();
   }
 
+  private Violations(Map<CVSKey,ConstraintViolationSummary> cvsmap) {
+    this.cvsmap = cvsmap;
+  }
+
   /**
    * Checks if this object is empty, i.e., that no violations have been added.
    *
diff --git a/core/src/main/java/org/apache/accumulo/core/crypto/CryptoServiceFactory.java b/core/src/main/java/org/apache/accumulo/core/crypto/CryptoServiceFactory.java
index aefacaf..ad7c899 100644
--- a/core/src/main/java/org/apache/accumulo/core/crypto/CryptoServiceFactory.java
+++ b/core/src/main/java/org/apache/accumulo/core/crypto/CryptoServiceFactory.java
@@ -45,8 +45,8 @@
       } else {
         try {
           newCryptoService = CryptoServiceFactory.class.getClassLoader().loadClass(clazzName)
-              .asSubclass(CryptoService.class).newInstance();
-        } catch (InstantiationException | IllegalAccessException | ClassNotFoundException e) {
+              .asSubclass(CryptoService.class).getDeclaredConstructor().newInstance();
+        } catch (ReflectiveOperationException e) {
           throw new RuntimeException(e);
         }
       }
diff --git a/core/src/main/java/org/apache/accumulo/core/data/LoadPlan.java b/core/src/main/java/org/apache/accumulo/core/data/LoadPlan.java
index 3cbf63b..2ad0487 100644
--- a/core/src/main/java/org/apache/accumulo/core/data/LoadPlan.java
+++ b/core/src/main/java/org/apache/accumulo/core/data/LoadPlan.java
@@ -20,6 +20,7 @@
 import java.nio.file.Paths;
 import java.util.Arrays;
 import java.util.Collection;
+import java.util.List;
 
 import org.apache.accumulo.core.client.admin.TableOperations.ImportMappingOptions;
 import org.apache.hadoop.io.Text;
@@ -37,7 +38,7 @@
  * @since 2.0.0
  */
 public class LoadPlan {
-  private final ImmutableList<Destination> destinations;
+  private final List<Destination> destinations;
 
   private static byte[] copy(byte[] data) {
     return data == null ? null : Arrays.copyOf(data, data.length);
@@ -143,7 +144,7 @@
     }
   }
 
-  private LoadPlan(ImmutableList<Destination> destinations) {
+  private LoadPlan(List<Destination> destinations) {
     this.destinations = destinations;
   }
 
diff --git a/core/src/main/java/org/apache/accumulo/core/file/BloomFilterLayer.java b/core/src/main/java/org/apache/accumulo/core/file/BloomFilterLayer.java
index 177c628..6e150e5 100644
--- a/core/src/main/java/org/apache/accumulo/core/file/BloomFilterLayer.java
+++ b/core/src/main/java/org/apache/accumulo/core/file/BloomFilterLayer.java
@@ -134,7 +134,7 @@
         else
           clazz = AccumuloVFSClassLoader.loadClass(classname, KeyFunctor.class);
 
-        transformer = clazz.newInstance();
+        transformer = clazz.getDeclaredConstructor().newInstance();
 
       } catch (Exception e) {
         LOG.error("Failed to find KeyFunctor: " + acuconf.get(Property.TABLE_BLOOM_KEY_FUNCTOR), e);
@@ -245,7 +245,7 @@
                 KeyFunctor.class);
           else
             clazz = AccumuloVFSClassLoader.loadClass(ClassName, KeyFunctor.class);
-          transformer = clazz.newInstance();
+          transformer = clazz.getDeclaredConstructor().newInstance();
 
           /**
            * read in bloom filter
@@ -266,12 +266,9 @@
         } catch (ClassNotFoundException e) {
           LOG.error("Failed to find KeyFunctor in config: " + sanitize(ClassName), e);
           bloomFilter = null;
-        } catch (InstantiationException e) {
+        } catch (ReflectiveOperationException e) {
           LOG.error("Could not instantiate KeyFunctor: " + sanitize(ClassName), e);
           bloomFilter = null;
-        } catch (IllegalAccessException e) {
-          LOG.error("Illegal acess exception", e);
-          bloomFilter = null;
         } catch (RuntimeException rte) {
           if (!closed)
             throw rte;
diff --git a/core/src/main/java/org/apache/accumulo/core/file/blockfile/cache/impl/BlockCacheManagerFactory.java b/core/src/main/java/org/apache/accumulo/core/file/blockfile/cache/impl/BlockCacheManagerFactory.java
index c146656..292bd20 100644
--- a/core/src/main/java/org/apache/accumulo/core/file/blockfile/cache/impl/BlockCacheManagerFactory.java
+++ b/core/src/main/java/org/apache/accumulo/core/file/blockfile/cache/impl/BlockCacheManagerFactory.java
@@ -44,7 +44,7 @@
     Class<? extends BlockCacheManager> clazz =
         AccumuloVFSClassLoader.loadClass(impl, BlockCacheManager.class);
     LOG.info("Created new block cache manager of type: {}", clazz.getSimpleName());
-    return clazz.newInstance();
+    return clazz.getDeclaredConstructor().newInstance();
   }
 
   /**
@@ -62,6 +62,6 @@
     Class<? extends BlockCacheManager> clazz =
         Class.forName(impl).asSubclass(BlockCacheManager.class);
     LOG.info("Created new block cache factory of type: {}", clazz.getSimpleName());
-    return clazz.newInstance();
+    return clazz.getDeclaredConstructor().newInstance();
   }
 }
diff --git a/core/src/main/java/org/apache/accumulo/core/file/blockfile/cache/lru/LruBlockCacheConfiguration.java b/core/src/main/java/org/apache/accumulo/core/file/blockfile/cache/lru/LruBlockCacheConfiguration.java
index cb9ac65..b039ae3 100644
--- a/core/src/main/java/org/apache/accumulo/core/file/blockfile/cache/lru/LruBlockCacheConfiguration.java
+++ b/core/src/main/java/org/apache/accumulo/core/file/blockfile/cache/lru/LruBlockCacheConfiguration.java
@@ -26,7 +26,6 @@
 import org.apache.accumulo.core.spi.cache.CacheType;
 
 import com.google.common.base.Preconditions;
-import com.google.common.collect.ImmutableMap;
 
 public final class LruBlockCacheConfiguration {
 
@@ -216,7 +215,7 @@
     }
 
     public Map<String,String> buildMap() {
-      return ImmutableMap.copyOf(props);
+      return Map.copyOf(props);
     }
   }
 
diff --git a/core/src/main/java/org/apache/accumulo/core/file/rfile/PrintInfo.java b/core/src/main/java/org/apache/accumulo/core/file/rfile/PrintInfo.java
index 2a50a16..f195701 100644
--- a/core/src/main/java/org/apache/accumulo/core/file/rfile/PrintInfo.java
+++ b/core/src/main/java/org/apache/accumulo/core/file/rfile/PrintInfo.java
@@ -23,7 +23,6 @@
 import java.util.Map;
 
 import org.apache.accumulo.core.cli.ConfigOpts;
-import org.apache.accumulo.core.conf.SiteConfiguration;
 import org.apache.accumulo.core.crypto.CryptoServiceFactory;
 import org.apache.accumulo.core.crypto.CryptoServiceFactory.ClassloaderType;
 import org.apache.accumulo.core.crypto.CryptoUtils;
@@ -147,7 +146,7 @@
       System.err.println("No files were given");
       System.exit(1);
     }
-    SiteConfiguration siteConfig = opts.getSiteConfiguration();
+    var siteConfig = opts.getSiteConfiguration();
 
     Configuration conf = new Configuration();
     for (String confFile : opts.configFiles) {
@@ -166,9 +165,9 @@
     for (String arg : opts.files) {
       Path path = new Path(arg);
       FileSystem fs;
-      if (arg.contains(":"))
+      if (arg.contains(":")) {
         fs = path.getFileSystem(conf);
-      else {
+      } else {
         log.warn(
             "Attempting to find file across filesystems. Consider providing URI instead of path");
         fs = hadoopFs.exists(path) ? hadoopFs : localFs; // fall back to local
@@ -183,13 +182,16 @@
       Reader iter = new RFile.Reader(cb);
       MetricsGatherer<Map<String,ArrayList<VisibilityMetric>>> vmg = new VisMetricsGatherer();
 
-      if (opts.vis || opts.hash)
+      if (opts.vis || opts.hash) {
         iter.registerMetrics(vmg);
+      }
 
       iter.printInfo(opts.printIndex);
       System.out.println();
-      org.apache.accumulo.core.file.rfile.bcfile.PrintInfo
-          .main(new String[] {"-props", opts.getPropertiesPath(), arg});
+      String propsPath = opts.getPropertiesPath();
+      String[] mainArgs =
+          propsPath == null ? new String[] {arg} : new String[] {"-props", propsPath, arg};
+      org.apache.accumulo.core.file.rfile.bcfile.PrintInfo.main(mainArgs);
 
       Map<String,ArrayList<ByteSequence>> localityGroupCF = null;
 
@@ -223,8 +225,9 @@
             Value value = dataIter.getTopValue();
             if (opts.dump) {
               System.out.println(key + " -> " + value);
-              if (System.out.checkError())
+              if (System.out.checkError()) {
                 return;
+              }
             }
             if (opts.histogram) {
               kvHistogram.add(key.getSize() + value.getSize());
@@ -262,8 +265,9 @@
         indexKeyStats.print("\t");
       }
       // If the output stream has closed, there is no reason to keep going.
-      if (System.out.checkError())
+      if (System.out.checkError()) {
         return;
+      }
     }
   }
 
diff --git a/core/src/main/java/org/apache/accumulo/core/file/rfile/bcfile/Compression.java b/core/src/main/java/org/apache/accumulo/core/file/rfile/bcfile/Compression.java
index 6c13240..1324b3c 100644
--- a/core/src/main/java/org/apache/accumulo/core/file/rfile/bcfile/Compression.java
+++ b/core/src/main/java/org/apache/accumulo/core/file/rfile/bcfile/Compression.java
@@ -48,17 +48,19 @@
  * Compression related stuff.
  */
 public final class Compression {
-  static final Logger log = LoggerFactory.getLogger(Compression.class);
+
+  private static final Logger log = LoggerFactory.getLogger(Compression.class);
 
   /**
-   * Prevent the instantiation of class.
+   * Prevent the instantiation of this class.
    */
   private Compression() {
-    // nothing
+    throw new UnsupportedOperationException();
   }
 
   static class FinishOnFlushCompressionStream extends FilterOutputStream {
-    public FinishOnFlushCompressionStream(CompressionOutputStream cout) {
+
+    FinishOnFlushCompressionStream(CompressionOutputStream cout) {
       super(cout);
     }
 
@@ -76,73 +78,97 @@
     }
   }
 
-  /** compression: zStandard */
+  /**
+   * Compression: zStandard
+   */
   public static final String COMPRESSION_ZSTD = "zstd";
-  /** snappy codec **/
+
+  /**
+   * Compression: snappy
+   */
   public static final String COMPRESSION_SNAPPY = "snappy";
-  /** compression: gzip */
+
+  /**
+   * Compression: gzip
+   */
   public static final String COMPRESSION_GZ = "gz";
-  /** compression: lzo */
+
+  /**
+   * Compression: lzo
+   */
   public static final String COMPRESSION_LZO = "lzo";
-  /** compression: none */
+
+  /**
+   * Compression: none
+   */
   public static final String COMPRESSION_NONE = "none";
 
   /**
    * Compression algorithms. There is a static initializer, below the values defined in the
-   * enumeration, that calls the initializer of all defined codecs within the Algorithm enum. This
+   * enumeration, that calls the initializer of all defined codecs within {@link Algorithm}. This
    * promotes a model of the following call graph of initialization by the static initializer,
-   * followed by calls to getCodec() and createCompressionStream/DecompressionStream. In some cases,
-   * the compression and decompression call methods will include a different buffer size for the
-   * stream. Note that if the compressed buffer size requested in these calls is zero, we will not
-   * set the buffer size for that algorithm. Instead, we will use the default within the codec.
-   *
-   * The buffer size is configured in the Codec by way of a Hadoop Configuration reference. One
-   * approach may be to use the same Configuration object, but when calls are made to
-   * createCompressionStream and DecompressionStream, with non default buffer sizes, the
-   * configuration object must be changed. In this case, concurrent calls to createCompressionStream
-   * and DecompressionStream would mutate the configuration object beneath each other, requiring
-   * synchronization to avoid undesirable activity via co-modification. To avoid synchronization
-   * entirely, we will create Codecs with their own Configuration object and cache them for re-use.
-   * A default codec will be statically created, as mentioned above to ensure we always have a codec
-   * available at loader initialization.
-   *
+   * followed by calls to {@link #getCodec()},
+   * {@link #createCompressionStream(OutputStream, Compressor, int)}, and
+   * {@link #createDecompressionStream(InputStream, Decompressor, int)}. In some cases, the
+   * compression and decompression call methods will include a different buffer size for the stream.
+   * Note that if the compressed buffer size requested in these calls is zero, we will not set the
+   * buffer size for that algorithm. Instead, we will use the default within the codec.
+   * <p>
+   * The buffer size is configured in the Codec by way of a Hadoop {@link Configuration} reference.
+   * One approach may be to use the same Configuration object, but when calls are made to
+   * {@code createCompressionStream} and {@code createDecompressionStream} with non default buffer
+   * sizes, the configuration object must be changed. In this case, concurrent calls to
+   * {@code createCompressionStream} and {@code createDecompressionStream} would mutate the
+   * configuration object beneath each other, requiring synchronization to avoid undesirable
+   * activity via co-modification. To avoid synchronization entirely, we will create Codecs with
+   * their own Configuration object and cache them for re-use. A default codec will be statically
+   * created, as mentioned above to ensure we always have a codec available at loader
+   * initialization.
+   * <p>
    * There is a Guava cache defined within Algorithm that allows us to cache Codecs for re-use.
    * Since they will have their own configuration object and thus do not need to be mutable, there
    * is no concern for using them concurrently; however, the Guava cache exists to ensure a maximal
    * size of the cache and efficient and concurrent read/write access to the cache itself.
-   *
+   * <p>
+   * To provide Algorithm-specific details and to describe what is in the code:
-   *
+   * <p>
    * LZO will always have the default LZO codec because the buffer size is never overridden within
    * it.
-   *
+   * <p>
    * GZ will use the default GZ codec for the compression stream, but can potentially use a
    * different codec instance for the decompression stream if the requested buffer size does not
    * match the default GZ buffer size of 32k.
-   *
+   * <p>
    * Snappy will use the default Snappy codec with the default buffer size of 64k for the
    * compression stream, but will use a cached codec if the buffer size differs from the default.
    */
-  public static enum Algorithm {
+  public enum Algorithm {
 
     LZO(COMPRESSION_LZO) {
-      /**
-       * determines if we've checked the codec status. ensures we don't recreate the default codec
-       */
-      private final AtomicBoolean checked = new AtomicBoolean(false);
-      private static final String defaultClazz = "org.apache.hadoop.io.compress.LzoCodec";
-      private transient CompressionCodec codec = null;
 
       /**
-       * Configuration option for LZO buffer size
+       * The default codec class.
+       */
+      private static final String DEFAULT_CLAZZ = "org.apache.hadoop.io.compress.LzoCodec";
+
+      /**
+       * Configuration option for LZO buffer size.
        */
       private static final String BUFFER_SIZE_OPT = "io.compression.codec.lzo.buffersize";
 
       /**
-       * Default buffer size
+       * Default buffer size.
        */
       private static final int DEFAULT_BUFFER_SIZE = 64 * 1024;
 
+      /**
+       * Whether or not the codec status has been checked. Ensures the default codec is not
+       * recreated.
+       */
+      private final AtomicBoolean checked = new AtomicBoolean(false);
+
+      private transient CompressionCodec codec = null;
+
       @Override
       public boolean isSupported() {
         return codec != null;
@@ -150,29 +176,12 @@
 
       @Override
       public void initializeDefaultCodec() {
-        if (!checked.get()) {
-          checked.set(true);
-          codec = createNewCodec(DEFAULT_BUFFER_SIZE);
-        }
+        codec = initCodec(checked, DEFAULT_BUFFER_SIZE, codec);
       }
 
       @Override
       CompressionCodec createNewCodec(int bufferSize) {
-        String extClazz =
-            (conf.get(CONF_LZO_CLASS) == null ? System.getProperty(CONF_LZO_CLASS) : null);
-        String clazz = (extClazz != null) ? extClazz : defaultClazz;
-        try {
-          log.info("Trying to load Lzo codec class: {}", clazz);
-          Configuration myConf = new Configuration(conf);
-          // only use the buffersize if > 0, otherwise we'll use
-          // the default defined within the codec
-          if (bufferSize > 0)
-            myConf.setInt(BUFFER_SIZE_OPT, bufferSize);
-          return (CompressionCodec) ReflectionUtils.newInstance(Class.forName(clazz), myConf);
-        } catch (ClassNotFoundException e) {
-          // that is okay
-        }
-        return null;
+        return createNewCodec(CONF_LZO_CLASS, DEFAULT_CLAZZ, bufferSize, BUFFER_SIZE_OPT);
       }
 
       @Override
@@ -187,13 +196,8 @@
           throw new IOException("LZO codec class not specified. Did you forget to set property "
               + CONF_LZO_CLASS + "?");
         }
-        InputStream bis1 = null;
-        if (downStreamBufferSize > 0) {
-          bis1 = new BufferedInputStream(downStream, downStreamBufferSize);
-        } else {
-          bis1 = downStream;
-        }
-        CompressionInputStream cis = codec.createInputStream(bis1, decompressor);
+        InputStream bis = bufferStream(downStream, downStreamBufferSize);
+        CompressionInputStream cis = codec.createInputStream(bis, decompressor);
         return new BufferedInputStream(cis, DATA_IBUF_SIZE);
       }
 
@@ -204,14 +208,7 @@
           throw new IOException("LZO codec class not specified. Did you forget to set property "
               + CONF_LZO_CLASS + "?");
         }
-        OutputStream bos1 = null;
-        if (downStreamBufferSize > 0) {
-          bos1 = new BufferedOutputStream(downStream, downStreamBufferSize);
-        } else {
-          bos1 = downStream;
-        }
-        CompressionOutputStream cos = codec.createOutputStream(bos1, compressor);
-        return new BufferedOutputStream(new FinishOnFlushCompressionStream(cos), DATA_OBUF_SIZE);
+        return createFinishedOnFlushCompressionStream(downStream, compressor, downStreamBufferSize);
       }
 
     },
@@ -241,54 +238,28 @@
       }
 
       /**
-       * Create a new GZ codec
-       *
-       * @param bufferSize
-       *          buffer size to for GZ
-       * @return created codec
+       * Creates a new GZ codec.
        */
       @Override
       protected CompressionCodec createNewCodec(final int bufferSize) {
-        DefaultCodec myCodec = new DefaultCodec();
-        Configuration myConf = new Configuration(conf);
-        // only use the buffersize if > 0, otherwise we'll use
-        // the default defined within the codec
-        if (bufferSize > 0)
-          myConf.setInt(BUFFER_SIZE_OPT, bufferSize);
-        myCodec.setConf(myConf);
-        return myCodec;
+        Configuration newConfig = new Configuration(conf);
+        updateBuffer(newConfig, BUFFER_SIZE_OPT, bufferSize);
+        DefaultCodec newCodec = new DefaultCodec();
+        newCodec.setConf(newConfig);
+        return newCodec;
       }
 
       @Override
       public InputStream createDecompressionStream(InputStream downStream,
           Decompressor decompressor, int downStreamBufferSize) throws IOException {
-        // Set the internal buffer size to read from down stream.
-        CompressionCodec decomCodec = codec;
-        // if we're not using the default, let's pull from the loading cache
-        if (downStreamBufferSize != DEFAULT_BUFFER_SIZE) {
-          Entry<Algorithm,Integer> sizeOpt = Maps.immutableEntry(GZ, downStreamBufferSize);
-          try {
-            decomCodec = codecCache.get(sizeOpt);
-          } catch (ExecutionException e) {
-            throw new IOException(e);
-          }
-        }
-        CompressionInputStream cis = decomCodec.createInputStream(downStream, decompressor);
-        return new BufferedInputStream(cis, DATA_IBUF_SIZE);
+        return createDecompressionStream(downStream, decompressor, downStreamBufferSize,
+            DEFAULT_BUFFER_SIZE, GZ, codec);
       }
 
       @Override
       public OutputStream createCompressionStream(OutputStream downStream, Compressor compressor,
           int downStreamBufferSize) throws IOException {
-        OutputStream bos1 = null;
-        if (downStreamBufferSize > 0) {
-          bos1 = new BufferedOutputStream(downStream, downStreamBufferSize);
-        } else {
-          bos1 = downStream;
-        }
-        // always uses the default buffer size
-        CompressionOutputStream cos = codec.createOutputStream(bos1, compressor);
-        return new BufferedOutputStream(new FinishOnFlushCompressionStream(cos), DATA_OBUF_SIZE);
+        return createFinishedOnFlushCompressionStream(downStream, compressor, downStreamBufferSize);
       }
 
       @Override
@@ -306,16 +277,11 @@
       @Override
       public InputStream createDecompressionStream(InputStream downStream,
           Decompressor decompressor, int downStreamBufferSize) {
-        if (downStreamBufferSize > 0) {
-          return new BufferedInputStream(downStream, downStreamBufferSize);
-        }
-        return downStream;
+        return bufferStream(downStream, downStreamBufferSize);
       }
 
       @Override
-      public void initializeDefaultCodec() {
-
-      }
+      public void initializeDefaultCodec() {}
 
       @Override
       protected CompressionCodec createNewCodec(final int bufferSize) {
@@ -325,11 +291,7 @@
       @Override
       public OutputStream createCompressionStream(OutputStream downStream, Compressor compressor,
           int downStreamBufferSize) {
-        if (downStreamBufferSize > 0) {
-          return new BufferedOutputStream(downStream, downStreamBufferSize);
-        }
-
-        return downStream;
+        return bufferStream(downStream, downStreamBufferSize);
       }
 
       @Override
@@ -339,85 +301,56 @@
     },
 
     SNAPPY(COMPRESSION_SNAPPY) {
-      // Use base type to avoid compile-time dependencies.
-      private transient CompressionCodec snappyCodec = null;
-      /**
-       * determines if we've checked the codec status. ensures we don't recreate the default codec
-       */
-      private final AtomicBoolean checked = new AtomicBoolean(false);
-      private static final String defaultClazz = "org.apache.hadoop.io.compress.SnappyCodec";
 
       /**
-       * Buffer size option
+       * The default codec class.
+       */
+      private static final String DEFAULT_CLAZZ = "org.apache.hadoop.io.compress.SnappyCodec";
+
+      /**
+       * Configuration option for snappy buffer size.
        */
       private static final String BUFFER_SIZE_OPT = "io.compression.codec.snappy.buffersize";
 
       /**
-       * Default buffer size value
+       * Default buffer size.
        */
       private static final int DEFAULT_BUFFER_SIZE = 64 * 1024;
 
+      /**
+       * Whether or not the codec status has been checked. Ensures the default codec is not
+       * recreated.
+       */
+      private final AtomicBoolean checked = new AtomicBoolean(false);
+
+      private transient CompressionCodec codec = null;
+
       @Override
       public CompressionCodec getCodec() {
-        return snappyCodec;
+        return codec;
       }
 
       @Override
       public void initializeDefaultCodec() {
-        if (!checked.get()) {
-          checked.set(true);
-          snappyCodec = createNewCodec(DEFAULT_BUFFER_SIZE);
-        }
+        codec = initCodec(checked, DEFAULT_BUFFER_SIZE, codec);
       }
 
       /**
        * Creates a new snappy codec.
-       *
-       * @param bufferSize
-       *          incoming buffer size
-       * @return new codec or null, depending on if installed
        */
       @Override
       protected CompressionCodec createNewCodec(final int bufferSize) {
-
-        String extClazz =
-            (conf.get(CONF_SNAPPY_CLASS) == null ? System.getProperty(CONF_SNAPPY_CLASS) : null);
-        String clazz = (extClazz != null) ? extClazz : defaultClazz;
-        try {
-          log.info("Trying to load snappy codec class: {}", clazz);
-
-          Configuration myConf = new Configuration(conf);
-          // only use the buffersize if > 0, otherwise we'll use
-          // the default defined within the codec
-          if (bufferSize > 0)
-            myConf.setInt(BUFFER_SIZE_OPT, bufferSize);
-
-          return (CompressionCodec) ReflectionUtils.newInstance(Class.forName(clazz), myConf);
-
-        } catch (ClassNotFoundException e) {
-          // that is okay
-        }
-
-        return null;
+        return createNewCodec(CONF_SNAPPY_CLASS, DEFAULT_CLAZZ, bufferSize, BUFFER_SIZE_OPT);
       }
 
       @Override
       public OutputStream createCompressionStream(OutputStream downStream, Compressor compressor,
           int downStreamBufferSize) throws IOException {
-
         if (!isSupported()) {
           throw new IOException("SNAPPY codec class not specified. Did you forget to set property "
               + CONF_SNAPPY_CLASS + "?");
         }
-        OutputStream bos1 = null;
-        if (downStreamBufferSize > 0) {
-          bos1 = new BufferedOutputStream(downStream, downStreamBufferSize);
-        } else {
-          bos1 = downStream;
-        }
-        // use the default codec
-        CompressionOutputStream cos = snappyCodec.createOutputStream(bos1, compressor);
-        return new BufferedOutputStream(new FinishOnFlushCompressionStream(cos), DATA_OBUF_SIZE);
+        return createFinishedOnFlushCompressionStream(downStream, compressor, downStreamBufferSize);
       }
 
       @Override
@@ -427,112 +360,68 @@
           throw new IOException("SNAPPY codec class not specified. Did you forget to set property "
               + CONF_SNAPPY_CLASS + "?");
         }
-
-        CompressionCodec decomCodec = snappyCodec;
-        // if we're not using the same buffer size, we'll pull the codec from the loading cache
-        if (downStreamBufferSize != DEFAULT_BUFFER_SIZE) {
-          Entry<Algorithm,Integer> sizeOpt = Maps.immutableEntry(SNAPPY, downStreamBufferSize);
-          try {
-            decomCodec = codecCache.get(sizeOpt);
-          } catch (ExecutionException e) {
-            throw new IOException(e);
-          }
-        }
-
-        CompressionInputStream cis = decomCodec.createInputStream(downStream, decompressor);
-        return new BufferedInputStream(cis, DATA_IBUF_SIZE);
+        return createDecompressionStream(downStream, decompressor, downStreamBufferSize,
+            DEFAULT_BUFFER_SIZE, SNAPPY, codec);
       }
 
       @Override
       public boolean isSupported() {
-
-        return snappyCodec != null;
+        return codec != null;
       }
     },
 
     ZSTANDARD(COMPRESSION_ZSTD) {
-      // Use base type to avoid compile-time dependencies.
-      private transient CompressionCodec zstdCodec = null;
-      /**
-       * determines if we've checked the codec status. ensures we don't recreate the default codec
-       */
-      private final AtomicBoolean checked = new AtomicBoolean(false);
-      private static final String defaultClazz = "org.apache.hadoop.io.compress.ZStandardCodec";
 
       /**
-       * Buffer size option
+       * The default codec class.
+       */
+      private static final String DEFAULT_CLAZZ = "org.apache.hadoop.io.compress.ZStandardCodec";
+
+      /**
+       * Configuration option for zstd buffer size.
        */
       private static final String BUFFER_SIZE_OPT = "io.compression.codec.zstd.buffersize";
 
       /**
-       * Default buffer size value
+       * Default buffer size.
        */
       private static final int DEFAULT_BUFFER_SIZE = 64 * 1024;
 
+      /**
+       * Whether or not the codec status has been checked. Ensures the default codec is not
+       * recreated.
+       */
+      private final AtomicBoolean checked = new AtomicBoolean(false);
+
+      private transient CompressionCodec codec = null;
+
       @Override
       public CompressionCodec getCodec() {
-        return zstdCodec;
+        return codec;
       }
 
       @Override
       public void initializeDefaultCodec() {
-        if (!checked.get()) {
-          checked.set(true);
-          zstdCodec = createNewCodec(DEFAULT_BUFFER_SIZE);
-        }
+        codec = initCodec(checked, DEFAULT_BUFFER_SIZE, codec);
       }
 
       /**
        * Creates a new ZStandard codec.
-       *
-       * @param bufferSize
-       *          incoming buffer size
-       * @return new codec or null, depending on if installed
        */
       @Override
       protected CompressionCodec createNewCodec(final int bufferSize) {
-
-        String extClazz =
-            (conf.get(CONF_ZSTD_CLASS) == null ? System.getProperty(CONF_ZSTD_CLASS) : null);
-        String clazz = (extClazz != null) ? extClazz : defaultClazz;
-        try {
-          log.info("Trying to load ZStandard codec class: {}", clazz);
-
-          Configuration myConf = new Configuration(conf);
-          // only use the buffersize if > 0, otherwise we'll use
-          // the default defined within the codec
-          if (bufferSize > 0)
-            myConf.setInt(BUFFER_SIZE_OPT, bufferSize);
-
-          return (CompressionCodec) ReflectionUtils.newInstance(Class.forName(clazz), myConf);
-
-        } catch (ClassNotFoundException e) {
-          // that is okay
-        }
-
-        return null;
+        return createNewCodec(CONF_ZSTD_CLASS, DEFAULT_CLAZZ, bufferSize, BUFFER_SIZE_OPT);
       }
 
       @Override
       public OutputStream createCompressionStream(OutputStream downStream, Compressor compressor,
           int downStreamBufferSize) throws IOException {
-
         if (!isSupported()) {
           throw new IOException(
               "ZStandard codec class not specified. Did you forget to set property "
                   + CONF_ZSTD_CLASS + "?");
         }
-        OutputStream bos1;
-        if (downStreamBufferSize > 0) {
-          bos1 = new BufferedOutputStream(downStream, downStreamBufferSize);
-        } else {
-          bos1 = downStream;
-        }
-        // use the default codec
-        CompressionOutputStream cos = zstdCodec.createOutputStream(bos1, compressor);
-        BufferedOutputStream bos2 =
-            new BufferedOutputStream(new FinishOnFlushCompressionStream(cos), DATA_OBUF_SIZE);
-        return bos2;
+        return createFinishedOnFlushCompressionStream(downStream, compressor, downStreamBufferSize);
       }
 
       @Override
@@ -543,44 +432,48 @@
               "ZStandard codec class not specified. Did you forget to set property "
                   + CONF_ZSTD_CLASS + "?");
         }
-
-        CompressionCodec decomCodec = zstdCodec;
-        // if we're not using the same buffer size, we'll pull the codec from the loading cache
-        if (downStreamBufferSize != DEFAULT_BUFFER_SIZE) {
-          Entry<Algorithm,Integer> sizeOpt = Maps.immutableEntry(ZSTANDARD, downStreamBufferSize);
-          try {
-            decomCodec = codecCache.get(sizeOpt);
-          } catch (ExecutionException e) {
-            throw new IOException(e);
-          }
-        }
-
-        CompressionInputStream cis = decomCodec.createInputStream(downStream, decompressor);
-        BufferedInputStream bis2 = new BufferedInputStream(cis, DATA_IBUF_SIZE);
-        return bis2;
+        return createDecompressionStream(downStream, decompressor, downStreamBufferSize,
+            DEFAULT_BUFFER_SIZE, ZSTANDARD, codec);
       }
 
       @Override
       public boolean isSupported() {
-        return zstdCodec != null;
+        return codec != null;
       }
     };
 
     /**
-     * The model defined by the static block, below, creates a singleton for each defined codec in
-     * the Algorithm enumeration. By creating the codecs, each call to isSupported shall return
-     * true/false depending on if the codec singleton is defined. The static initializer, below,
-     * will ensure this occurs when the Enumeration is loaded. Furthermore, calls to getCodec will
-     * return the singleton, whether it is null or not.
-     *
-     * Calls to createCompressionStream and createDecompressionStream may return a different codec
-     * than getCodec, if the incoming downStreamBufferSize is different than the default. In such a
-     * case, we will place the resulting codec into the codecCache, defined below, to ensure we have
-     * cache codecs.
-     *
-     * Since codecs are immutable, there is no concern about concurrent access to the
-     * CompressionCodec objects within the guava cache.
+     * Guava cache to have a limited factory pattern defined in the Algorithm enum.
      */
+    private static LoadingCache<Entry<Algorithm,Integer>,CompressionCodec> codecCache =
+        CacheBuilder.newBuilder().maximumSize(25).build(new CacheLoader<>() {
+          @Override
+          public CompressionCodec load(Entry<Algorithm,Integer> key) {
+            return key.getKey().createNewCodec(key.getValue());
+          }
+        });
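+
+    // Illustrative lookup, sketching what the shared decompression path does when a
+    // non-default buffer size is requested (the 128 KB size is hypothetical; get() may
+    // throw ExecutionException):
+    //
+    //   CompressionCodec gzCodec = codecCache.get(Maps.immutableEntry(GZ, 128 * 1024));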
+
+    public static final String CONF_LZO_CLASS = "io.compression.codec.lzo.class";
+    public static final String CONF_SNAPPY_CLASS = "io.compression.codec.snappy.class";
+    public static final String CONF_ZSTD_CLASS = "io.compression.codec.zstd.class";
+
+    // All compression-related settings are required to be configured statically in the
+    // Configuration object.
+    protected static final Configuration conf;
+
+    // The model defined by the static block below creates a singleton for each defined codec in the
+    // Algorithm enumeration. By creating the codecs, each call to isSupported shall return
+    // true/false depending on if the codec singleton is defined. The static initializer, below,
+    // will ensure this occurs when the Enumeration is loaded. Furthermore, calls to getCodec will
+    // return the singleton, whether it is null or not.
+    //
+    // Calls to createCompressionStream and createDecompressionStream may return a different codec
+    // than getCodec, if the incoming downStreamBufferSize is different than the default. In such a
+    // case, we will place the resulting codec into the codecCache, defined above, to ensure we
+    // have cached codecs.
+    //
+    // Since codecs are immutable, there is no concern about concurrent access to the
+    // CompressionCodec objects within the guava cache.
     static {
       conf = new Configuration();
       for (final Algorithm al : Algorithm.values()) {
@@ -588,51 +481,19 @@
       }
     }
 
-    /**
-     * Guava cache to have a limited factory pattern defined in the Algorithm enum.
-     */
-    private static LoadingCache<Entry<Algorithm,Integer>,CompressionCodec> codecCache =
-        CacheBuilder.newBuilder().maximumSize(25)
-            .build(new CacheLoader<Entry<Algorithm,Integer>,CompressionCodec>() {
-              @Override
-              public CompressionCodec load(Entry<Algorithm,Integer> key) {
-                return key.getKey().createNewCodec(key.getValue());
-              }
-            });
+    // Data input buffer size to absorb small reads from application.
+    private static final int DATA_IBUF_SIZE = 1024;
 
-    // We require that all compression related settings are configured
-    // statically in the Configuration object.
-    protected static final Configuration conf;
-    private final String compressName;
-    // data input buffer size to absorb small reads from application.
-    private static final int DATA_IBUF_SIZE = 1 * 1024;
-    // data output buffer size to absorb small writes from application.
+    // Data output buffer size to absorb small writes from application.
     private static final int DATA_OBUF_SIZE = 4 * 1024;
-    public static final String CONF_LZO_CLASS = "io.compression.codec.lzo.class";
-    public static final String CONF_SNAPPY_CLASS = "io.compression.codec.snappy.class";
-    public static final String CONF_ZSTD_CLASS = "io.compression.codec.zstd.class";
+
+    // The name of the compression algorithm.
+    private final String name;
 
     Algorithm(String name) {
-      this.compressName = name;
+      this.name = name;
     }
 
-    abstract CompressionCodec getCodec();
-
-    /**
-     * function to create the default codec object.
-     */
-    abstract void initializeDefaultCodec();
-
-    /**
-     * Shared function to create new codec objects. It is expected that if buffersize is invalid, a
-     * codec will be created with the default buffer size
-     *
-     * @param bufferSize
-     *          configured buffer size.
-     * @return new codec
-     */
-    abstract CompressionCodec createNewCodec(int bufferSize);
-
     public abstract InputStream createDecompressionStream(InputStream downStream,
         Decompressor decompressor, int downStreamBufferSize) throws IOException;
 
@@ -641,22 +502,33 @@
 
     public abstract boolean isSupported();
 
+    abstract CompressionCodec getCodec();
+
+    /**
+     * Create the default codec object.
+     */
+    abstract void initializeDefaultCodec();
+
+    /**
+     * Shared function to create new codec objects. It is expected that if the buffer size is
+     * invalid, a codec will be created with the default buffer size.
+     */
+    abstract CompressionCodec createNewCodec(int bufferSize);
+
     public Compressor getCompressor() {
       CompressionCodec codec = getCodec();
       if (codec != null) {
         Compressor compressor = CodecPool.getCompressor(codec);
         if (compressor != null) {
           if (compressor.finished()) {
-            // Somebody returns the compressor to CodecPool but is still using
-            // it.
+            // Somebody returns the compressor to CodecPool but is still using it.
             log.warn("Compressor obtained from CodecPool already finished()");
           } else {
             log.debug("Got a compressor: {}", compressor.hashCode());
           }
-          /**
-           * Following statement is necessary to get around bugs in 0.18 where a compressor is
-           * referenced after returned back to the codec pool.
-           */
+          // The following statement is necessary to get around bugs in 0.18 where a compressor is
+          // referenced after it's returned back to the codec pool.
           compressor.reset();
         }
         return compressor;
@@ -664,7 +536,7 @@
       return null;
     }
 
-    public void returnCompressor(Compressor compressor) {
+    public void returnCompressor(final Compressor compressor) {
       if (compressor != null) {
         log.debug("Return a compressor: {}", compressor.hashCode());
         CodecPool.returnCompressor(compressor);
@@ -677,57 +549,158 @@
         Decompressor decompressor = CodecPool.getDecompressor(codec);
         if (decompressor != null) {
           if (decompressor.finished()) {
-            // Somebody returns the decompressor to CodecPool but is still using
-            // it.
+            // Somebody returns the decompressor to CodecPool but is still using it.
             log.warn("Decompressor obtained from CodecPool already finished()");
           } else {
             log.debug("Got a decompressor: {}", decompressor.hashCode());
           }
-          /**
-           * Following statement is necessary to get around bugs in 0.18 where a decompressor is
-           * referenced after returned back to the codec pool.
-           */
+          // The following statement is necessary to get around bugs in 0.18 where a decompressor is
+          // referenced after it's returned back to the codec pool.
           decompressor.reset();
         }
         return decompressor;
       }
-
       return null;
     }
 
-    public void returnDecompressor(Decompressor decompressor) {
+    /**
+     * Returns the specified {@link Decompressor} to the codec pool if it is not null.
+     */
+    public void returnDecompressor(final Decompressor decompressor) {
       if (decompressor != null) {
         log.debug("Returned a decompressor: {}", decompressor.hashCode());
         CodecPool.returnDecompressor(decompressor);
       }
     }
 
+    /**
+     * Returns the name of the compression algorithm.
+     *
+     * @return the name
+     */
     public String getName() {
-      return compressName;
+      return name;
     }
-  }
 
-  static Algorithm getCompressionAlgorithmByName(String compressName) {
-    Algorithm[] algos = Algorithm.class.getEnumConstants();
+    /**
+     * Initializes and returns a new codec with the specified buffer size if and only if the
+     * specified {@link AtomicBoolean} has a value of false, or returns the specified original codec
+     * otherwise.
+     */
+    CompressionCodec initCodec(final AtomicBoolean checked, final int bufferSize,
+        final CompressionCodec originalCodec) {
+      if (!checked.get()) {
+        checked.set(true);
+        return createNewCodec(bufferSize);
+      }
+      return originalCodec;
+    }
 
-    for (Algorithm a : algos) {
-      if (a.getName().equals(compressName)) {
-        return a;
+    /**
+     * Returns a new {@link CompressionCodec} of the specified type, or the default type if no
+     * primary type is specified. If the specified buffer size is greater than 0, the specified
+     * buffer size configuration option will be updated in the codec's configuration with the buffer
+     * size. If neither the specified codec type nor the default codec type can be found, null will
+     * be returned.
+     */
+    CompressionCodec createNewCodec(final String codecClazzProp, final String defaultClazz,
+        final int bufferSize, final String bufferSizeConfigOpt) {
+      String extClazz =
+          (conf.get(codecClazzProp) == null ? System.getProperty(codecClazzProp) : null);
+      String clazz = (extClazz != null) ? extClazz : defaultClazz;
+      try {
+        log.info("Trying to load codec class {} for {}", clazz, codecClazzProp);
+        Configuration config = new Configuration(conf);
+        updateBuffer(config, bufferSizeConfigOpt, bufferSize);
+        return (CompressionCodec) ReflectionUtils.newInstance(Class.forName(clazz), config);
+      } catch (ClassNotFoundException e) {
+        // This is okay.
+      }
+      return null;
+    }
+
+    InputStream createDecompressionStream(final InputStream stream, final Decompressor decompressor,
+        final int bufferSize, final int defaultBufferSize, final Algorithm algorithm,
+        CompressionCodec codec) throws IOException {
+      // If the default buffer size is not being used, pull from the loading cache.
+      if (bufferSize != defaultBufferSize) {
+        Entry<Algorithm,Integer> sizeOpt = Maps.immutableEntry(algorithm, bufferSize);
+        try {
+          codec = codecCache.get(sizeOpt);
+        } catch (ExecutionException e) {
+          throw new IOException(e);
+        }
+      }
+      CompressionInputStream cis = codec.createInputStream(stream, decompressor);
+      return new BufferedInputStream(cis, DATA_IBUF_SIZE);
+    }
+
+    /**
+     * Returns a new buffered stream wrapping a {@link FinishOnFlushCompressionStream} initialized
+     * for the specified output stream and compressor.
+     */
+    OutputStream createFinishedOnFlushCompressionStream(final OutputStream downStream,
+        final Compressor compressor, final int downStreamBufferSize) throws IOException {
+      OutputStream out = bufferStream(downStream, downStreamBufferSize);
+      CompressionOutputStream cos = getCodec().createOutputStream(out, compressor);
+      return new BufferedOutputStream(new FinishOnFlushCompressionStream(cos), DATA_OBUF_SIZE);
+    }
+
+    /**
+     * Return the given stream wrapped as a {@link BufferedOutputStream} with the given buffer size
+     * if the buffer size is greater than 0, or return the original stream otherwise.
+     */
+    OutputStream bufferStream(final OutputStream stream, final int bufferSize) {
+      if (bufferSize > 0) {
+        return new BufferedOutputStream(stream, bufferSize);
+      }
+      return stream;
+    }
+
+    /**
+     * Return the given stream wrapped as a {@link BufferedInputStream} with the given buffer size
+     * if the buffer size is greater than 0, or return the original stream otherwise.
+     */
+    InputStream bufferStream(final InputStream stream, final int bufferSize) {
+      if (bufferSize > 0) {
+        return new BufferedInputStream(stream, bufferSize);
+      }
+      return stream;
+    }
+
+    /**
+     * Updates the value of the specified buffer size opt in the given {@link Configuration} if the
+     * new buffer size is greater than 0.
+     */
+    void updateBuffer(final Configuration config, final String bufferSizeOpt,
+        final int bufferSize) {
+      // Use the buffer size only if it is greater than 0; otherwise use the default defined within
+      // the codec.
+      if (bufferSize > 0) {
+        config.setInt(bufferSizeOpt, bufferSize);
       }
     }
-
-    throw new IllegalArgumentException("Unsupported compression algorithm name: " + compressName);
   }
 
   public static String[] getSupportedAlgorithms() {
-    Algorithm[] algos = Algorithm.class.getEnumConstants();
-
-    ArrayList<String> ret = new ArrayList<>();
-    for (Algorithm a : algos) {
-      if (a.isSupported()) {
-        ret.add(a.getName());
+    Algorithm[] algorithms = Algorithm.class.getEnumConstants();
+    ArrayList<String> supportedAlgorithms = new ArrayList<>();
+    for (Algorithm algorithm : algorithms) {
+      if (algorithm.isSupported()) {
+        supportedAlgorithms.add(algorithm.getName());
       }
     }
-    return ret.toArray(new String[ret.size()]);
+    return supportedAlgorithms.toArray(new String[0]);
+  }
+
+  static Algorithm getCompressionAlgorithmByName(final String name) {
+    Algorithm[] algorithms = Algorithm.class.getEnumConstants();
+    for (Algorithm algorithm : algorithms) {
+      if (algorithm.getName().equals(name)) {
+        return algorithm;
+      }
+    }
+    throw new IllegalArgumentException("Unsupported compression algorithm name: " + name);
   }
 }
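
The codecCache above gives Compression.Algorithm a bounded flyweight: one codec instance per
(algorithm, buffer size) pair, created lazily by the Guava loader. Below is a minimal,
self-contained sketch of that pattern; the class and enum names are illustrative stand-ins,
not Accumulo types.

```java
import java.util.Map.Entry;
import java.util.concurrent.ExecutionException;

import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import com.google.common.collect.Maps;

public class CodecCacheSketch {
  enum Algo { GZ, SNAPPY }

  // One cached instance per (algorithm, bufferSize) pair, bounded at 25 entries,
  // mirroring the codecCache in Compression.Algorithm.
  private static final LoadingCache<Entry<Algo,Integer>,String> CACHE =
      CacheBuilder.newBuilder().maximumSize(25).build(new CacheLoader<>() {
        @Override
        public String load(Entry<Algo,Integer> key) {
          // Stand-in for Algorithm.createNewCodec(bufferSize).
          return key.getKey() + "-codec-" + key.getValue();
        }
      });

  public static void main(String[] args) throws ExecutionException {
    String c1 = CACHE.get(Maps.immutableEntry(Algo.GZ, 64 * 1024));
    String c2 = CACHE.get(Maps.immutableEntry(Algo.GZ, 64 * 1024));
    System.out.println(c1 == c2); // true: repeated lookups reuse the cached instance
  }
}
```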
diff --git a/core/src/main/java/org/apache/accumulo/core/file/rfile/bcfile/PrintInfo.java b/core/src/main/java/org/apache/accumulo/core/file/rfile/bcfile/PrintInfo.java
index 820ae2a..9263e27 100644
--- a/core/src/main/java/org/apache/accumulo/core/file/rfile/bcfile/PrintInfo.java
+++ b/core/src/main/java/org/apache/accumulo/core/file/rfile/bcfile/PrintInfo.java
@@ -76,16 +76,17 @@
       System.err.println("No files were given");
       System.exit(-1);
     }
-    SiteConfiguration siteConfig = opts.getSiteConfiguration();
+    var siteConfig = opts.getSiteConfiguration();
     Configuration conf = new Configuration();
     FileSystem hadoopFs = FileSystem.get(conf);
     FileSystem localFs = FileSystem.getLocal(conf);
     Path path = new Path(opts.file);
     FileSystem fs;
-    if (opts.file.contains(":"))
+    if (opts.file.contains(":")) {
       fs = path.getFileSystem(conf);
-    else
+    } else {
       fs = hadoopFs.exists(path) ? hadoopFs : localFs; // fall back to local
+    }
     printMetaBlockInfo(siteConfig, conf, fs, path);
   }
 }
diff --git a/core/src/main/java/org/apache/accumulo/core/iterators/Combiner.java b/core/src/main/java/org/apache/accumulo/core/iterators/Combiner.java
index 4babca8..82c86d2 100644
--- a/core/src/main/java/org/apache/accumulo/core/iterators/Combiner.java
+++ b/core/src/main/java/org/apache/accumulo/core/iterators/Combiner.java
@@ -310,7 +310,7 @@
     // TODO test
     Combiner newInstance;
     try {
-      newInstance = this.getClass().newInstance();
+      newInstance = this.getClass().getDeclaredConstructor().newInstance();
     } catch (Exception e) {
       throw new RuntimeException(e);
     }
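
The same reflection change recurs below in Filter, RowEncodingIterator, RowFilter, SeekingFilter,
and TransformingIterator. A small sketch of why, assuming a public no-arg constructor:
Class.newInstance() is deprecated since Java 9 because it propagates checked constructor
exceptions without declaring them, whereas getDeclaredConstructor().newInstance() wraps them in
InvocationTargetException, a ReflectiveOperationException.

```java
public class DeepCopySketch {
  public DeepCopySketch() {}

  DeepCopySketch copy() {
    try {
      // Preferred replacement for the deprecated getClass().newInstance():
      // constructor exceptions surface as ReflectiveOperationException.
      return getClass().getDeclaredConstructor().newInstance();
    } catch (ReflectiveOperationException e) {
      throw new RuntimeException(e);
    }
  }

  public static void main(String[] args) {
    System.out.println(new DeepCopySketch().copy().getClass()); // class DeepCopySketch
  }
}
```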
diff --git a/core/src/main/java/org/apache/accumulo/core/iterators/Filter.java b/core/src/main/java/org/apache/accumulo/core/iterators/Filter.java
index c67252f..15f8f76 100644
--- a/core/src/main/java/org/apache/accumulo/core/iterators/Filter.java
+++ b/core/src/main/java/org/apache/accumulo/core/iterators/Filter.java
@@ -42,7 +42,7 @@
   public SortedKeyValueIterator<Key,Value> deepCopy(IteratorEnvironment env) {
     Filter newInstance;
     try {
-      newInstance = this.getClass().newInstance();
+      newInstance = this.getClass().getDeclaredConstructor().newInstance();
     } catch (Exception e) {
       throw new RuntimeException(e);
     }
diff --git a/core/src/main/java/org/apache/accumulo/core/iterators/OrIterator.java b/core/src/main/java/org/apache/accumulo/core/iterators/OrIterator.java
index c4f2c03..b18389c 100644
--- a/core/src/main/java/org/apache/accumulo/core/iterators/OrIterator.java
+++ b/core/src/main/java/org/apache/accumulo/core/iterators/OrIterator.java
@@ -33,7 +33,6 @@
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.Range;
 import org.apache.accumulo.core.data.Value;
-import org.apache.commons.lang3.StringUtils;
 import org.apache.hadoop.io.Text;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -305,7 +304,7 @@
       throw new IllegalArgumentException(
           COLUMNS_KEY + " was not provided in the iterator configuration");
     }
-    String[] columns = StringUtils.split(columnsValue, ',');
+    String[] columns = columnsValue.split(",");
     setTerms(source, Arrays.asList(columns), env);
     LOG.trace("Set sources: {}", this.sources);
   }
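
Dropping commons-lang here changes empty-token handling slightly; for well-formed column lists
the two calls are equivalent. A quick sketch of the difference:

```java
import java.util.Arrays;

public class SplitSketch {
  public static void main(String[] args) {
    // String.split(",") keeps interior empty tokens; only trailing empties are removed.
    System.out.println(Arrays.toString("a,b".split(",")));  // [a, b]
    System.out.println(Arrays.toString("a,,b".split(","))); // [a, , b]
    // StringUtils.split("a,,b", ',') would have yielded [a, b], dropping the empty token.
  }
}
```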
diff --git a/core/src/main/java/org/apache/accumulo/core/iterators/TypedValueCombiner.java b/core/src/main/java/org/apache/accumulo/core/iterators/TypedValueCombiner.java
index 0610ed0..2626064 100644
--- a/core/src/main/java/org/apache/accumulo/core/iterators/TypedValueCombiner.java
+++ b/core/src/main/java/org/apache/accumulo/core/iterators/TypedValueCombiner.java
@@ -128,8 +128,8 @@
       @SuppressWarnings("unchecked")
       Class<? extends Encoder<V>> clazz = (Class<? extends Encoder<V>>) AccumuloVFSClassLoader
           .loadClass(encoderClass, Encoder.class);
-      encoder = clazz.newInstance();
-    } catch (ClassNotFoundException | IllegalAccessException | InstantiationException e) {
+      encoder = clazz.getDeclaredConstructor().newInstance();
+    } catch (ReflectiveOperationException e) {
       throw new IllegalArgumentException(e);
     }
   }
diff --git a/core/src/main/java/org/apache/accumulo/core/iterators/conf/ColumnToClassMapping.java b/core/src/main/java/org/apache/accumulo/core/iterators/conf/ColumnToClassMapping.java
index 979d507..2d80fdd 100644
--- a/core/src/main/java/org/apache/accumulo/core/iterators/conf/ColumnToClassMapping.java
+++ b/core/src/main/java/org/apache/accumulo/core/iterators/conf/ColumnToClassMapping.java
@@ -42,13 +42,12 @@
   }
 
   public ColumnToClassMapping(Map<String,String> objectStrings, Class<? extends K> c)
-      throws InstantiationException, IllegalAccessException, ClassNotFoundException, IOException {
+      throws ReflectiveOperationException, IOException {
     this(objectStrings, c, null);
   }
 
   public ColumnToClassMapping(Map<String,String> objectStrings, Class<? extends K> c,
-      String context)
-      throws InstantiationException, IllegalAccessException, ClassNotFoundException, IOException {
+      String context) throws ReflectiveOperationException, IOException {
     this();
 
     for (Entry<String,String> entry : objectStrings.entrySet()) {
@@ -65,7 +64,7 @@
         clazz = AccumuloVFSClassLoader.loadClass(className, c);
 
       @SuppressWarnings("unchecked")
-      K inst = (K) clazz.newInstance();
+      K inst = (K) clazz.getDeclaredConstructor().newInstance();
       if (pcic.getSecond() == null) {
         addObject(pcic.getFirst(), inst);
       } else {
diff --git a/core/src/main/java/org/apache/accumulo/core/iterators/system/LocalityGroupIterator.java b/core/src/main/java/org/apache/accumulo/core/iterators/system/LocalityGroupIterator.java
index 5f724ce..c6f9581 100644
--- a/core/src/main/java/org/apache/accumulo/core/iterators/system/LocalityGroupIterator.java
+++ b/core/src/main/java/org/apache/accumulo/core/iterators/system/LocalityGroupIterator.java
@@ -36,8 +36,6 @@
 import org.apache.accumulo.core.iterators.SortedKeyValueIterator;
 import org.apache.commons.lang3.mutable.MutableLong;
 
-import com.google.common.collect.ImmutableSet;
-
 public class LocalityGroupIterator extends HeapIterator implements InterruptibleIterator {
 
   private static final Collection<ByteSequence> EMPTY_CF_SET = Collections.emptySet();
@@ -104,11 +102,11 @@
    * This will cache the arguments used in the seek call along with the locality groups seeked.
    */
   public static class LocalityGroupSeekCache {
-    private ImmutableSet<ByteSequence> lastColumnFamilies;
+    private Set<ByteSequence> lastColumnFamilies;
     private volatile boolean lastInclusive;
     private Collection<LocalityGroup> lastUsed;
 
-    public ImmutableSet<ByteSequence> getLastColumnFamilies() {
+    public Set<ByteSequence> getLastColumnFamilies() {
       return lastColumnFamilies;
     }
 
@@ -164,15 +162,16 @@
     hiter.clear();
 
     Set<ByteSequence> cfSet;
-    if (columnFamilies.size() > 0)
+    if (columnFamilies.size() > 0) {
       if (columnFamilies instanceof Set<?>) {
         cfSet = (Set<ByteSequence>) columnFamilies;
       } else {
         cfSet = new HashSet<>();
         cfSet.addAll(columnFamilies);
       }
-    else
+    } else {
       cfSet = Collections.emptySet();
+    }
 
     // determine the set of groups to use
     Collection<LocalityGroup> groups = Collections.emptyList();
@@ -258,17 +257,18 @@
   public static LocalityGroupSeekCache seek(HeapIterator hiter, LocalityGroupContext lgContext,
       Range range, Collection<ByteSequence> columnFamilies, boolean inclusive,
       LocalityGroupSeekCache lgSeekCache) throws IOException {
-    if (lgSeekCache == null)
+    if (lgSeekCache == null) {
       lgSeekCache = new LocalityGroupSeekCache();
+    }
 
     // determine if the arguments have changed since the last time
     boolean sameArgs = false;
-    ImmutableSet<ByteSequence> cfSet = null;
+    Set<ByteSequence> cfSet = null;
     if (lgSeekCache.lastUsed != null && inclusive == lgSeekCache.lastInclusive) {
       if (columnFamilies instanceof Set) {
         sameArgs = lgSeekCache.lastColumnFamilies.equals(columnFamilies);
       } else {
-        cfSet = ImmutableSet.copyOf(columnFamilies);
+        cfSet = Set.copyOf(columnFamilies);
         sameArgs = lgSeekCache.lastColumnFamilies.equals(cfSet);
       }
     }
@@ -283,8 +283,7 @@
       }
     } else { // otherwise capture the parameters, and use the static seek method to locate the
              // locality groups to use.
-      lgSeekCache.lastColumnFamilies =
-          (cfSet == null ? ImmutableSet.copyOf(columnFamilies) : cfSet);
+      lgSeekCache.lastColumnFamilies = (cfSet == null ? Set.copyOf(columnFamilies) : cfSet);
       lgSeekCache.lastInclusive = inclusive;
       lgSeekCache.lastUsed = _seek(hiter, lgContext, range, columnFamilies, inclusive);
     }
@@ -304,8 +303,9 @@
 
     for (int i = 0; i < lgContext.groups.size(); i++) {
       groupsCopy[i] = new LocalityGroup(lgContext.groups.get(i), env);
-      if (interruptFlag != null)
+      if (interruptFlag != null) {
         groupsCopy[i].getIterator().setInterruptFlag(interruptFlag);
+      }
     }
 
     return new LocalityGroupIterator(groupsCopy);
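
Replacing Guava's ImmutableSet with java.util.Set.copyOf keeps the snapshot semantics while
removing the Guava type from the seek cache's API. A sketch of the semantics (Java 10+):

```java
import java.util.List;
import java.util.Set;

public class CopyOfSketch {
  public static void main(String[] args) {
    // Set.copyOf returns an unmodifiable snapshot, like ImmutableSet.copyOf did.
    Set<String> cfSet = Set.copyOf(List.of("cf1", "cf2"));
    try {
      cfSet.add("cf3");
    } catch (UnsupportedOperationException expected) {
      System.out.println("unmodifiable, as with ImmutableSet");
    }
    // Copying an already-immutable set typically returns the same instance,
    // so repeated defensive copies stay cheap (prints true on standard JDKs).
    System.out.println(Set.copyOf(cfSet) == cfSet);
  }
}
```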
diff --git a/core/src/main/java/org/apache/accumulo/core/iterators/user/RegExFilter.java b/core/src/main/java/org/apache/accumulo/core/iterators/user/RegExFilter.java
index 2bdcdf3..7b4b2c4 100644
--- a/core/src/main/java/org/apache/accumulo/core/iterators/user/RegExFilter.java
+++ b/core/src/main/java/org/apache/accumulo/core/iterators/user/RegExFilter.java
@@ -59,6 +59,8 @@
 
   public static final String ENCODING_DEFAULT = UTF_8.name();
 
+  private Charset encoding = UTF_8;
+
   private Matcher rowMatcher;
   private Matcher colfMatcher;
   private Matcher colqMatcher;
@@ -66,8 +68,6 @@
   private boolean orFields = false;
   private boolean matchSubstring = false;
 
-  private Charset encoding = Charset.forName(ENCODING_DEFAULT);
-
   private Matcher copyMatcher(Matcher m) {
     if (m == null)
       return m;
diff --git a/core/src/main/java/org/apache/accumulo/core/iterators/user/RowEncodingIterator.java b/core/src/main/java/org/apache/accumulo/core/iterators/user/RowEncodingIterator.java
index c693ba8..a9d902e 100644
--- a/core/src/main/java/org/apache/accumulo/core/iterators/user/RowEncodingIterator.java
+++ b/core/src/main/java/org/apache/accumulo/core/iterators/user/RowEncodingIterator.java
@@ -87,7 +87,7 @@
   public SortedKeyValueIterator<Key,Value> deepCopy(IteratorEnvironment env) {
     RowEncodingIterator newInstance;
     try {
-      newInstance = this.getClass().newInstance();
+      newInstance = this.getClass().getDeclaredConstructor().newInstance();
     } catch (Exception e) {
       throw new RuntimeException(e);
     }
diff --git a/core/src/main/java/org/apache/accumulo/core/iterators/user/RowFilter.java b/core/src/main/java/org/apache/accumulo/core/iterators/user/RowFilter.java
index 67a20db..f1d5e1e 100644
--- a/core/src/main/java/org/apache/accumulo/core/iterators/user/RowFilter.java
+++ b/core/src/main/java/org/apache/accumulo/core/iterators/user/RowFilter.java
@@ -151,7 +151,7 @@
   public SortedKeyValueIterator<Key,Value> deepCopy(IteratorEnvironment env) {
     RowFilter newInstance;
     try {
-      newInstance = getClass().newInstance();
+      newInstance = getClass().getDeclaredConstructor().newInstance();
     } catch (Exception e) {
       throw new RuntimeException(e);
     }
diff --git a/core/src/main/java/org/apache/accumulo/core/iterators/user/SeekingFilter.java b/core/src/main/java/org/apache/accumulo/core/iterators/user/SeekingFilter.java
index 00c27a4..fc31aaf 100644
--- a/core/src/main/java/org/apache/accumulo/core/iterators/user/SeekingFilter.java
+++ b/core/src/main/java/org/apache/accumulo/core/iterators/user/SeekingFilter.java
@@ -153,7 +153,7 @@
   public SortedKeyValueIterator<Key,Value> deepCopy(IteratorEnvironment env) {
     SeekingFilter newInstance;
     try {
-      newInstance = this.getClass().newInstance();
+      newInstance = this.getClass().getDeclaredConstructor().newInstance();
     } catch (Exception e) {
       throw new RuntimeException(e);
     }
diff --git a/core/src/main/java/org/apache/accumulo/core/iterators/user/TransformingIterator.java b/core/src/main/java/org/apache/accumulo/core/iterators/user/TransformingIterator.java
index bc5ef88..e3eeb94 100644
--- a/core/src/main/java/org/apache/accumulo/core/iterators/user/TransformingIterator.java
+++ b/core/src/main/java/org/apache/accumulo/core/iterators/user/TransformingIterator.java
@@ -175,7 +175,7 @@
     TransformingIterator copy;
 
     try {
-      copy = getClass().newInstance();
+      copy = getClass().getDeclaredConstructor().newInstance();
     } catch (Exception e) {
       throw new RuntimeException(e);
     }
diff --git a/core/src/main/java/org/apache/accumulo/core/metadata/MetadataTable.java b/core/src/main/java/org/apache/accumulo/core/metadata/MetadataTable.java
index b5208e3..2bc9906 100644
--- a/core/src/main/java/org/apache/accumulo/core/metadata/MetadataTable.java
+++ b/core/src/main/java/org/apache/accumulo/core/metadata/MetadataTable.java
@@ -25,5 +25,4 @@
 
   public static final TableId ID = TableId.of("!0");
   public static final String NAME = Namespace.ACCUMULO.name() + ".metadata";
-
 }
diff --git a/core/src/main/java/org/apache/accumulo/core/metadata/RootTable.java b/core/src/main/java/org/apache/accumulo/core/metadata/RootTable.java
index 1d3b330..d943f17 100644
--- a/core/src/main/java/org/apache/accumulo/core/metadata/RootTable.java
+++ b/core/src/main/java/org/apache/accumulo/core/metadata/RootTable.java
@@ -33,15 +33,14 @@
   public static final String ROOT_TABLET_LOCATION = "/root_tablet";
 
   /**
-   * ZK path relative to the instance directory for information about the root tablet
+   * ZK path relative to the zookeeper node where the root tablet metadata is stored.
    */
   public static final String ZROOT_TABLET = ROOT_TABLET_LOCATION;
-  public static final String ZROOT_TABLET_LOCATION = ZROOT_TABLET + "/location";
-  public static final String ZROOT_TABLET_FUTURE_LOCATION = ZROOT_TABLET + "/future_location";
-  public static final String ZROOT_TABLET_LAST_LOCATION = ZROOT_TABLET + "/lastlocation";
-  public static final String ZROOT_TABLET_WALOGS = ZROOT_TABLET + "/walogs";
-  public static final String ZROOT_TABLET_CURRENT_LOGS = ZROOT_TABLET + "/current_logs";
-  public static final String ZROOT_TABLET_PATH = ZROOT_TABLET + "/dir";
+
+  /**
+   * ZK path relative to the zookeeper node where the root tablet gc candidates are stored.
+   */
+  public static final String ZROOT_TABLET_GC_CANDIDATES = ZROOT_TABLET + "/gc_candidates";
 
   public static final KeyExtent EXTENT = new KeyExtent(ID, null, null);
   public static final KeyExtent OLD_EXTENT =
diff --git a/core/src/main/java/org/apache/accumulo/core/metadata/schema/Ample.java b/core/src/main/java/org/apache/accumulo/core/metadata/schema/Ample.java
new file mode 100644
index 0000000..61fbf06
--- /dev/null
+++ b/core/src/main/java/org/apache/accumulo/core/metadata/schema/Ample.java
@@ -0,0 +1,226 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.accumulo.core.metadata.schema;
+
+import java.util.Collection;
+import java.util.Iterator;
+
+import org.apache.accumulo.core.data.TableId;
+import org.apache.accumulo.core.dataImpl.KeyExtent;
+import org.apache.accumulo.core.metadata.MetadataTable;
+import org.apache.accumulo.core.metadata.RootTable;
+import org.apache.accumulo.core.metadata.schema.TabletMetadata.ColumnType;
+import org.apache.accumulo.core.metadata.schema.TabletMetadata.LocationType;
+import org.apache.accumulo.core.tabletserver.log.LogEntry;
+import org.apache.accumulo.core.util.HostAndPort;
+import org.apache.accumulo.fate.zookeeper.ZooLock;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.io.Text;
+
+/**
+ * Accumulo Metadata Persistence Layer. Entry point and abstractions layer for reading and updating
+ * persisted Accumulo metadata. This metadata may be stored in Zookeeper or in Accumulo system
+ * tables.
+ *
+ * <p>
+ * This interface seeks to satisfy the following goals.
+ *
+ * <UL>
+ * <LI>Provide a single entry point for all reading and writing of Accumulo Metadata.
+ * <LI>The root tablet persists its data in Zookeeper. Metadata tablets persist their data in root
+ * tablet. All other tablets persist their data in the metadata table. This interface abstracts how
+ * and where information for a tablet is actually persisted.
+ * <LI>Before the creation of this interface, many concurrent metadata table updates resulted in
+ * separate synchronous RPCs. The design of this interface allows batching of metadata table updates
+ * within a tablet server for cluster-wide efficiency. Batching is not required by
+ * implementations, but the design of the interface makes it possible.
+ * <LI>Make code that updates Accumulo persistent metadata more concise. Before this interface
+ * existed, there was a lot of redundant and verbose code for updating metadata.
+ * <LI>Reduce specialized code for the root tablet. Currently there is specialized code to manage
+ * the root tablet's files that differs from that of all other tablets. This interface is the
+ * beginning of an effort to remove this specialized code. See #936
+ * </UL>
+ */
+public interface Ample {
+
+  /**
+   * Accumulo is a distributed tree with three levels. This enum is used to communicate to Ample
+   * that code is interested in operating on the metadata of a data level. Sometimes table ids or
+   * key extents are passed to Ample in lieu of a data level; in these cases the data level is
+   * derived from the table id.
+   */
+  public enum DataLevel {
+    ROOT(null, null),
+    METADATA(RootTable.NAME, RootTable.ID),
+    USER(MetadataTable.NAME, MetadataTable.ID);
+
+    private final String table;
+    private final TableId id;
+
+    private DataLevel(String table, TableId id) {
+      this.table = table;
+      this.id = id;
+    }
+
+    /**
+     * @return The name of the Accumulo table in which this data level stores its metadata.
+     */
+    public String metaTable() {
+      if (table == null)
+        throw new UnsupportedOperationException();
+      return table;
+    }
+
+    /**
+     * @return The Id of the Accumulo table in which this data level stores its metadata.
+     */
+    public TableId tableId() {
+      if (id == null)
+        throw new UnsupportedOperationException();
+      return id;
+    }
+  }
+
+  /**
+   * Read a single tablet's metadata. No checking is done for prev row, so it could differ.
+   *
+   * @param extent
+   *          Reads tablet metadata using the table id and end row from this extent.
+   * @param colsToFetch
+   *          Which tablet columns to fetch. If empty, then everything is fetched.
+   */
+  TabletMetadata readTablet(KeyExtent extent, ColumnType... colsToFetch);
+
+  /**
+   * Initiates mutating a single tablet's persistent metadata. No data is persisted until the
+   * {@code mutate()} method is called on the returned object. If updating multiple tablets,
+   * consider using {@link #mutateTablets()}
+   *
+   * @param extent
+   *          Mutates a tablet that has this table id and end row. The prev end row is not
+   *          considered or checked.
+   */
+  default TabletMutator mutateTablet(KeyExtent extent) {
+    throw new UnsupportedOperationException();
+  }
+
+  /**
+   * Use this when updating multiple tablets. Ensure the returned TabletsMutator is closed, or data
+   * may not be persisted.
+   */
+  default TabletsMutator mutateTablets() {
+    throw new UnsupportedOperationException();
+  }
+
+  default void putGcCandidates(TableId tableId, Collection<? extends Ample.FileMeta> candidates) {
+    throw new UnsupportedOperationException();
+  }
+
+  default void deleteGcCandidates(DataLevel level, Collection<String> paths) {
+    throw new UnsupportedOperationException();
+  }
+
+  default Iterator<String> getGcCandidates(DataLevel level, String continuePoint) {
+    throw new UnsupportedOperationException();
+  }
+
+  /**
+   * This interface allows efficiently updating multiple tablets. Unless close is called, changes
+   * may not be persisted.
+   */
+  public interface TabletsMutator extends AutoCloseable {
+    TabletMutator mutateTablet(KeyExtent extent);
+
+    @Override
+    void close();
+  }
+
+  /**
+   * Temporary interface, a placeholder for some server-side types like TServerInstance. Need to
+   * simplify and possibly combine these types.
+   */
+  interface TServer {
+    HostAndPort getLocation();
+
+    String getSession();
+  }
+
+  /**
+   * Temporary interface, a placeholder for the server-side type FileRef. Need to simplify this type.
+   */
+  interface FileMeta {
+    public Text meta();
+
+    public Path path();
+  }
+
+  /**
+   * Interface for changing a tablet's persistent data.
+   */
+  interface TabletMutator {
+    public TabletMutator putPrevEndRow(Text per);
+
+    public TabletMutator putFile(FileMeta path, DataFileValue dfv);
+
+    public TabletMutator deleteFile(FileMeta path);
+
+    public TabletMutator putScan(FileMeta path);
+
+    public TabletMutator deleteScan(FileMeta path);
+
+    public TabletMutator putCompactionId(long compactionId);
+
+    public TabletMutator putFlushId(long flushId);
+
+    public TabletMutator putLocation(TServer tserver, LocationType type);
+
+    public TabletMutator deleteLocation(TServer tserver, LocationType type);
+
+    public TabletMutator putZooLock(ZooLock zooLock);
+
+    public TabletMutator putDir(String dir);
+
+    public TabletMutator putWal(LogEntry logEntry);
+
+    public TabletMutator deleteWal(String wal);
+
+    public TabletMutator deleteWal(LogEntry logEntry);
+
+    public TabletMutator putTime(MetadataTime time);
+
+    public TabletMutator putBulkFile(Ample.FileMeta bulkref, long tid);
+
+    public TabletMutator deleteBulkFile(Ample.FileMeta bulkref);
+
+    public TabletMutator putChopped();
+
+    /**
+     * This method persists (or queues for persisting) previous puts and deletes against this object.
+     * Unless this method is called, previous calls will never be persisted. The purpose of this
+     * method is to prevent partial changes in the case of an exception.
+     *
+     * <p>
+     * Implementors of this interface should ensure either all requested changes are persisted or
+     * none.
+     *
+     * <p>
+     * After this method is called, calling any method on this object will result in an exception.
+     */
+    public void mutate();
+  }
+}
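
A hypothetical caller of the interface above; how an Ample instance is obtained (for example,
from a server context) is assumed and not shown, and an implementation must back the default
methods for mutateTablet to succeed.

```java
import org.apache.accumulo.core.dataImpl.KeyExtent;
import org.apache.accumulo.core.metadata.schema.Ample;
import org.apache.accumulo.core.metadata.schema.TabletMetadata;
import org.apache.accumulo.core.metadata.schema.TabletMetadata.ColumnType;

public class AmpleUsageSketch {
  static void example(Ample ample, KeyExtent extent) {
    // Read only the columns needed; passing no columns fetches everything.
    TabletMetadata tm = ample.readTablet(extent, ColumnType.LOCATION, ColumnType.FILES);
    System.out.println(tm.getLocation() + " " + tm.getFiles());

    // Fluent mutation: nothing is persisted until mutate() is called, so a
    // thrown exception before that point leaves no partial changes behind.
    ample.mutateTablet(extent).putFlushId(42L).mutate();
  }
}
```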
diff --git a/core/src/main/java/org/apache/accumulo/core/metadata/schema/AmpleImpl.java b/core/src/main/java/org/apache/accumulo/core/metadata/schema/AmpleImpl.java
new file mode 100644
index 0000000..c7411b7
--- /dev/null
+++ b/core/src/main/java/org/apache/accumulo/core/metadata/schema/AmpleImpl.java
@@ -0,0 +1,44 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.accumulo.core.metadata.schema;
+
+import org.apache.accumulo.core.client.AccumuloClient;
+import org.apache.accumulo.core.dataImpl.KeyExtent;
+import org.apache.accumulo.core.metadata.schema.TabletMetadata.ColumnType;
+import org.apache.accumulo.core.metadata.schema.TabletsMetadata.Options;
+
+import com.google.common.collect.Iterables;
+
+public class AmpleImpl implements Ample {
+  private final AccumuloClient client;
+
+  public AmpleImpl(AccumuloClient client) {
+    this.client = client;
+  }
+
+  @Override
+  public TabletMetadata readTablet(KeyExtent extent, ColumnType... colsToFetch) {
+    Options builder = TabletsMetadata.builder().forTablet(extent);
+    if (colsToFetch.length > 0)
+      builder.fetch(colsToFetch);
+
+    try (TabletsMetadata tablets = builder.build(client)) {
+      return Iterables.getOnlyElement(tablets);
+    }
+  }
+}
diff --git a/core/src/main/java/org/apache/accumulo/core/metadata/schema/MetadataSchema.java b/core/src/main/java/org/apache/accumulo/core/metadata/schema/MetadataSchema.java
index 8da43a4..1a88785 100644
--- a/core/src/main/java/org/apache/accumulo/core/metadata/schema/MetadataSchema.java
+++ b/core/src/main/java/org/apache/accumulo/core/metadata/schema/MetadataSchema.java
@@ -25,8 +25,10 @@
 import org.apache.accumulo.core.data.PartialKey;
 import org.apache.accumulo.core.data.Range;
 import org.apache.accumulo.core.data.TableId;
+import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.schema.Section;
 import org.apache.accumulo.core.util.ColumnFQ;
+import org.apache.accumulo.fate.FateTxId;
 import org.apache.hadoop.io.Text;
 
 /**
@@ -86,11 +88,15 @@
       /**
        * A temporary field in case a split fails and we need to roll back
        */
-      public static final ColumnFQ OLD_PREV_ROW_COLUMN = new ColumnFQ(NAME, new Text("oldprevrow"));
+      public static final String OLD_PREV_ROW_QUAL = "oldprevrow";
+      public static final ColumnFQ OLD_PREV_ROW_COLUMN =
+          new ColumnFQ(NAME, new Text(OLD_PREV_ROW_QUAL));
       /**
        * A temporary field for splits to optimize certain operations
        */
-      public static final ColumnFQ SPLIT_RATIO_COLUMN = new ColumnFQ(NAME, new Text("splitRatio"));
+      public static final String SPLIT_RATIO_QUAL = "splitRatio";
+      public static final ColumnFQ SPLIT_RATIO_COLUMN =
+          new ColumnFQ(NAME, new Text(SPLIT_RATIO_QUAL));
     }
 
     /**
@@ -167,6 +173,20 @@
     public static class BulkFileColumnFamily {
       public static final String STR_NAME = "loaded";
       public static final Text NAME = new Text(STR_NAME);
+
+      public static long getBulkLoadTid(Value v) {
+        return getBulkLoadTid(v.toString());
+      }
+
+      public static long getBulkLoadTid(String vs) {
+        if (FateTxId.isFormatedTid(vs)) {
+          return FateTxId.fromString(vs);
+        } else {
+          // A new serialization format was introduced in 2.0. This code supports deserializing the
+          // old format.
+          return Long.parseLong(vs);
+        }
+      }
     }
 
     /**
@@ -235,12 +255,27 @@
     private static final Section section =
         new Section(RESERVED_PREFIX + "del", true, RESERVED_PREFIX + "dem", false);
 
+    private static final int encoded_prefix_length =
+        section.getRowPrefix().length() + SortSkew.SORTSKEW_LENGTH;
+
     public static Range getRange() {
       return section.getRange();
     }
 
-    public static String getRowPrefix() {
-      return section.getRowPrefix();
+    public static String encodeRow(String value) {
+      return section.getRowPrefix() + SortSkew.getCode(value) + value;
+    }
+
+    public static String decodeRow(String row) {
+      return row.substring(encoded_prefix_length);
+    }
+
+    /**
+     * Value to indicate that the row has been skewed/encoded.
+     */
+    public static class SkewedKeyValue {
+      public static final String STR_NAME = "skewed";
+      public static final Value NAME = new Value(STR_NAME);
     }
 
   }
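
A round-trip sketch of the new delete-marker encoding, assuming the enclosing section class here
is DeletesSection (as the ~del prefix suggests) and using an illustrative file path:

```java
import org.apache.accumulo.core.metadata.schema.MetadataSchema.DeletesSection;

public class DeleteRowSketch {
  public static void main(String[] args) {
    String path = "hdfs://nn/accumulo/tables/1/t-0001/A0000.rf"; // illustrative path
    // encodeRow = section row prefix + fixed-width SortSkew code + original value
    String row = DeletesSection.encodeRow(path);
    // decodeRow strips the prefix and code, recovering the original value
    System.out.println(DeletesSection.decodeRow(row).equals(path)); // true
  }
}
```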
diff --git a/core/src/main/java/org/apache/accumulo/core/metadata/schema/MetadataTime.java b/core/src/main/java/org/apache/accumulo/core/metadata/schema/MetadataTime.java
new file mode 100644
index 0000000..77c7ec2
--- /dev/null
+++ b/core/src/main/java/org/apache/accumulo/core/metadata/schema/MetadataTime.java
@@ -0,0 +1,122 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.core.metadata.schema;
+
+import java.util.Objects;
+
+import org.apache.accumulo.core.client.admin.TimeType;
+
+/**
+ * Immutable metadata time object
+ */
+public final class MetadataTime implements Comparable<MetadataTime> {
+  private final long time;
+  private final TimeType type;
+
+  public MetadataTime(long time, TimeType type) {
+    this.time = time;
+    this.type = type;
+  }
+
+  /**
+   * Creates a MetadataTime object from a string.
+   *
+   * @param timestr
+   *          string representation of a metadata time, ex. "M12345678"
+   * @return the MetadataTime object represented by the string
+   */
+  public static MetadataTime parse(String timestr) throws IllegalArgumentException {
+
+    if (timestr != null && timestr.length() > 1) {
+      return new MetadataTime(Long.parseLong(timestr.substring(1)), getType(timestr.charAt(0)));
+    } else {
+      throw new IllegalArgumentException("Unknown metadata time value " + timestr);
+    }
+  }
+
+  /**
+   * Converts a time type code used in the table data implementation to a {@link TimeType}.
+   *
+   * @param code
+   *          character M or L; any other character results in an exception
+   * @return the {@link TimeType} represented by the code
+   */
+  public static TimeType getType(char code) {
+    switch (code) {
+      case 'M':
+        return TimeType.MILLIS;
+      case 'L':
+        return TimeType.LOGICAL;
+      default:
+        throw new IllegalArgumentException("Unknown time type code : " + code);
+    }
+  }
+
+  /**
+   * @return the single char code representing the given time type
+   */
+  public static char getCode(TimeType type) {
+    switch (type) {
+      case MILLIS:
+        return 'M';
+      case LOGICAL:
+        return 'L';
+      default: // this should never happen
+        throw new IllegalArgumentException("Unknown time type: " + type);
+    }
+  }
+
+  public char getCode() {
+    return getCode(this.type);
+  }
+
+  public String encode() {
+    return "" + getCode() + time;
+  }
+
+  public TimeType getType() {
+    return type;
+  }
+
+  public long getTime() {
+    return time;
+  }
+
+  @Override
+  public boolean equals(Object o) {
+    if (o instanceof MetadataTime) {
+      MetadataTime t = (MetadataTime) o;
+      return time == t.getTime() && type == t.getType();
+    }
+    return false;
+  }
+
+  @Override
+  public int hashCode() {
+    return Objects.hash(time, type);
+  }
+
+  @Override
+  public int compareTo(MetadataTime mtime) {
+    if (this.type.equals(mtime.getType()))
+      return Long.compare(this.time, mtime.getTime());
+    else
+      throw new IllegalArgumentException(
+          "Cannot compare different time types: " + this + " and " + mtime);
+  }
+
+}
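
A small usage sketch of the parse/encode round trip defined above:

```java
import org.apache.accumulo.core.client.admin.TimeType;
import org.apache.accumulo.core.metadata.schema.MetadataTime;

public class MetadataTimeSketch {
  public static void main(String[] args) {
    // The persisted form is a one-char type code followed by the time value.
    MetadataTime t = MetadataTime.parse("M12345678");
    System.out.println(t.getType()); // MILLIS
    System.out.println(t.getTime()); // 12345678
    System.out.println(t.encode());  // M12345678
    System.out.println(new MetadataTime(5, TimeType.LOGICAL).encode()); // L5
  }
}
```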
diff --git a/core/src/main/java/org/apache/accumulo/core/metadata/schema/RootTabletMetadata.java b/core/src/main/java/org/apache/accumulo/core/metadata/schema/RootTabletMetadata.java
new file mode 100644
index 0000000..33790ec
--- /dev/null
+++ b/core/src/main/java/org/apache/accumulo/core/metadata/schema/RootTabletMetadata.java
@@ -0,0 +1,196 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.accumulo.core.metadata.schema;
+
+import static java.nio.charset.StandardCharsets.UTF_8;
+
+import java.util.Arrays;
+import java.util.EnumSet;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.Set;
+import java.util.TreeMap;
+
+import org.apache.accumulo.core.client.admin.TimeType;
+import org.apache.accumulo.core.data.ArrayByteSequence;
+import org.apache.accumulo.core.data.ByteSequence;
+import org.apache.accumulo.core.data.ColumnUpdate;
+import org.apache.accumulo.core.data.Key;
+import org.apache.accumulo.core.data.Mutation;
+import org.apache.accumulo.core.data.Value;
+import org.apache.accumulo.core.metadata.RootTable;
+import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection;
+import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.DataFileColumnFamily;
+import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.ServerColumnFamily;
+import org.apache.accumulo.core.metadata.schema.TabletMetadata.ColumnType;
+import org.apache.hadoop.io.Text;
+
+import com.google.common.base.Preconditions;
+import com.google.gson.Gson;
+import com.google.gson.GsonBuilder;
+
+/**
+ * Serializes the root tablet metadata as Json using Accumulo's standard metadata table schema.
+ */
+public class RootTabletMetadata {
+
+  private static final Gson GSON = new GsonBuilder().setPrettyPrinting().create();
+
+  private static final ByteSequence CURR_LOC_FAM =
+      new ArrayByteSequence(TabletsSection.CurrentLocationColumnFamily.STR_NAME);
+  private static final ByteSequence FUTURE_LOC_FAM =
+      new ArrayByteSequence(TabletsSection.FutureLocationColumnFamily.STR_NAME);
+
+  private TreeMap<Key,Value> entries = new TreeMap<>();
+
+  // This class is used to serialize and deserialize root tablet metadata using GSon. Any changes to
+  // this class must consider persisted data.
+  private static class GSonData {
+    int version = 1;
+
+    // Map<column_family, Map<column_qualifier, value>>
+    Map<String,Map<String,String>> columnValues;
+  }
+
+  /**
+   * Apply a metadata table mutation to update internal json.
+   */
+  public void update(Mutation m) {
+    Preconditions.checkArgument(new Text(m.getRow()).equals(RootTable.EXTENT.getMetadataEntry()));
+
+    m.getUpdates().forEach(cup -> {
+      Preconditions.checkArgument(!cup.hasTimestamp());
+      Preconditions.checkArgument(cup.getColumnVisibility().length == 0);
+    });
+
+    for (ColumnUpdate cup : m.getUpdates()) {
+      Key newKey = new Key(m.getRow(), cup.getColumnFamily(), cup.getColumnQualifier(),
+          cup.getColumnVisibility(), 1, false, false);
+
+      if (cup.isDeleted()) {
+        entries.remove(newKey);
+      } else {
+        entries.put(newKey, new Value(cup.getValue()));
+      }
+    }
+
+    // Ensure there is only ever one location
+    long locsSeen = entries.keySet().stream().map(Key::getColumnFamilyData)
+        .filter(fam -> fam.equals(CURR_LOC_FAM) || fam.equals(FUTURE_LOC_FAM)).count();
+
+    if (locsSeen > 1) {
+      throw new IllegalStateException(
+          "After mutation, root tablet has multiple locations : " + m + " " + entries);
+    }
+
+  }
+
+  /**
+   * Convert json to tablet metadata.
+   */
+  public TabletMetadata convertToTabletMetadata() {
+    return TabletMetadata.convertRow(entries.entrySet().iterator(), EnumSet.allOf(ColumnType.class),
+        false);
+  }
+
+  private static String bs2Str(byte[] bs) {
+    String str = new String(bs, UTF_8);
+
+    // The expectation is that all data stored in the root tablet can be converted to UTF8. This is
+    // a sanity check to ensure the byte sequence can be converted from byte[] to UTF8 to byte[] w/o
+    // data corruption. Not all byte arrays can be converted to UTF8.
+    Preconditions.checkArgument(Arrays.equals(bs, str.getBytes(UTF_8)),
+        "Unsuccessful conversion of %s to utf8", str);
+
+    return str;
+  }
+
+  /**
+   * @return a json representation of this object, use {@link #fromJson(String)} to convert the json
+   *         back to an object.
+   */
+  public String toJson() {
+    GSonData gd = new GSonData();
+    gd.columnValues = new TreeMap<>();
+
+    Set<Entry<Key,Value>> es = entries.entrySet();
+    for (Entry<Key,Value> entry : es) {
+      String fam = bs2Str(entry.getKey().getColumnFamilyData().toArray());
+      String qual = bs2Str(entry.getKey().getColumnQualifierData().toArray());
+      String val = bs2Str(entry.getValue().get());
+
+      gd.columnValues.computeIfAbsent(fam, k -> new TreeMap<>()).put(qual, val);
+    }
+
+    return GSON.toJson(gd);
+  }
+
+  /**
+   * Converts json created by calling {@link #toJson()} back to an object.
+   */
+  public static RootTabletMetadata fromJson(String json) {
+    GSonData gd = GSON.fromJson(json, GSonData.class);
+
+    Preconditions.checkArgument(gd.version == 1);
+
+    String row = RootTable.EXTENT.getMetadataEntry().toString();
+
+    TreeMap<Key,Value> entries = new TreeMap<>();
+
+    gd.columnValues.forEach((fam, qualVals) -> {
+      qualVals.forEach((qual, val) -> {
+        Key k = new Key(row, fam, qual, 1);
+        Value v = new Value(val);
+
+        entries.put(k, v);
+      });
+    });
+
+    RootTabletMetadata rtm = new RootTabletMetadata();
+    rtm.entries = entries;
+
+    return rtm;
+  }
+
+  /**
+   * Converts json created by calling {@link #toJson()} back to an object. Assumes the json is UTF8
+   * encoded.
+   */
+  public static RootTabletMetadata fromJson(byte[] bs) {
+    return fromJson(new String(bs, UTF_8));
+  }
+
+  /**
+   * Generate initial json for the root tablet metadata.
+   */
+  public static byte[] getInitialJson(String dir, String file) {
+    Mutation mutation = RootTable.EXTENT.getPrevRowUpdateMutation();
+    ServerColumnFamily.DIRECTORY_COLUMN.put(mutation, new Value(dir.getBytes(UTF_8)));
+
+    mutation.put(DataFileColumnFamily.STR_NAME, file, new DataFileValue(0, 0).encodeAsValue());
+
+    ServerColumnFamily.TIME_COLUMN.put(mutation,
+        new Value(new MetadataTime(0, TimeType.LOGICAL).encode()));
+
+    RootTabletMetadata rtm = new RootTabletMetadata();
+
+    rtm.update(mutation);
+
+    return rtm.toJson().getBytes(UTF_8);
+  }
+}
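
A sketch of the json round trip, using illustrative (not authoritative) directory and file paths:

```java
import org.apache.accumulo.core.metadata.schema.RootTabletMetadata;

public class RootTabletJsonSketch {
  public static void main(String[] args) {
    // Initial metadata for a root tablet with one directory and one file,
    // serialized with the standard metadata schema as pretty-printed json.
    byte[] json = RootTabletMetadata.getInitialJson("/accumulo/tables/+r/root_tablet",
        "hdfs://nn/accumulo/tables/+r/root_tablet/00000_00000.rf");
    // fromJson is the inverse of toJson; the round trip preserves all entries.
    RootTabletMetadata rtm = RootTabletMetadata.fromJson(json);
    System.out.println(rtm.toJson());
  }
}
```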
diff --git a/core/src/main/java/org/apache/accumulo/core/metadata/schema/SortSkew.java b/core/src/main/java/org/apache/accumulo/core/metadata/schema/SortSkew.java
new file mode 100644
index 0000000..665139c
--- /dev/null
+++ b/core/src/main/java/org/apache/accumulo/core/metadata/schema/SortSkew.java
@@ -0,0 +1,48 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.core.metadata.schema;
+
+import static com.google.common.hash.Hashing.murmur3_32;
+import static java.nio.charset.StandardCharsets.UTF_8;
+
+import com.google.common.base.Strings;
+
+/*
+ * A subprefix used to remove sort skew from some of the generated metadata entries, for example:
+ * file deletes prefixed with ~del. NOTE: This is persisted data, so any change to this processing
+ * should consider any existing data.
+ */
+public class SortSkew {
+
+  // A fixed length for the skew code is necessary to parse the key correctly.
+  // The hex code for an integer will always be <= 8 characters.
+  public static final int SORTSKEW_LENGTH = Integer.BYTES * 2;
+
+  /**
+   * Creates a hex string of deterministic length for the hashcode of the key part, left padded
+   * with zeros if necessary
+   *
+   * @param keypart
+   *          value to be coded
+   * @return coded value of keypart
+   */
+  public static String getCode(String keypart) {
+    int hashCode = murmur3_32().hashString(keypart, UTF_8).asInt();
+    return Strings.padStart(Integer.toHexString(hashCode), SORTSKEW_LENGTH, '0');
+  }
+
+}
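
A quick sketch showing the fixed-width, deterministic code produced above:

```java
import org.apache.accumulo.core.metadata.schema.SortSkew;

public class SortSkewSketch {
  public static void main(String[] args) {
    // Same input always yields the same fixed-width hex code, so encoded rows
    // stay parseable while spreading lexicographically across the section.
    String code = SortSkew.getCode("hdfs://nn/accumulo/tables/1/t-0001/A0000.rf");
    System.out.println(code.length() == SortSkew.SORTSKEW_LENGTH); // true (8 chars)
    System.out.println(code); // deterministic 8-char hex string
  }
}
```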
diff --git a/core/src/main/java/org/apache/accumulo/core/metadata/schema/TabletMetadata.java b/core/src/main/java/org/apache/accumulo/core/metadata/schema/TabletMetadata.java
index 3929425..025d4be 100644
--- a/core/src/main/java/org/apache/accumulo/core/metadata/schema/TabletMetadata.java
+++ b/core/src/main/java/org/apache/accumulo/core/metadata/schema/TabletMetadata.java
@@ -21,7 +21,9 @@
 import static org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.ServerColumnFamily.DIRECTORY_QUAL;
 import static org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.ServerColumnFamily.FLUSH_QUAL;
 import static org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.ServerColumnFamily.TIME_QUAL;
+import static org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.TabletColumnFamily.OLD_PREV_ROW_QUAL;
 import static org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.TabletColumnFamily.PREV_ROW_QUAL;
+import static org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.TabletColumnFamily.SPLIT_RATIO_QUAL;
 
 import java.util.Collection;
 import java.util.EnumSet;
@@ -31,7 +33,6 @@
 import java.util.Map.Entry;
 import java.util.Objects;
 import java.util.OptionalLong;
-import java.util.Set;
 import java.util.SortedMap;
 import java.util.function.Function;
 
@@ -60,9 +61,7 @@
 import com.google.common.annotations.VisibleForTesting;
 import com.google.common.base.Preconditions;
 import com.google.common.collect.ImmutableList;
-import com.google.common.collect.ImmutableList.Builder;
 import com.google.common.collect.ImmutableMap;
-import com.google.common.collect.ImmutableSet;
 import com.google.common.collect.ImmutableSortedMap;
 import com.google.common.collect.Iterators;
 
@@ -71,28 +70,44 @@
   private TableId tableId;
   private Text prevEndRow;
   private boolean sawPrevEndRow = false;
+  private Text oldPrevEndRow;
+  private boolean sawOldPrevEndRow = false;
   private Text endRow;
   private Location location;
   private Map<String,DataFileValue> files;
   private List<String> scans;
-  private Set<String> loadedFiles;
-  private EnumSet<FetchedColumns> fetchedCols;
+  private Map<String,Long> loadedFiles;
+  private EnumSet<ColumnType> fetchedCols;
   private KeyExtent extent;
   private Location last;
   private String dir;
-  private String time;
+  private MetadataTime time;
   private String cloned;
   private SortedMap<Key,Value> keyValues;
   private OptionalLong flush = OptionalLong.empty();
   private List<LogEntry> logs;
   private OptionalLong compact = OptionalLong.empty();
+  private Double splitRatio = null;
 
-  public static enum LocationType {
+  public enum LocationType {
     CURRENT, FUTURE, LAST
   }
 
-  public static enum FetchedColumns {
-    LOCATION, PREV_ROW, FILES, LAST, LOADED, SCANS, DIR, TIME, CLONED, FLUSH_ID, LOGS, COMPACT_ID
+  public enum ColumnType {
+    LOCATION,
+    PREV_ROW,
+    OLD_PREV_ROW,
+    FILES,
+    LAST,
+    LOADED,
+    SCANS,
+    DIR,
+    TIME,
+    CLONED,
+    FLUSH_ID,
+    LOGS,
+    COMPACT_ID,
+    SPLIT_RATIO
   }
 
   public static class Location {
@@ -142,99 +157,120 @@
     return extent;
   }
 
-  private void ensureFetched(FetchedColumns col) {
+  private void ensureFetched(ColumnType col) {
     Preconditions.checkState(fetchedCols.contains(col), "%s was not fetched", col);
   }
 
   public Text getPrevEndRow() {
-    ensureFetched(FetchedColumns.PREV_ROW);
-    if (!sawPrevEndRow)
+    ensureFetched(ColumnType.PREV_ROW);
+    if (!sawPrevEndRow) {
       throw new IllegalStateException(
           "No prev endrow seen.  tableId: " + tableId + " endrow: " + endRow);
+    }
     return prevEndRow;
   }
 
   public boolean sawPrevEndRow() {
-    ensureFetched(FetchedColumns.PREV_ROW);
+    ensureFetched(ColumnType.PREV_ROW);
     return sawPrevEndRow;
   }
 
+  public Text getOldPrevEndRow() {
+    ensureFetched(ColumnType.OLD_PREV_ROW);
+    if (!sawOldPrevEndRow) {
+      throw new IllegalStateException(
+          "No old prev endrow seen.  tableId: " + tableId + " endrow: " + endRow);
+    }
+    return oldPrevEndRow;
+  }
+
+  public boolean sawOldPrevEndRow() {
+    ensureFetched(ColumnType.OLD_PREV_ROW);
+    return sawOldPrevEndRow;
+  }
+
   public Text getEndRow() {
     return endRow;
   }
 
   public Location getLocation() {
-    ensureFetched(FetchedColumns.LOCATION);
+    ensureFetched(ColumnType.LOCATION);
     return location;
   }
 
   public boolean hasCurrent() {
-    ensureFetched(FetchedColumns.LOCATION);
+    ensureFetched(ColumnType.LOCATION);
     return location != null && location.getType() == LocationType.CURRENT;
   }
 
-  public Set<String> getLoaded() {
-    ensureFetched(FetchedColumns.LOADED);
+  public Map<String,Long> getLoaded() {
+    ensureFetched(ColumnType.LOADED);
     return loadedFiles;
   }
 
   public Location getLast() {
-    ensureFetched(FetchedColumns.LAST);
+    ensureFetched(ColumnType.LAST);
     return last;
   }
 
   public Collection<String> getFiles() {
-    ensureFetched(FetchedColumns.FILES);
+    ensureFetched(ColumnType.FILES);
     return files.keySet();
   }
 
   public Map<String,DataFileValue> getFilesMap() {
-    ensureFetched(FetchedColumns.FILES);
+    ensureFetched(ColumnType.FILES);
     return files;
   }
 
   public Collection<LogEntry> getLogs() {
-    ensureFetched(FetchedColumns.LOGS);
+    ensureFetched(ColumnType.LOGS);
     return logs;
   }
 
   public List<String> getScans() {
-    ensureFetched(FetchedColumns.SCANS);
+    ensureFetched(ColumnType.SCANS);
     return scans;
   }
 
   public String getDir() {
-    ensureFetched(FetchedColumns.DIR);
+    ensureFetched(ColumnType.DIR);
     return dir;
   }
 
-  public String getTime() {
-    ensureFetched(FetchedColumns.TIME);
+  public MetadataTime getTime() {
+    ensureFetched(ColumnType.TIME);
     return time;
   }
 
   public String getCloned() {
-    ensureFetched(FetchedColumns.CLONED);
+    ensureFetched(ColumnType.CLONED);
     return cloned;
   }
 
   public OptionalLong getFlushId() {
-    ensureFetched(FetchedColumns.FLUSH_ID);
+    ensureFetched(ColumnType.FLUSH_ID);
     return flush;
   }
 
   public OptionalLong getCompactId() {
-    ensureFetched(FetchedColumns.COMPACT_ID);
+    ensureFetched(ColumnType.COMPACT_ID);
     return compact;
   }
 
+  public Double getSplitRatio() {
+    ensureFetched(ColumnType.SPLIT_RATIO);
+    return splitRatio;
+  }
+
   public SortedMap<Key,Value> getKeyValues() {
     Preconditions.checkState(keyValues != null, "Requested key values when it was not saved");
     return keyValues;
   }
 
-  static TabletMetadata convertRow(Iterator<Entry<Key,Value>> rowIter,
-      EnumSet<FetchedColumns> fetchedColumns, boolean buildKeyValueMap) {
+  @VisibleForTesting
+  public static TabletMetadata convertRow(Iterator<Entry<Key,Value>> rowIter,
+      EnumSet<ColumnType> fetchedColumns, boolean buildKeyValueMap) {
     Objects.requireNonNull(rowIter);
 
     TabletMetadata te = new TabletMetadata();
@@ -243,10 +279,10 @@
       kvBuilder = ImmutableSortedMap.naturalOrder();
     }
 
-    ImmutableMap.Builder<String,DataFileValue> filesBuilder = ImmutableMap.builder();
-    Builder<String> scansBuilder = ImmutableList.builder();
-    Builder<LogEntry> logsBuilder = ImmutableList.builder();
-    final ImmutableSet.Builder<String> loadedFilesBuilder = ImmutableSet.builder();
+    var filesBuilder = ImmutableMap.<String,DataFileValue>builder();
+    var scansBuilder = ImmutableList.<String>builder();
+    var logsBuilder = ImmutableList.<LogEntry>builder();
+    final var loadedFilesBuilder = ImmutableMap.<String,Long>builder();
     ByteSequence row = null;
 
     while (rowIter.hasNext()) {
@@ -272,9 +308,18 @@
 
       switch (fam.toString()) {
         case TabletColumnFamily.STR_NAME:
-          if (PREV_ROW_QUAL.equals(qual)) {
-            te.prevEndRow = KeyExtent.decodePrevEndRow(kv.getValue());
-            te.sawPrevEndRow = true;
+          switch (qual) {
+            case PREV_ROW_QUAL:
+              te.prevEndRow = KeyExtent.decodePrevEndRow(kv.getValue());
+              te.sawPrevEndRow = true;
+              break;
+            case OLD_PREV_ROW_QUAL:
+              te.oldPrevEndRow = KeyExtent.decodePrevEndRow(kv.getValue());
+              te.sawOldPrevEndRow = true;
+              break;
+            case SPLIT_RATIO_QUAL:
+              te.splitRatio = Double.parseDouble(val);
+              break;
           }
           break;
         case ServerColumnFamily.STR_NAME:
@@ -283,7 +328,7 @@
               te.dir = val;
               break;
             case TIME_QUAL:
-              te.time = val;
+              te.time = MetadataTime.parse(val);
               break;
             case FLUSH_QUAL:
               te.flush = OptionalLong.of(Long.parseLong(val));
@@ -297,7 +342,7 @@
           filesBuilder.put(qual, new DataFileValue(val));
           break;
         case BulkFileColumnFamily.STR_NAME:
-          loadedFilesBuilder.add(qual);
+          loadedFilesBuilder.put(qual, BulkFileColumnFamily.getBulkLoadTid(val));
           break;
         case CurrentLocationColumnFamily.STR_NAME:
           te.setLocationOnce(val, qual, LocationType.CURRENT);
@@ -334,13 +379,14 @@
   }
 
   private void setLocationOnce(String val, String qual, LocationType lt) {
-    if (location != null)
+    if (location != null) {
       throw new IllegalStateException("Attempted to set second location for tableId: " + tableId
           + " endrow: " + endRow + " -- " + location + " " + qual + " " + val);
+    }
     location = new Location(val, qual, lt);
   }
 
-  static Iterable<TabletMetadata> convert(Scanner input, EnumSet<FetchedColumns> fetchedColumns,
+  static Iterable<TabletMetadata> convert(Scanner input, EnumSet<ColumnType> fetchedColumns,
       boolean checkConsistency, boolean buildKeyValueMap) {
 
     Range range = input.getRange();
@@ -367,7 +413,7 @@
     te.sawPrevEndRow = true;
     te.prevEndRow = prevEndRow == null ? null : new Text(prevEndRow);
     te.endRow = endRow == null ? null : new Text(endRow);
-    te.fetchedCols = EnumSet.of(FetchedColumns.PREV_ROW);
+    te.fetchedCols = EnumSet.of(ColumnType.PREV_ROW);
     return te;
   }
 }
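The renamed `ColumnType` enum and the `ensureFetched` guard make every getter fail fast when the corresponding column was never requested from the metadata table, instead of silently returning null. A self-contained analog of that guard pattern (the names here are illustrative, not Accumulo API):

```java
import java.util.EnumSet;

// self-contained analog of TabletMetadata's ensureFetched guard; not Accumulo API
class FetchGuardDemo {
  enum ColumnType { PREV_ROW, TIME, FILES }

  private final EnumSet<ColumnType> fetchedCols;

  FetchGuardDemo(EnumSet<ColumnType> fetchedCols) {
    this.fetchedCols = fetchedCols;
  }

  private void ensureFetched(ColumnType col) {
    if (!fetchedCols.contains(col)) {
      throw new IllegalStateException(col + " was not fetched");
    }
  }

  String getTime() {
    ensureFetched(ColumnType.TIME); // fails fast instead of returning a silent null
    return "some-time-value";
  }

  public static void main(String[] args) {
    var demo = new FetchGuardDemo(EnumSet.of(ColumnType.PREV_ROW));
    try {
      demo.getTime();
    } catch (IllegalStateException e) {
      System.out.println(e.getMessage()); // TIME was not fetched
    }
  }
}
```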
diff --git a/core/src/main/java/org/apache/accumulo/core/metadata/schema/TabletsMetadata.java b/core/src/main/java/org/apache/accumulo/core/metadata/schema/TabletsMetadata.java
index 0843f62..1ae3229 100644
--- a/core/src/main/java/org/apache/accumulo/core/metadata/schema/TabletsMetadata.java
+++ b/core/src/main/java/org/apache/accumulo/core/metadata/schema/TabletsMetadata.java
@@ -24,6 +24,7 @@
 import static org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.TabletColumnFamily.PREV_ROW_COLUMN;
 
 import java.util.ArrayList;
+import java.util.Collections;
 import java.util.EnumSet;
 import java.util.Iterator;
 import java.util.List;
@@ -35,11 +36,13 @@
 import org.apache.accumulo.core.client.IsolatedScanner;
 import org.apache.accumulo.core.client.Scanner;
 import org.apache.accumulo.core.client.TableNotFoundException;
+import org.apache.accumulo.core.clientImpl.ClientContext;
 import org.apache.accumulo.core.data.Range;
 import org.apache.accumulo.core.data.TableId;
 import org.apache.accumulo.core.dataImpl.KeyExtent;
 import org.apache.accumulo.core.metadata.MetadataTable;
 import org.apache.accumulo.core.metadata.RootTable;
+import org.apache.accumulo.core.metadata.schema.Ample.DataLevel;
 import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection;
 import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.BulkFileColumnFamily;
 import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.ClonedColumnFamily;
@@ -49,9 +52,10 @@
 import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.LastLocationColumnFamily;
 import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.LogColumnFamily;
 import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.ScanFileColumnFamily;
-import org.apache.accumulo.core.metadata.schema.TabletMetadata.FetchedColumns;
+import org.apache.accumulo.core.metadata.schema.TabletMetadata.ColumnType;
 import org.apache.accumulo.core.security.Authorizations;
 import org.apache.accumulo.core.util.ColumnFQ;
+import org.apache.accumulo.fate.zookeeper.ZooCache;
 import org.apache.hadoop.io.Text;
 
 import com.google.common.base.Preconditions;
@@ -66,9 +70,10 @@
 
     private List<Text> families = new ArrayList<>();
     private List<ColumnFQ> qualifiers = new ArrayList<>();
-    private String table = MetadataTable.NAME;
+    private Ample.DataLevel level;
+    private String table;
     private Range range;
-    private EnumSet<FetchedColumns> fetchedCols = EnumSet.noneOf(FetchedColumns.class);
+    private EnumSet<ColumnType> fetchedCols = EnumSet.noneOf(ColumnType.class);
     private Text endRow;
     private boolean checkConsistency = false;
     private boolean saveKeyValues;
@@ -76,12 +81,28 @@
 
     @Override
     public TabletsMetadata build(AccumuloClient client) {
+      Preconditions.checkState(level == null ^ table == null);
+      if (level == DataLevel.ROOT) {
+        ClientContext ctx = ((ClientContext) client);
+        ZooCache zc = ctx.getZooCache();
+        String zkRoot = ctx.getZooKeeperRoot();
+        return new TabletsMetadata(getRootMetadata(zkRoot, zc));
+      } else {
+        return buildNonRoot(client);
+      }
+    }
+
+    private TabletsMetadata buildNonRoot(AccumuloClient client) {
       try {
-        Scanner scanner = new IsolatedScanner(client.createScanner(table, Authorizations.EMPTY));
+
+        String resolvedTable = table == null ? level.metaTable() : table;
+
+        Scanner scanner =
+            new IsolatedScanner(client.createScanner(resolvedTable, Authorizations.EMPTY));
         scanner.setRange(range);
 
-        if (checkConsistency && !fetchedCols.contains(FetchedColumns.PREV_ROW)) {
-          fetchPrev();
+        if (checkConsistency && !fetchedCols.contains(ColumnType.PREV_ROW)) {
+          fetch(ColumnType.PREV_ROW);
         }
 
         for (Text fam : families) {
@@ -93,7 +114,7 @@
         }
 
         if (families.size() == 0 && qualifiers.size() == 0) {
-          fetchedCols = EnumSet.allOf(FetchedColumns.class);
+          fetchedCols = EnumSet.allOf(ColumnType.class);
         }
 
         Iterable<TabletMetadata> tmi =
@@ -118,98 +139,68 @@
     }
 
     @Override
-    public Options fetchCloned() {
-      fetchedCols.add(FetchedColumns.CLONED);
-      families.add(ClonedColumnFamily.NAME);
-      return this;
-    }
+    public Options fetch(ColumnType... colsToFetch) {
+      Preconditions.checkArgument(colsToFetch.length > 0);
 
-    @Override
-    public Options fetchCompactId() {
-      fetchedCols.add(FetchedColumns.COMPACT_ID);
-      qualifiers.add(COMPACT_COLUMN);
-      return this;
-    }
+      for (ColumnType colToFetch : colsToFetch) {
 
-    @Override
-    public Options fetchDir() {
-      fetchedCols.add(FetchedColumns.DIR);
-      qualifiers.add(DIRECTORY_COLUMN);
-      return this;
-    }
+        fetchedCols.add(colToFetch);
 
-    @Override
-    public Options fetchFiles() {
-      fetchedCols.add(FetchedColumns.FILES);
-      families.add(DataFileColumnFamily.NAME);
-      return this;
-    }
+        switch (colToFetch) {
+          case CLONED:
+            families.add(ClonedColumnFamily.NAME);
+            break;
+          case COMPACT_ID:
+            qualifiers.add(COMPACT_COLUMN);
+            break;
+          case DIR:
+            qualifiers.add(DIRECTORY_COLUMN);
+            break;
+          case FILES:
+            families.add(DataFileColumnFamily.NAME);
+            break;
+          case FLUSH_ID:
+            qualifiers.add(FLUSH_COLUMN);
+            break;
+          case LAST:
+            families.add(LastLocationColumnFamily.NAME);
+            break;
+          case LOADED:
+            families.add(BulkFileColumnFamily.NAME);
+            break;
+          case LOCATION:
+            families.add(CurrentLocationColumnFamily.NAME);
+            families.add(FutureLocationColumnFamily.NAME);
+            break;
+          case LOGS:
+            families.add(LogColumnFamily.NAME);
+            break;
+          case PREV_ROW:
+            qualifiers.add(PREV_ROW_COLUMN);
+            break;
+          case SCANS:
+            families.add(ScanFileColumnFamily.NAME);
+            break;
+          case TIME:
+            qualifiers.add(TIME_COLUMN);
+            break;
+          default:
+            throw new IllegalArgumentException("Unknown col type " + colToFetch);
 
-    @Override
-    public Options fetchFlushId() {
-      fetchedCols.add(FetchedColumns.FLUSH_ID);
-      qualifiers.add(FLUSH_COLUMN);
-      return this;
-    }
+        }
+      }
 
-    @Override
-    public Options fetchLast() {
-      fetchedCols.add(FetchedColumns.LAST);
-      families.add(LastLocationColumnFamily.NAME);
-      return this;
-    }
-
-    @Override
-    public Options fetchLoaded() {
-      fetchedCols.add(FetchedColumns.LOADED);
-      families.add(BulkFileColumnFamily.NAME);
-      return this;
-    }
-
-    @Override
-    public Options fetchLocation() {
-      fetchedCols.add(FetchedColumns.LOCATION);
-      families.add(CurrentLocationColumnFamily.NAME);
-      families.add(FutureLocationColumnFamily.NAME);
-      return this;
-    }
-
-    @Override
-    public Options fetchLogs() {
-      fetchedCols.add(FetchedColumns.LOGS);
-      families.add(LogColumnFamily.NAME);
-      return this;
-    }
-
-    @Override
-    public Options fetchPrev() {
-      fetchedCols.add(FetchedColumns.PREV_ROW);
-      qualifiers.add(PREV_ROW_COLUMN);
-      return this;
-    }
-
-    @Override
-    public Options fetchScans() {
-      fetchedCols.add(FetchedColumns.SCANS);
-      families.add(ScanFileColumnFamily.NAME);
-      return this;
-    }
-
-    @Override
-    public Options fetchTime() {
-      fetchedCols.add(FetchedColumns.TIME);
-      qualifiers.add(TIME_COLUMN);
       return this;
     }
 
     @Override
     public TableRangeOptions forTable(TableId tableId) {
-      Preconditions.checkArgument(!tableId.equals(RootTable.ID),
-          "Getting tablet metadata for " + RootTable.NAME + " not supported at this time.");
-      if (tableId.equals(MetadataTable.ID)) {
-        this.table = RootTable.NAME;
+      if (tableId.equals(RootTable.ID)) {
+        this.level = DataLevel.ROOT;
+      } else if (tableId.equals(MetadataTable.ID)) {
+        this.level = DataLevel.METADATA;
       } else {
-        this.table = MetadataTable.NAME;
+        this.level = DataLevel.USER;
       }
 
       this.tableId = tableId;
@@ -260,29 +251,7 @@
      */
     Options checkConsistency();
 
-    Options fetchCloned();
-
-    Options fetchCompactId();
-
-    Options fetchDir();
-
-    Options fetchFiles();
-
-    Options fetchFlushId();
-
-    Options fetchLast();
-
-    Options fetchLoaded();
-
-    Options fetchLocation();
-
-    Options fetchLogs();
-
-    Options fetchPrev();
-
-    Options fetchScans();
-
-    Options fetchTime();
+    Options fetch(ColumnType... columnsToFetch);
 
     /**
      * Saves the key values seen in the metadata table for each tablet.
@@ -378,10 +347,20 @@
     return new Builder();
   }
 
+  public static TabletMetadata getRootMetadata(String zkRoot, ZooCache zc) {
+    return RootTabletMetadata.fromJson(zc.get(zkRoot + RootTable.ZROOT_TABLET))
+        .convertToTabletMetadata();
+  }
+
   private Scanner scanner;
 
   private Iterable<TabletMetadata> tablets;
 
+  private TabletsMetadata(TabletMetadata tm) {
+    this.scanner = null;
+    this.tablets = Collections.singleton(tm);
+  }
+
   private TabletsMetadata(Scanner scanner, Iterable<TabletMetadata> tmi) {
     this.scanner = scanner;
     this.tablets = tmi;
@@ -389,7 +368,9 @@
 
   @Override
   public void close() {
-    scanner.close();
+    if (scanner != null) {
+      scanner.close();
+    }
   }
 
   @Override
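Replacing the dozen `fetchX()` builder methods with one varargs `fetch(ColumnType...)` makes call sites declarative, and `checkConsistency` can simply add `PREV_ROW` through the same path. A sketch of the new call shape, modeled directly on the `countFiles()` change in Gatherer further down in this patch (method and parameter names here are illustrative):

```java
import static org.apache.accumulo.core.metadata.schema.TabletMetadata.ColumnType.FILES;
import static org.apache.accumulo.core.metadata.schema.TabletMetadata.ColumnType.PREV_ROW;

import org.apache.accumulo.core.client.AccumuloClient;
import org.apache.accumulo.core.data.TableId;
import org.apache.accumulo.core.metadata.schema.TabletsMetadata;
import org.apache.hadoop.io.Text;

class FetchSketch {
  // count the files referenced by a table's tablets in one declarative chain
  static int countFiles(AccumuloClient client, TableId tableId, Text start, Text end) {
    return TabletsMetadata.builder().forTable(tableId).overlapping(start, end)
        .fetch(FILES, PREV_ROW).build(client).stream()
        .mapToInt(tm -> tm.getFiles().size()).sum();
  }
}
```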
diff --git a/core/src/main/java/org/apache/accumulo/core/replication/ReplicationTable.java b/core/src/main/java/org/apache/accumulo/core/replication/ReplicationTable.java
index 91d4240..23fd7c5 100644
--- a/core/src/main/java/org/apache/accumulo/core/replication/ReplicationTable.java
+++ b/core/src/main/java/org/apache/accumulo/core/replication/ReplicationTable.java
@@ -41,8 +41,6 @@
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-import com.google.common.collect.ImmutableMap;
-
 public class ReplicationTable {
   private static final Logger log = LoggerFactory.getLogger(ReplicationTable.class);
 
@@ -56,7 +54,7 @@
   public static final String WORK_LG_NAME = WorkSection.NAME.toString();
   public static final Set<Text> WORK_LG_COLFAMS = Collections.singleton(WorkSection.NAME);
   public static final Map<String,Set<Text>> LOCALITY_GROUPS =
-      ImmutableMap.of(STATUS_LG_NAME, STATUS_LG_COLFAMS, WORK_LG_NAME, WORK_LG_COLFAMS);
+      Map.of(STATUS_LG_NAME, STATUS_LG_COLFAMS, WORK_LG_NAME, WORK_LG_COLFAMS);
 
   public static Scanner getScanner(AccumuloClient client) throws ReplicationTableOfflineException {
     try {
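Swapping Guava's `ImmutableMap.of` for JDK 9's `Map.of` drops a dependency, but the two are not identical. A small demonstration of the `Map.of` contract:

```java
import java.util.Map;

class MapOfDemo {
  public static void main(String[] args) {
    Map<String,Integer> m = Map.of("status", 1, "work", 2);
    System.out.println(m.get("work")); // 2
    // mutation fails: m.put("x", 3) -> UnsupportedOperationException
    // like Guava's ImmutableMap.of, null keys/values and duplicate keys are
    // rejected, but iteration order is unspecified rather than insertion order
  }
}
```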
diff --git a/core/src/main/java/org/apache/accumulo/core/rpc/SslConnectionParams.java b/core/src/main/java/org/apache/accumulo/core/rpc/SslConnectionParams.java
index 5d9ca7e..93700fd 100644
--- a/core/src/main/java/org/apache/accumulo/core/rpc/SslConnectionParams.java
+++ b/core/src/main/java/org/apache/accumulo/core/rpc/SslConnectionParams.java
@@ -23,7 +23,6 @@
 
 import org.apache.accumulo.core.conf.AccumuloConfiguration;
 import org.apache.accumulo.core.conf.Property;
-import org.apache.commons.lang3.StringUtils;
 import org.apache.thrift.transport.TSSLTransportFactory.TSSLTransportParameters;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -80,11 +79,11 @@
 
     String ciphers = conf.get(Property.RPC_SSL_CIPHER_SUITES);
     if (ciphers != null && !ciphers.isEmpty()) {
-      result.cipherSuites = StringUtils.split(ciphers, ',');
+      result.cipherSuites = ciphers.split(",");
     }
 
     String enabledProtocols = conf.get(Property.RPC_SSL_ENABLED_PROTOCOLS);
-    result.serverProtocols = StringUtils.split(enabledProtocols, ',');
+    result.serverProtocols = enabledProtocols.split(",");
 
     result.clientProtocol = conf.get(Property.RPC_SSL_CLIENT_PROTOCOL);
 
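Replacing `StringUtils.split` with `String.split` is not perfectly behavior-preserving: the commons-lang3 variant drops empty tokens and returns null for null input, while `String.split` is regex-based, keeps interior empty tokens, and throws on a null receiver. The swap is safe here only if the configured property values are never null or empty-segmented. A small demonstration of the difference:

```java
class SplitDemo {
  public static void main(String[] args) {
    // String.split keeps interior empty tokens and drops only trailing ones
    String[] a = "TLSv1.2,,TLSv1.3,".split(",");
    System.out.println(a.length); // 3 -> [TLSv1.2, , TLSv1.3]

    // StringUtils.split("TLSv1.2,,TLSv1.3,", ',') from commons-lang3 would
    // instead return [TLSv1.2, TLSv1.3], and it returns null for null input,
    // whereas calling split on a null reference throws NullPointerException
  }
}
```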
diff --git a/core/src/main/java/org/apache/accumulo/core/sample/impl/SamplerFactory.java b/core/src/main/java/org/apache/accumulo/core/sample/impl/SamplerFactory.java
index 0d741a6..42c390d 100644
--- a/core/src/main/java/org/apache/accumulo/core/sample/impl/SamplerFactory.java
+++ b/core/src/main/java/org/apache/accumulo/core/sample/impl/SamplerFactory.java
@@ -40,13 +40,13 @@
       else
         clazz = AccumuloVFSClassLoader.loadClass(config.getClassName(), Sampler.class);
 
-      Sampler sampler = clazz.newInstance();
+      Sampler sampler = clazz.getDeclaredConstructor().newInstance();
 
       sampler.init(config.toSamplerConfiguration());
 
       return sampler;
 
-    } catch (ClassNotFoundException | InstantiationException | IllegalAccessException e) {
+    } catch (ReflectiveOperationException e) {
       throw new RuntimeException(e);
     }
   }
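This `newInstance()` migration recurs throughout the patch. `Class.newInstance()` is deprecated since Java 9 because it rethrows any exception from the constructor unchecked, bypassing compile-time exception checking; `Constructor.newInstance()` wraps such exceptions in `InvocationTargetException`, and all of the reflective failure modes share the `ReflectiveOperationException` supertype, which is why the catch blocks collapse. A minimal illustration:

```java
class ReflectionDemo {
  public static void main(String[] args) {
    try {
      // Constructor.newInstance wraps constructor exceptions in
      // InvocationTargetException instead of rethrowing them unchecked
      Object sb = Class.forName("java.lang.StringBuilder")
          .getDeclaredConstructor().newInstance();
      System.out.println(sb.getClass().getSimpleName()); // StringBuilder
    } catch (ReflectiveOperationException e) {
      // one supertype covers ClassNotFound, NoSuchMethod, Instantiation,
      // IllegalAccess, and InvocationTarget exceptions
      throw new RuntimeException(e);
    }
  }
}
```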
diff --git a/core/src/main/java/org/apache/accumulo/core/singletons/SingletonReservation.java b/core/src/main/java/org/apache/accumulo/core/singletons/SingletonReservation.java
index 67242af..cb542cb 100644
--- a/core/src/main/java/org/apache/accumulo/core/singletons/SingletonReservation.java
+++ b/core/src/main/java/org/apache/accumulo/core/singletons/SingletonReservation.java
@@ -17,6 +17,11 @@
 
 package org.apache.accumulo.core.singletons;
 
+import java.lang.ref.Cleaner.Cleanable;
+import java.util.concurrent.atomic.AtomicBoolean;
+
+import org.apache.accumulo.core.client.AccumuloClient;
+import org.apache.accumulo.core.util.cleaner.CleanerUtil;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -25,41 +30,34 @@
  */
 public class SingletonReservation implements AutoCloseable {
 
-  // volatile so finalize does not need to synchronize to reliably read
-  protected volatile boolean closed = false;
+  private static final Logger log = LoggerFactory.getLogger(SingletonReservation.class);
 
-  private static Logger log = LoggerFactory.getLogger(SingletonReservation.class);
-
-  private final Exception e;
+  // AtomicBoolean so cleaner doesn't need to synchronize to reliably read
+  private final AtomicBoolean closed = new AtomicBoolean(false);
+  private final Cleanable cleanable;
 
   public SingletonReservation() {
-    e = new Exception();
+    cleanable = CleanerUtil.unclosed(this, AccumuloClient.class, closed, log, null);
   }
 
   @Override
-  public synchronized void close() {
-    if (closed) {
-      return;
-    }
-    closed = true;
-    SingletonManager.releaseRerservation();
-  }
-
-  @Override
-  protected void finalize() throws Throwable {
-    try {
-      if (!closed) {
-        log.warn("An Accumulo Client was garbage collected without being closed.", e);
-      }
-    } finally {
-      super.finalize();
+  public void close() {
+    if (closed.compareAndSet(false, true)) {
+      // clean() deregisters the cleanable and runs its action now; the action
+      // checks closed first, which is already true, so it emits no warning
+      cleanable.clean();
+      SingletonManager.releaseRerservation();
     }
   }
 
   private static class NoopSingletonReservation extends SingletonReservation {
     NoopSingletonReservation() {
-      closed = true;
+      super();
+      super.closed.set(true);
+      // deregister the cleaner
+      super.cleanable.clean();
     }
+
   }
 
   private static final SingletonReservation NOOP = new NoopSingletonReservation();
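The same migration pattern, `finalize()` replaced by a `Cleaner` plus an idempotent `close()`, in a self-contained form (class names here are illustrative). Note the deliberate copy of the flag into a local before the lambda: a cleaning action that captures `this` keeps the monitored object reachable and would never run.

```java
import java.lang.ref.Cleaner;
import java.util.concurrent.atomic.AtomicBoolean;

// self-contained analog of the finalize() -> Cleaner migration; not Accumulo API
class Resource implements AutoCloseable {
  private static final Cleaner CLEANER = Cleaner.create();

  private final AtomicBoolean closed = new AtomicBoolean(false);
  private final Cleaner.Cleanable cleanable;

  Resource() {
    AtomicBoolean closedRef = closed; // the action must not capture 'this', or
    cleanable = CLEANER.register(this, () -> { // 'this' never becomes phantom-reachable
      if (!closedRef.get()) {
        System.err.println("Resource was reclaimed without close()");
      }
    });
  }

  @Override
  public void close() {
    if (closed.compareAndSet(false, true)) {
      cleanable.clean(); // runs the action now; it sees closed == true and stays silent
    }
  }

  public static void main(String[] args) {
    try (Resource r = new Resource()) {
      // use r; close() runs on exit, so the cleaner never warns
    }
  }
}
```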
diff --git a/core/src/main/java/org/apache/accumulo/core/spi/scan/HintScanPrioritizer.java b/core/src/main/java/org/apache/accumulo/core/spi/scan/HintScanPrioritizer.java
index aa2dbb7..bd54d79 100644
--- a/core/src/main/java/org/apache/accumulo/core/spi/scan/HintScanPrioritizer.java
+++ b/core/src/main/java/org/apache/accumulo/core/spi/scan/HintScanPrioritizer.java
@@ -25,7 +25,6 @@
 import org.slf4j.LoggerFactory;
 
 import com.google.common.collect.ImmutableMap;
-import com.google.common.collect.ImmutableMap.Builder;
 
 /**
  * When configured for a scan executor, this prioritizer allows scanners to set priorities as
@@ -115,7 +114,7 @@
     int defaultPriority = Integer
         .parseInt(params.getOptions().getOrDefault("default_priority", Integer.MAX_VALUE + ""));
 
-    Builder<String,Integer> tpb = ImmutableMap.builder();
+    var tpb = ImmutableMap.<String,Integer>builder();
 
     params.getOptions().forEach((k, v) -> {
       if (k.startsWith(PRIO_PREFIX)) {
@@ -124,7 +123,7 @@
       }
     });
 
-    ImmutableMap<String,Integer> typePriorities = tpb.build();
+    Map<String,Integer> typePriorities = tpb.build();
 
     HintProblemAction hpa = HintProblemAction.valueOf(params.getOptions()
         .getOrDefault("bad_hint_action", HintProblemAction.LOG.name()).toUpperCase());
diff --git a/core/src/main/java/org/apache/accumulo/core/spi/scan/SimpleScanDispatcher.java b/core/src/main/java/org/apache/accumulo/core/spi/scan/SimpleScanDispatcher.java
index d698344..bd83ec2 100644
--- a/core/src/main/java/org/apache/accumulo/core/spi/scan/SimpleScanDispatcher.java
+++ b/core/src/main/java/org/apache/accumulo/core/spi/scan/SimpleScanDispatcher.java
@@ -22,8 +22,6 @@
 import org.apache.accumulo.core.client.ScannerBase;
 
 import com.google.common.collect.ImmutableMap;
-import com.google.common.collect.ImmutableMap.Builder;
-import com.google.common.collect.ImmutableSet;
 
 /**
  * If no options are given, then this will dispatch to an executor named {@code default}. This
@@ -50,8 +48,7 @@
 
   private final String EXECUTOR_PREFIX = "executor.";
 
-  private final Set<String> VALID_OPTS =
-      ImmutableSet.of("executor", "multi_executor", "single_executor");
+  private final Set<String> VALID_OPTS = Set.of("executor", "multi_executor", "single_executor");
   private String multiExecutor;
   private String singleExecutor;
 
@@ -63,7 +60,7 @@
   public void init(InitParameters params) {
     Map<String,String> options = params.getOptions();
 
-    Builder<String,String> teb = ImmutableMap.builder();
+    var teb = ImmutableMap.<String,String>builder();
 
     options.forEach((k, v) -> {
       if (k.startsWith(EXECUTOR_PREFIX)) {
diff --git a/core/src/main/java/org/apache/accumulo/core/summary/Gatherer.java b/core/src/main/java/org/apache/accumulo/core/summary/Gatherer.java
index b522487..65430b6 100644
--- a/core/src/main/java/org/apache/accumulo/core/summary/Gatherer.java
+++ b/core/src/main/java/org/apache/accumulo/core/summary/Gatherer.java
@@ -18,6 +18,10 @@
 package org.apache.accumulo.core.summary;
 
 import static java.nio.charset.StandardCharsets.UTF_8;
+import static org.apache.accumulo.core.metadata.schema.TabletMetadata.ColumnType.FILES;
+import static org.apache.accumulo.core.metadata.schema.TabletMetadata.ColumnType.LAST;
+import static org.apache.accumulo.core.metadata.schema.TabletMetadata.ColumnType.LOCATION;
+import static org.apache.accumulo.core.metadata.schema.TabletMetadata.ColumnType.PREV_ROW;
 
 import java.util.ArrayList;
 import java.util.Collections;
@@ -163,9 +167,8 @@
   private Map<String,Map<String,List<TRowRange>>>
       getFilesGroupedByLocation(Predicate<String> fileSelector) {
 
-    Iterable<TabletMetadata> tmi =
-        TabletsMetadata.builder().forTable(tableId).overlapping(startRow, endRow).fetchFiles()
-            .fetchLocation().fetchLast().fetchPrev().build(ctx);
+    Iterable<TabletMetadata> tmi = TabletsMetadata.builder().forTable(tableId)
+        .overlapping(startRow, endRow).fetch(FILES, LOCATION, LAST, PREV_ROW).build(ctx);
 
     // get a subset of files
     Map<String,List<TabletMetadata>> files = new HashMap<>();
@@ -527,8 +530,8 @@
 
   private int countFiles() {
     // TODO use a batch scanner + iterator to parallelize counting files
-    return TabletsMetadata.builder().forTable(tableId).overlapping(startRow, endRow).fetchFiles()
-        .fetchPrev().build(ctx).stream().mapToInt(tm -> tm.getFiles().size()).sum();
+    return TabletsMetadata.builder().forTable(tableId).overlapping(startRow, endRow)
+        .fetch(FILES, PREV_ROW).build(ctx).stream().mapToInt(tm -> tm.getFiles().size()).sum();
   }
 
   private class GatherRequest implements Supplier<SummaryCollection> {
diff --git a/core/src/main/java/org/apache/accumulo/core/summary/SummarizerFactory.java b/core/src/main/java/org/apache/accumulo/core/summary/SummarizerFactory.java
index 09cb583..f226ef4 100644
--- a/core/src/main/java/org/apache/accumulo/core/summary/SummarizerFactory.java
+++ b/core/src/main/java/org/apache/accumulo/core/summary/SummarizerFactory.java
@@ -42,23 +42,24 @@
   }
 
   private Summarizer newSummarizer(String classname)
-      throws ClassNotFoundException, IOException, InstantiationException, IllegalAccessException {
+      throws IOException, ReflectiveOperationException {
     if (classloader != null) {
-      return classloader.loadClass(classname).asSubclass(Summarizer.class).newInstance();
+      return classloader.loadClass(classname).asSubclass(Summarizer.class).getDeclaredConstructor()
+          .newInstance();
     } else {
       if (context != null && !context.equals(""))
         return AccumuloVFSClassLoader.getContextManager()
-            .loadClass(context, classname, Summarizer.class).newInstance();
+            .loadClass(context, classname, Summarizer.class).getDeclaredConstructor().newInstance();
       else
-        return AccumuloVFSClassLoader.loadClass(classname, Summarizer.class).newInstance();
+        return AccumuloVFSClassLoader.loadClass(classname, Summarizer.class)
+            .getDeclaredConstructor().newInstance();
     }
   }
 
   public Summarizer getSummarizer(SummarizerConfiguration conf) {
     try {
       return newSummarizer(conf.getClassName());
-    } catch (ClassNotFoundException | InstantiationException | IllegalAccessException
-        | IOException e) {
+    } catch (ReflectiveOperationException | IOException e) {
       throw new RuntimeException(e);
     }
   }
diff --git a/core/src/main/java/org/apache/accumulo/core/summary/SummarySerializer.java b/core/src/main/java/org/apache/accumulo/core/summary/SummarySerializer.java
index 253abd0..c3ad4d1 100644
--- a/core/src/main/java/org/apache/accumulo/core/summary/SummarySerializer.java
+++ b/core/src/main/java/org/apache/accumulo/core/summary/SummarySerializer.java
@@ -536,7 +536,7 @@
 
   private static Map<String,Long> readSummary(DataInputStream in, String[] symbols)
       throws IOException {
-    com.google.common.collect.ImmutableMap.Builder<String,Long> imb = ImmutableMap.builder();
+    var imb = ImmutableMap.<String,Long>builder();
     int numEntries = WritableUtils.readVInt(in);
 
     for (int i = 0; i < numEntries; i++) {
diff --git a/core/src/main/java/org/apache/accumulo/core/util/CreateToken.java b/core/src/main/java/org/apache/accumulo/core/util/CreateToken.java
index 27c4b70..5fc5cf0 100644
--- a/core/src/main/java/org/apache/accumulo/core/util/CreateToken.java
+++ b/core/src/main/java/org/apache/accumulo/core/util/CreateToken.java
@@ -18,7 +18,6 @@
 
 import java.io.IOException;
 
-import org.apache.accumulo.core.cli.ClientOpts.Password;
 import org.apache.accumulo.core.cli.ClientOpts.PasswordConverter;
 import org.apache.accumulo.core.cli.Help;
 import org.apache.accumulo.core.client.security.tokens.AuthenticationToken;
@@ -50,11 +49,11 @@
 
     @Parameter(names = "-p", converter = PasswordConverter.class,
         description = "Connection password")
-    public Password password = null;
+    public String password = null;
 
     @Parameter(names = "--password", converter = PasswordConverter.class,
         description = "Enter the connection password", password = true)
-    public Password securePassword = null;
+    public String securePassword = null;
 
     @Parameter(names = {"-tc", "--tokenClass"},
         description = "The class of the authentication token")
@@ -80,7 +79,7 @@
     Opts opts = new Opts();
     opts.parseArgs("accumulo create-token", args);
 
-    Password pass = opts.password;
+    String pass = opts.password;
     if (pass == null && opts.securePassword != null) {
       pass = opts.securePassword;
     }
@@ -91,13 +90,13 @@
         principal = getConsoleReader().readLine("Username (aka principal): ");
       }
 
-      AuthenticationToken token =
-          Class.forName(opts.tokenClassName).asSubclass(AuthenticationToken.class).newInstance();
+      AuthenticationToken token = Class.forName(opts.tokenClassName)
+          .asSubclass(AuthenticationToken.class).getDeclaredConstructor().newInstance();
       Properties props = new Properties();
       for (TokenProperty tp : token.getProperties()) {
         String input;
         if (pass != null && tp.getKey().equals("password")) {
-          input = pass.toString();
+          input = pass;
         } else {
           if (tp.getMask()) {
             input = getConsoleReader().readLine(tp.getDescription() + ": ", '*');
@@ -111,8 +110,7 @@
       System.out.println("auth.type = " + opts.tokenClassName);
       System.out.println("auth.principal = " + principal);
       System.out.println("auth.token = " + ClientProperty.encodeToken(token));
-    } catch (IOException | InstantiationException | IllegalAccessException
-        | ClassNotFoundException e) {
+    } catch (IOException | ReflectiveOperationException e) {
       throw new RuntimeException(e);
     }
   }
diff --git a/core/src/main/java/org/apache/accumulo/core/util/LocalityGroupUtil.java b/core/src/main/java/org/apache/accumulo/core/util/LocalityGroupUtil.java
index 2dd314d..e9d5abd 100644
--- a/core/src/main/java/org/apache/accumulo/core/util/LocalityGroupUtil.java
+++ b/core/src/main/java/org/apache/accumulo/core/util/LocalityGroupUtil.java
@@ -50,14 +50,13 @@
 
 import com.google.common.base.Joiner;
 import com.google.common.collect.ImmutableSet;
-import com.google.common.collect.ImmutableSet.Builder;
 
 public class LocalityGroupUtil {
 
   private static final Logger log = LoggerFactory.getLogger(LocalityGroupUtil.class);
 
   // using an ImmutableSet here for more efficient comparisons in LocalityGroupIterator
-  public static final ImmutableSet<ByteSequence> EMPTY_CF_SET = ImmutableSet.of();
+  public static final Set<ByteSequence> EMPTY_CF_SET = Set.of();
 
   /**
    * Create a set of families to be passed into the SortedKeyValueIterator seek call from a supplied
@@ -68,10 +67,11 @@
    *          The set of columns
    * @return An immutable set of columns
    */
-  public static ImmutableSet<ByteSequence> families(Collection<Column> columns) {
-    if (columns.size() == 0)
+  public static Set<ByteSequence> families(Collection<Column> columns) {
+    if (columns.size() == 0) {
       return EMPTY_CF_SET;
-    Builder<ByteSequence> builder = ImmutableSet.builder();
+    }
+    var builder = ImmutableSet.<ByteSequence>builder();
     columns.forEach(c -> builder.add(new ArrayByteSequence(c.getColumnFamily())));
     return builder.build();
   }
@@ -113,8 +113,9 @@
     Map<String,Set<ByteSequence>> result = new HashMap<>();
     String[] groups = acuconf.get(Property.TABLE_LOCALITY_GROUPS).split(",");
     for (String group : groups) {
-      if (group.length() > 0)
+      if (group.length() > 0) {
         result.put(group, new HashSet<>());
+      }
     }
     HashSet<ByteSequence> all = new HashSet<>();
     for (Entry<String,String> entry : acuconf) {
@@ -233,12 +234,13 @@
 
     for (int i = 0; i < len; i++) {
       int c = 0xff & ba[i];
-      if (c == '\\')
+      if (c == '\\') {
         sb.append("\\\\");
-      else if (c >= 32 && c <= 126 && c != ',')
+      } else if (c >= 32 && c <= 126 && c != ',') {
         sb.append((char) c);
-      else
+      } else {
         sb.append("\\x").append(String.format("%02X", c));
+      }
     }
 
     return sb.toString();
@@ -330,16 +332,19 @@
           }
 
           if (lgcount == 1) {
-            for (int i = 0; i < parts.length; i++)
+            for (int i = 0; i < parts.length; i++) {
               if (parts.get(i) != null) {
                 partitionedMutations.get(i).add(mutation);
                 break;
               }
+            }
           } else {
-            for (int i = 0; i < parts.length; i++)
-              if (parts.get(i) != null)
+            for (int i = 0; i < parts.length; i++) {
+              if (parts.get(i) != null) {
                 partitionedMutations.get(i)
                     .add(new PartitionedMutation(mutation.getRow(), parts.get(i)));
+              }
+            }
           }
         }
       }
@@ -348,8 +353,9 @@
     private Integer getLgid(MutableByteSequence mbs, ColumnUpdate cu) {
       mbs.setArray(cu.getColumnFamily(), 0, cu.getColumnFamily().length);
       Integer lgid = colfamToLgidMap.get(mbs);
-      if (lgid == null)
+      if (lgid == null) {
         lgid = groups.length;
+      }
       return lgid;
     }
   }
diff --git a/core/src/main/java/org/apache/accumulo/core/util/Merge.java b/core/src/main/java/org/apache/accumulo/core/util/Merge.java
index 26e927d..0e81ba9 100644
--- a/core/src/main/java/org/apache/accumulo/core/util/Merge.java
+++ b/core/src/main/java/org/apache/accumulo/core/util/Merge.java
@@ -16,6 +16,9 @@
  */
 package org.apache.accumulo.core.util;
 
+import static org.apache.accumulo.core.metadata.schema.TabletMetadata.ColumnType.FILES;
+import static org.apache.accumulo.core.metadata.schema.TabletMetadata.ColumnType.PREV_ROW;
+
 import java.util.ArrayList;
 import java.util.Iterator;
 import java.util.List;
@@ -231,7 +234,7 @@
       ClientContext context = (ClientContext) client;
       tableId = Tables.getTableId(context, tablename);
       tablets = TabletsMetadata.builder().scanMetadataTable()
-          .overRange(new KeyExtent(tableId, end, start).toMetadataRange()).fetchFiles().fetchPrev()
+          .overRange(new KeyExtent(tableId, end, start).toMetadataRange()).fetch(FILES, PREV_ROW)
           .build(context);
     } catch (Exception e) {
       throw new MergeException(e);
diff --git a/core/src/main/java/org/apache/accumulo/core/util/SimpleThreadPool.java b/core/src/main/java/org/apache/accumulo/core/util/SimpleThreadPool.java
index 91fedf0..39f8f94 100644
--- a/core/src/main/java/org/apache/accumulo/core/util/SimpleThreadPool.java
+++ b/core/src/main/java/org/apache/accumulo/core/util/SimpleThreadPool.java
@@ -37,4 +37,25 @@
     allowCoreThreadTimeOut(true);
   }
 
+  /**
+   * Wrap this with a trivial object whose {@link AutoCloseable#close()} method calls
+   * {@link #shutdownNow()}.
+   */
+  public CloseableSimpleThreadPool asCloseable() {
+    return new CloseableSimpleThreadPool(this);
+  }
+
+  public static class CloseableSimpleThreadPool implements AutoCloseable {
+    private final SimpleThreadPool stp;
+
+    public CloseableSimpleThreadPool(SimpleThreadPool simpleThreadPool) {
+      this.stp = simpleThreadPool;
+    }
+
+    @Override
+    public void close() {
+      stp.shutdownNow();
+    }
+  }
+
 }
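The `asCloseable()` wrapper lets a pool participate in try-with-resources without making the pool itself `AutoCloseable`. A hedged usage sketch, assuming `SimpleThreadPool`'s `(coreThreads, name)` constructor:

```java
import org.apache.accumulo.core.util.SimpleThreadPool;

class PoolUsageSketch {
  static void runOneTask() {
    // assumes the (coreThreads, name) constructor; sketch only
    SimpleThreadPool pool = new SimpleThreadPool(4, "bulk-import");
    try (var closer = pool.asCloseable()) {
      pool.submit(() -> System.out.println("task ran"));
    } // close() calls shutdownNow(), interrupting workers and discarding queued work
  }
}
```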
diff --git a/core/src/main/java/org/apache/accumulo/core/util/cleaner/CleanerUtil.java b/core/src/main/java/org/apache/accumulo/core/util/cleaner/CleanerUtil.java
new file mode 100644
index 0000000..db1602e
--- /dev/null
+++ b/core/src/main/java/org/apache/accumulo/core/util/cleaner/CleanerUtil.java
@@ -0,0 +1,129 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.core.util.cleaner;
+
+import static java.util.Objects.requireNonNull;
+
+import java.lang.ref.Cleaner;
+import java.lang.ref.Cleaner.Cleanable;
+import java.util.concurrent.atomic.AtomicBoolean;
+
+import org.apache.accumulo.core.client.AccumuloClient;
+import org.apache.accumulo.core.client.BatchWriter;
+import org.apache.accumulo.core.client.MutationsRejectedException;
+import org.apache.accumulo.fate.zookeeper.ZooCache;
+import org.slf4j.Logger;
+
+/**
+ * This class collects all the cleaner actions executed in various parts of the code.
+ *
+ * <p>
+ * These actions replace the use of finalizers, which are deprecated in Java 9 and later, and should
+ * be avoided. These actions are triggered by their respective objects when those objects become
+ * phantom reachable.
+ *
+ * <p>
+ * In the "unclosed*" methods below, the object should have been closed (implements AutoCloseable).
+ * We could possibly consolidate these into a single method which only warns, and doesn't try to
+ * clean up. We could also delete them entirely, since it is the caller's responsibility to close
+ * AutoCloseable resources, not the object's own responsibility to detect that it wasn't closed.
+ */
+public class CleanerUtil {
+
+  public static final Cleaner CLEANER = Cleaner.create();
+
+  /**
+   * Register an action to warn about caller failing to close an {@link AutoCloseable} object.
+   *
+   * <p>
+   * This task will register a generic action to:
+   * <ol>
+   * <li>check that the monitored object wasn't closed,
+   * <li>log a warning that the monitored object was not closed,
+   * <li>attempt to close a resource within the object, and
+   * <li>log an error if the resource cannot be closed for any reason
+   * </ol>
+   *
+   * @param obj
+   *          the object to monitor for becoming phantom-reachable without having been closed
+   * @param objClass
+   *          the class whose simple name will be used in the log message for <code>obj</code>
+   *          (usually an interface name, rather than the actual impl name of the object)
+   * @param closed
+   *          a flag to check whether <code>obj</code> has already been closed
+   * @param log
+   *          the logger to use when emitting error/warn messages
+   * @param closeable
+   *          the resource within <code>obj</code> to close when <code>obj</code> is cleaned; must
+   *          not contain a reference to <code>obj</code> itself, or <code>obj</code> won't become
+   *          phantom-reachable and will never be cleaned
+   * @return the registered {@link Cleanable} from {@link Cleaner#register(Object, Runnable)}
+   */
+  public static Cleanable unclosed(AutoCloseable obj, Class<?> objClass, AtomicBoolean closed,
+      Logger log, AutoCloseable closeable) {
+    String className = requireNonNull(objClass).getSimpleName();
+    requireNonNull(closed);
+    requireNonNull(log);
+    String closeableClassName = closeable == null ? null : closeable.getClass().getSimpleName();
+
+    // capture the stack trace during setup for logging later, so user can find unclosed object
+    var stackTrace = new Exception();
+
+    // register the action to run when obj becomes phantom-reachable or clean is explicitly called
+    return CLEANER.register(obj, () -> {
+      if (closed.get()) {
+        // already closed; nothing to do
+        return;
+      }
+      log.warn("{} found unreferenced without calling close()", className, stackTrace);
+      if (closeable != null) {
+        try {
+          closeable.close();
+        } catch (Exception e1) {
+          log.error("{} internal error; exception closing {}", className, closeableClassName, e1);
+        }
+      }
+    });
+  }
+
+  // this is done for the BatchWriterIterator test code; I don't trust that pattern, but
+  // registering a cleaner is something any user is probably going to have to do to clean up
+  // resources used in an iterator, until iterators properly implement their own close()
+  public static Cleanable batchWriterAndClientCloser(Object o, Logger log, BatchWriter bw,
+      AccumuloClient client) {
+    requireNonNull(log);
+    requireNonNull(bw);
+    requireNonNull(client);
+    return CLEANER.register(o, () -> {
+      try {
+        bw.close();
+      } catch (MutationsRejectedException e) {
+        log.error("Failed to close BatchWriter; some mutations may not be applied", e);
+      } finally {
+        client.close();
+      }
+    });
+  }
+
+  // this is dubious; MetadataConstraints should probably use the ZooCache provided by context
+  // can be done in a follow-on action; for now, this merely replaces the previous finalizer
+  public static Cleanable zooCacheClearer(Object o, ZooCache zc) {
+    requireNonNull(zc);
+    return CLEANER.register(o, zc::clear);
+  }
+
+}
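A hypothetical resource class mirroring how `SingletonReservation` above wires itself to `CleanerUtil.unclosed()`; `TrackedResource` is illustrative, but the `unclosed` signature is the one added in this patch:

```java
import java.lang.ref.Cleaner.Cleanable;
import java.util.concurrent.atomic.AtomicBoolean;

import org.apache.accumulo.core.util.cleaner.CleanerUtil;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// hypothetical resource class; mirrors SingletonReservation's use of unclosed()
class TrackedResource implements AutoCloseable {
  private static final Logger log = LoggerFactory.getLogger(TrackedResource.class);

  private final AtomicBoolean closed = new AtomicBoolean(false);
  private final Cleanable cleanable;

  TrackedResource() {
    // warns with the construction-time stack trace if close() is never called
    cleanable = CleanerUtil.unclosed(this, TrackedResource.class, closed, log, null);
  }

  @Override
  public void close() {
    if (closed.compareAndSet(false, true)) {
      cleanable.clean(); // the action sees closed == true, so it only deregisters
    }
  }
}
```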
diff --git a/core/src/main/java/org/apache/accumulo/core/util/format/FormatterFactory.java b/core/src/main/java/org/apache/accumulo/core/util/format/FormatterFactory.java
index fa1542d..184298b 100644
--- a/core/src/main/java/org/apache/accumulo/core/util/format/FormatterFactory.java
+++ b/core/src/main/java/org/apache/accumulo/core/util/format/FormatterFactory.java
@@ -30,7 +30,7 @@
       Iterable<Entry<Key,Value>> scanner, FormatterConfig config) {
     Formatter formatter = null;
     try {
-      formatter = formatterClass.newInstance();
+      formatter = formatterClass.getDeclaredConstructor().newInstance();
     } catch (Exception e) {
       log.warn("Unable to instantiate formatter. Using default formatter.", e);
       formatter = new DefaultFormatter();
diff --git a/core/src/main/java/org/apache/accumulo/core/util/ratelimit/SharedRateLimiterFactory.java b/core/src/main/java/org/apache/accumulo/core/util/ratelimit/SharedRateLimiterFactory.java
index 4fe8ee9..b82d551 100644
--- a/core/src/main/java/org/apache/accumulo/core/util/ratelimit/SharedRateLimiterFactory.java
+++ b/core/src/main/java/org/apache/accumulo/core/util/ratelimit/SharedRateLimiterFactory.java
@@ -24,8 +24,6 @@
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-import com.google.common.collect.ImmutableMap;
-
 /**
  * Provides the ability to retrieve a {@link RateLimiter} keyed to a specific string, which will
  * dynamically update its rate according to a specified callback function.
@@ -108,7 +106,7 @@
   protected void update() {
     Map<String,SharedRateLimiter> limitersCopy;
     synchronized (activeLimiters) {
-      limitersCopy = ImmutableMap.copyOf(activeLimiters);
+      limitersCopy = Map.copyOf(activeLimiters);
     }
     for (Map.Entry<String,SharedRateLimiter> entry : limitersCopy.entrySet()) {
       try {
@@ -126,7 +124,7 @@
   protected void report() {
     Map<String,SharedRateLimiter> limitersCopy;
     synchronized (activeLimiters) {
-      limitersCopy = ImmutableMap.copyOf(activeLimiters);
+      limitersCopy = Map.copyOf(activeLimiters);
     }
     for (Map.Entry<String,SharedRateLimiter> entry : limitersCopy.entrySet()) {
       try {
@@ -170,8 +168,9 @@
     public void report() {
       if (log.isDebugEnabled()) {
         long duration = System.currentTimeMillis() - lastUpdate;
-        if (duration == 0)
+        if (duration == 0) {
           return;
+        }
         lastUpdate = System.currentTimeMillis();
 
         long sum = permitsAcquired;
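`Map.copyOf` gives the same safe snapshot inside the synchronized block that `ImmutableMap.copyOf` did: later changes to the live map cannot leak into iteration. A small demonstration of the snapshot semantics:

```java
import java.util.HashMap;
import java.util.Map;

class CopyOfDemo {
  public static void main(String[] args) {
    var live = new HashMap<String,Integer>();
    live.put("limiter-a", 100);
    Map<String,Integer> snapshot = Map.copyOf(live); // immutable snapshot
    live.put("limiter-b", 200);
    System.out.println(snapshot.size()); // 1 -- later mutations are not visible
  }
}
```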
diff --git a/core/src/main/java/org/apache/accumulo/fate/zookeeper/ZooCache.java b/core/src/main/java/org/apache/accumulo/fate/zookeeper/ZooCache.java
index 7fff9ec..4f521d8 100644
--- a/core/src/main/java/org/apache/accumulo/fate/zookeeper/ZooCache.java
+++ b/core/src/main/java/org/apache/accumulo/fate/zookeeper/ZooCache.java
@@ -40,7 +40,6 @@
 
 import com.google.common.annotations.VisibleForTesting;
 import com.google.common.base.Preconditions;
-import com.google.common.collect.ImmutableList;
 
 /**
  * A cache for values stored in ZooKeeper. Values are kept up to date as they change.
@@ -165,15 +164,17 @@
         case None:
           switch (event.getState()) {
             case Disconnected:
-              if (log.isTraceEnabled())
+              if (log.isTraceEnabled()) {
                 log.trace("Zoo keeper connection disconnected, clearing cache");
+              }
               clear();
               break;
             case SyncConnected:
               break;
             case Expired:
-              if (log.isTraceEnabled())
+              if (log.isTraceEnabled()) {
                 log.trace("Zoo keeper connection expired, clearing cache");
+              }
               clear();
               break;
             default:
@@ -311,7 +312,7 @@
   public List<String> getChildren(final String zPath) {
     Preconditions.checkState(!closed);
 
-    ZooRunnable<List<String>> zr = new ZooRunnable<List<String>>() {
+    ZooRunnable<List<String>> zr = new ZooRunnable<>() {
 
       @Override
       public List<String> run() throws KeeperException, InterruptedException {
@@ -332,7 +333,7 @@
 
           List<String> children = zooKeeper.getChildren(zPath, watcher);
           if (children != null) {
-            children = ImmutableList.copyOf(children);
+            children = List.copyOf(children);
           }
           childrenCache.put(zPath, children);
           immutableCache = new ImmutableCacheCopies(++updateCount, immutableCache, childrenCache);
@@ -376,7 +377,7 @@
    */
   public byte[] get(final String zPath, final ZcStat status) {
     Preconditions.checkState(!closed);
-    ZooRunnable<byte[]> zr = new ZooRunnable<byte[]>() {
+    ZooRunnable<byte[]> zr = new ZooRunnable<>() {
 
       @Override
       public byte[] run() throws KeeperException, InterruptedException {
diff --git a/core/src/main/java/org/apache/accumulo/fate/zookeeper/ZooUtil.java b/core/src/main/java/org/apache/accumulo/fate/zookeeper/ZooUtil.java
index 8d898c8..e3cc8d7 100644
--- a/core/src/main/java/org/apache/accumulo/fate/zookeeper/ZooUtil.java
+++ b/core/src/main/java/org/apache/accumulo/fate/zookeeper/ZooUtil.java
@@ -16,6 +16,7 @@
  */
 package org.apache.accumulo.fate.zookeeper;
 
+import static java.nio.charset.StandardCharsets.UTF_8;
 import static java.util.Objects.requireNonNull;
 import static java.util.concurrent.TimeUnit.MILLISECONDS;
 
@@ -31,6 +32,7 @@
 
 import org.apache.accumulo.core.Constants;
 import org.apache.accumulo.core.conf.AccumuloConfiguration;
+import org.apache.accumulo.core.util.OpTimer;
 import org.apache.accumulo.core.volume.VolumeConfiguration;
 import org.apache.accumulo.fate.util.Retry;
 import org.apache.accumulo.fate.util.Retry.RetryFactory;
@@ -614,4 +616,51 @@
     }
   }
 
+  public static String getInstanceID(ZooCache zooCache, String instanceName) {
+    requireNonNull(zooCache, "zooCache cannot be null");
+    requireNonNull(instanceName, "instanceName cannot be null");
+    String instanceNamePath = Constants.ZROOT + Constants.ZINSTANCES + "/" + instanceName;
+    byte[] data = zooCache.get(instanceNamePath);
+    if (data == null) {
+      throw new RuntimeException("Instance name " + instanceName + " does not exist in zookeeper. "
+          + "Run \"accumulo org.apache.accumulo.server.util.ListInstances\" to see a list.");
+    }
+    return new String(data, UTF_8);
+  }
+
+  public static void verifyInstanceId(ZooCache zooCache, String instanceId, String instanceName) {
+    requireNonNull(zooCache, "zooCache cannot be null");
+    requireNonNull(instanceId, "instanceId cannot be null");
+    if (zooCache.get(Constants.ZROOT + "/" + instanceId) == null) {
+      throw new RuntimeException("Instance id " + instanceId
+          + (instanceName == null ? "" : " pointed to by the name " + instanceName)
+          + " does not exist in zookeeper");
+    }
+  }
+
+  public static List<String> getMasterLocations(ZooCache zooCache, String instanceId) {
+    String masterLocPath = ZooUtil.getRoot(instanceId) + Constants.ZMASTER_LOCK;
+
+    OpTimer timer = null;
+
+    if (log.isTraceEnabled()) {
+      log.trace("tid={} Looking up master location in zookeeper.", Thread.currentThread().getId());
+      timer = new OpTimer().start();
+    }
+
+    byte[] loc = ZooUtil.getLockData(zooCache, masterLocPath);
+
+    if (timer != null) {
+      timer.stop();
+      log.trace("tid={} Found master at {} in {}", Thread.currentThread().getId(),
+          (loc == null ? "null" : new String(loc, UTF_8)),
+          String.format("%.3f secs", timer.scale(TimeUnit.SECONDS)));
+    }
+
+    if (loc == null) {
+      return Collections.emptyList();
+    }
+
+    return Collections.singletonList(new String(loc, UTF_8));
+  }
 }
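A sketch of resolving and checking an instance by name with the two helpers added above; the wrapper class is hypothetical, but both calls match the signatures introduced in this patch:

```java
import org.apache.accumulo.fate.zookeeper.ZooCache;
import org.apache.accumulo.fate.zookeeper.ZooUtil;

// hypothetical wrapper; resolves an instance name to its id, then verifies it
class InstanceLookupSketch {
  static String resolve(ZooCache zooCache, String instanceName) {
    String instanceId = ZooUtil.getInstanceID(zooCache, instanceName);
    ZooUtil.verifyInstanceId(zooCache, instanceId, instanceName);
    return instanceId;
  }
}
```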
diff --git a/shell/src/test/java/org/apache/accumulo/shell/PasswordConverterTest.java b/core/src/test/java/org/apache/accumulo/core/cli/PasswordConverterTest.java
similarity index 88%
rename from shell/src/test/java/org/apache/accumulo/shell/PasswordConverterTest.java
rename to core/src/test/java/org/apache/accumulo/core/cli/PasswordConverterTest.java
index 58f065d..8a3d1c1 100644
--- a/shell/src/test/java/org/apache/accumulo/shell/PasswordConverterTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/cli/PasswordConverterTest.java
@@ -14,7 +14,7 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.accumulo.shell;
+package org.apache.accumulo.core.cli;
 
 import static java.nio.charset.StandardCharsets.UTF_8;
 import static org.junit.Assert.assertEquals;
@@ -29,7 +29,6 @@
 import java.security.SecureRandom;
 import java.util.Scanner;
 
-import org.apache.accumulo.shell.ShellOptionsJC.PasswordConverter;
 import org.junit.After;
 import org.junit.Before;
 import org.junit.BeforeClass;
@@ -42,7 +41,7 @@
 public class PasswordConverterTest {
 
   private class Password {
-    @Parameter(names = "--password", converter = PasswordConverter.class)
+    @Parameter(names = "--password", converter = ClientOpts.PasswordConverter.class)
     String password;
   }
 
@@ -91,9 +90,9 @@
   }
 
   @Test
-  public void testFile() throws FileNotFoundException {
+  public void testFile() throws IOException {
     argv[1] = "file:pom.xml";
-    Scanner scan = new Scanner(new File("pom.xml"), UTF_8.name());
+    Scanner scan = new Scanner(new File("pom.xml"), UTF_8);
     String expected = scan.nextLine();
     scan.close();
     new JCommander(password).parse(argv);
@@ -112,4 +111,11 @@
     new JCommander(password).parse(argv);
     assertEquals("stdin", password.password);
   }
+
+  @Test
+  public void testPlainText() {
+    argv[1] = "passwordString";
+    new JCommander(password).parse(argv);
+    assertEquals("passwordString", password.password);
+  }
 }
diff --git a/core/src/test/java/org/apache/accumulo/core/cli/TestClientOpts.java b/core/src/test/java/org/apache/accumulo/core/cli/TestClientOpts.java
index 90e4ff5..39d6537 100644
--- a/core/src/test/java/org/apache/accumulo/core/cli/TestClientOpts.java
+++ b/core/src/test/java/org/apache/accumulo/core/cli/TestClientOpts.java
@@ -48,4 +48,18 @@
     assertTrue(opts.getToken() instanceof PasswordToken);
     assertEquals("myinst", props.getProperty("instance.name"));
   }
+
+  @Test
+  public void testPassword() {
+    ClientOpts opts = new ClientOpts();
+    String[] args =
+        new String[] {"--password", "mypass", "-u", "userabc", "-o", "instance.name=myinst", "-o",
+            "instance.zookeepers=zoo1,zoo2", "-o", "auth.principal=user123"};
+    opts.parseArgs("test", args);
+    Properties props = opts.getClientProps();
+    assertEquals("user123", ClientProperty.AUTH_PRINCIPAL.getValue(props));
+    assertTrue(opts.getToken() instanceof PasswordToken);
+    assertEquals(new PasswordToken("mypass"), opts.getToken());
+    assertEquals("myinst", props.getProperty("instance.name"));
+  }
 }
diff --git a/core/src/test/java/org/apache/accumulo/core/client/rfile/RFileClientTest.java b/core/src/test/java/org/apache/accumulo/core/client/rfile/RFileClientTest.java
index 8f4b428..dec5a2c 100644
--- a/core/src/test/java/org/apache/accumulo/core/client/rfile/RFileClientTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/client/rfile/RFileClientTest.java
@@ -68,8 +68,6 @@
 import org.apache.hadoop.io.Text;
 import org.junit.Test;
 
-import com.google.common.collect.ImmutableMap;
-
 import edu.umd.cs.findbugs.annotations.SuppressFBWarnings;
 
 public class RFileClientTest {
@@ -337,19 +335,19 @@
 
     Scanner scanner = RFile.newScanner().from(testFile).withFileSystem(localFs)
         .withAuthorizations(new Authorizations("A")).build();
-    assertEquals(ImmutableMap.of(k2, v2, k3, v3), toMap(scanner));
+    assertEquals(Map.of(k2, v2, k3, v3), toMap(scanner));
     assertEquals(new Authorizations("A"), scanner.getAuthorizations());
     scanner.close();
 
     scanner = RFile.newScanner().from(testFile).withFileSystem(localFs)
         .withAuthorizations(new Authorizations("A", "B")).build();
-    assertEquals(ImmutableMap.of(k1, v1, k2, v2, k3, v3), toMap(scanner));
+    assertEquals(Map.of(k1, v1, k2, v2, k3, v3), toMap(scanner));
     assertEquals(new Authorizations("A", "B"), scanner.getAuthorizations());
     scanner.close();
 
     scanner = RFile.newScanner().from(testFile).withFileSystem(localFs)
         .withAuthorizations(new Authorizations("B")).build();
-    assertEquals(ImmutableMap.of(k3, v3), toMap(scanner));
+    assertEquals(Map.of(k3, v3), toMap(scanner));
     assertEquals(new Authorizations("B"), scanner.getAuthorizations());
     scanner.close();
   }
@@ -380,7 +378,7 @@
 
     scanner =
         RFile.newScanner().from(testFile).withFileSystem(localFs).withoutSystemIterators().build();
-    assertEquals(ImmutableMap.of(k2, v2, k1, v1), toMap(scanner));
+    assertEquals(Map.of(k2, v2, k1, v1), toMap(scanner));
     scanner.setRange(new Range("r2"));
     assertFalse(scanner.iterator().hasNext());
     scanner.close();
@@ -442,11 +440,11 @@
     // pass in table config that has versioning iterator configured
     Scanner scanner = RFile.newScanner().from(testFile).withFileSystem(localFs)
         .withTableProperties(ntc.getProperties()).build();
-    assertEquals(ImmutableMap.of(k2, v2), toMap(scanner));
+    assertEquals(Map.of(k2, v2), toMap(scanner));
     scanner.close();
 
     scanner = RFile.newScanner().from(testFile).withFileSystem(localFs).build();
-    assertEquals(ImmutableMap.of(k2, v2, k1, v1), toMap(scanner));
+    assertEquals(Map.of(k2, v2, k1, v1), toMap(scanner));
     scanner.close();
   }
 
@@ -459,7 +457,7 @@
     String testFile = createTmpTestFile();
 
     SamplerConfiguration sc = new SamplerConfiguration(RowSampler.class)
-        .setOptions(ImmutableMap.of("hasher", "murmur3_32", "modulus", "19"));
+        .setOptions(Map.of("hasher", "murmur3_32", "modulus", "19"));
 
     RFileWriter writer =
         RFile.newWriter().to(testFile).withFileSystem(localFs).withSampler(sc).build();
@@ -558,12 +556,11 @@
       CounterSummary counterSummary = new CounterSummary(summary);
       if (className.equals(FamilySummarizer.class.getName())) {
         Map<String,Long> counters = counterSummary.getCounters();
-        Map<String,Long> expected =
-            ImmutableMap.of("0000", 200L, "0001", 200L, "0002", 200L, "0003", 200L);
+        Map<String,Long> expected = Map.of("0000", 200L, "0001", 200L, "0002", 200L, "0003", 200L);
         assertEquals(expected, counters);
       } else if (className.equals(VisibilitySummarizer.class.getName())) {
         Map<String,Long> counters = counterSummary.getCounters();
-        Map<String,Long> expected = ImmutableMap.of("A&B", 400L, "A&B&C", 400L);
+        Map<String,Long> expected = Map.of("A&B", 400L, "A&B&C", 400L);
         assertEquals(expected, counters);
       } else {
         fail("Unexpected classname " + className);
@@ -593,12 +590,11 @@
       CounterSummary counterSummary = new CounterSummary(summary);
       if (className.equals(FamilySummarizer.class.getName())) {
         Map<String,Long> counters = counterSummary.getCounters();
-        Map<String,Long> expected =
-            ImmutableMap.of("0000", 400L, "0001", 400L, "0002", 400L, "0003", 400L);
+        Map<String,Long> expected = Map.of("0000", 400L, "0001", 400L, "0002", 400L, "0003", 400L);
         assertEquals(expected, counters);
       } else if (className.equals(VisibilitySummarizer.class.getName())) {
         Map<String,Long> counters = counterSummary.getCounters();
-        Map<String,Long> expected = ImmutableMap.of("A&B", 800L, "A&B&C", 800L);
+        Map<String,Long> expected = Map.of("A&B", 800L, "A&B&C", 800L);
         assertEquals(expected, counters);
       } else {
         fail("Unexpected classname " + className);
@@ -608,71 +604,71 @@
     // verify reading a subset of summaries works
     summaries = RFile.summaries().from(testFile, testFile2).withFileSystem(localFs)
         .selectSummaries(sc -> sc.equals(sc1)).read();
-    checkSummaries(summaries, ImmutableMap.of("A&B", 800L, "A&B&C", 800L), 0);
+    checkSummaries(summaries, Map.of("A&B", 800L, "A&B&C", 800L), 0);
 
     // the following test check boundary conditions for start row and end row
     summaries = RFile.summaries().from(testFile, testFile2).withFileSystem(localFs)
         .selectSummaries(sc -> sc.equals(sc1)).startRow(rowStr(99)).read();
-    checkSummaries(summaries, ImmutableMap.of("A&B", 400L, "A&B&C", 400L), 0);
+    checkSummaries(summaries, Map.of("A&B", 400L, "A&B&C", 400L), 0);
 
     summaries = RFile.summaries().from(testFile, testFile2).withFileSystem(localFs)
         .selectSummaries(sc -> sc.equals(sc1)).startRow(rowStr(98)).read();
-    checkSummaries(summaries, ImmutableMap.of("A&B", 800L, "A&B&C", 800L), 1);
+    checkSummaries(summaries, Map.of("A&B", 800L, "A&B&C", 800L), 1);
 
     summaries = RFile.summaries().from(testFile, testFile2).withFileSystem(localFs)
         .selectSummaries(sc -> sc.equals(sc1)).startRow(rowStr(0)).read();
-    checkSummaries(summaries, ImmutableMap.of("A&B", 800L, "A&B&C", 800L), 1);
+    checkSummaries(summaries, Map.of("A&B", 800L, "A&B&C", 800L), 1);
 
     summaries = RFile.summaries().from(testFile, testFile2).withFileSystem(localFs)
         .selectSummaries(sc -> sc.equals(sc1)).startRow("#").read();
-    checkSummaries(summaries, ImmutableMap.of("A&B", 800L, "A&B&C", 800L), 0);
+    checkSummaries(summaries, Map.of("A&B", 800L, "A&B&C", 800L), 0);
 
     summaries = RFile.summaries().from(testFile, testFile2).withFileSystem(localFs)
         .selectSummaries(sc -> sc.equals(sc1)).startRow(rowStr(100)).read();
-    checkSummaries(summaries, ImmutableMap.of("A&B", 400L, "A&B&C", 400L), 1);
+    checkSummaries(summaries, Map.of("A&B", 400L, "A&B&C", 400L), 1);
 
     summaries = RFile.summaries().from(testFile, testFile2).withFileSystem(localFs)
         .selectSummaries(sc -> sc.equals(sc1)).endRow(rowStr(99)).read();
-    checkSummaries(summaries, ImmutableMap.of("A&B", 400L, "A&B&C", 400L), 0);
+    checkSummaries(summaries, Map.of("A&B", 400L, "A&B&C", 400L), 0);
 
     summaries = RFile.summaries().from(testFile, testFile2).withFileSystem(localFs)
         .selectSummaries(sc -> sc.equals(sc1)).endRow(rowStr(100)).read();
-    checkSummaries(summaries, ImmutableMap.of("A&B", 800L, "A&B&C", 800L), 1);
+    checkSummaries(summaries, Map.of("A&B", 800L, "A&B&C", 800L), 1);
 
     summaries = RFile.summaries().from(testFile, testFile2).withFileSystem(localFs)
         .selectSummaries(sc -> sc.equals(sc1)).endRow(rowStr(199)).read();
-    checkSummaries(summaries, ImmutableMap.of("A&B", 800L, "A&B&C", 800L), 0);
+    checkSummaries(summaries, Map.of("A&B", 800L, "A&B&C", 800L), 0);
 
     summaries = RFile.summaries().from(testFile, testFile2).withFileSystem(localFs)
         .selectSummaries(sc -> sc.equals(sc1)).startRow(rowStr(50)).endRow(rowStr(150)).read();
-    checkSummaries(summaries, ImmutableMap.of("A&B", 800L, "A&B&C", 800L), 2);
+    checkSummaries(summaries, Map.of("A&B", 800L, "A&B&C", 800L), 2);
 
     summaries = RFile.summaries().from(testFile, testFile2).withFileSystem(localFs)
         .selectSummaries(sc -> sc.equals(sc1)).startRow(rowStr(120)).endRow(rowStr(150)).read();
-    checkSummaries(summaries, ImmutableMap.of("A&B", 400L, "A&B&C", 400L), 1);
+    checkSummaries(summaries, Map.of("A&B", 400L, "A&B&C", 400L), 1);
 
     summaries = RFile.summaries().from(testFile, testFile2).withFileSystem(localFs)
         .selectSummaries(sc -> sc.equals(sc1)).startRow(rowStr(50)).endRow(rowStr(199)).read();
-    checkSummaries(summaries, ImmutableMap.of("A&B", 800L, "A&B&C", 800L), 1);
+    checkSummaries(summaries, Map.of("A&B", 800L, "A&B&C", 800L), 1);
 
     summaries = RFile.summaries().from(testFile, testFile2).withFileSystem(localFs)
         .selectSummaries(sc -> sc.equals(sc1)).startRow("#").endRow(rowStr(150)).read();
-    checkSummaries(summaries, ImmutableMap.of("A&B", 800L, "A&B&C", 800L), 1);
+    checkSummaries(summaries, Map.of("A&B", 800L, "A&B&C", 800L), 1);
 
     summaries = RFile.summaries().from(testFile, testFile2).withFileSystem(localFs)
         .selectSummaries(sc -> sc.equals(sc1)).startRow(rowStr(199)).read();
-    checkSummaries(summaries, ImmutableMap.of(), 0);
+    checkSummaries(summaries, Map.of(), 0);
     summaries = RFile.summaries().from(testFile, testFile2).withFileSystem(localFs)
         .selectSummaries(sc -> sc.equals(sc1)).startRow(rowStr(200)).read();
-    checkSummaries(summaries, ImmutableMap.of(), 0);
+    checkSummaries(summaries, Map.of(), 0);
 
     summaries = RFile.summaries().from(testFile, testFile2).withFileSystem(localFs)
         .selectSummaries(sc -> sc.equals(sc1)).endRow("#").read();
-    checkSummaries(summaries, ImmutableMap.of(), 0);
+    checkSummaries(summaries, Map.of(), 0);
 
     summaries = RFile.summaries().from(testFile, testFile2).withFileSystem(localFs)
         .selectSummaries(sc -> sc.equals(sc1)).endRow(rowStr(0)).read();
-    checkSummaries(summaries, ImmutableMap.of("A&B", 400L, "A&B&C", 400L), 1);
+    checkSummaries(summaries, Map.of("A&B", 400L, "A&B&C", 400L), 1);
   }
 
   private void checkSummaries(Collection<Summary> summaries, Map<String,Long> expected, int extra) {
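Editor's note: the wholesale `ImmutableMap.of` to `Map.of` swap in this file is behavior-preserving for these assertions, but the two factories are not identical. A small sketch of the differences worth knowing when making this migration:

```java
import java.util.Map;

public class MapOfVsImmutableMap {
  public static void main(String[] args) {
    // Both factories produce unmodifiable maps and reject null keys/values.
    Map<String,Long> m = Map.of("A&B", 400L, "A&B&C", 400L);

    // Map.of additionally rejects duplicate keys at creation time...
    try {
      Map.of("k", 1L, "k", 2L);
    } catch (IllegalArgumentException expected) {
      // ...while equality semantics match Guava's ImmutableMap, so the
      // assertEquals comparisons above are unaffected by the swap.
    }

    // Caveat: unlike ImmutableMap, Map.of makes no iteration-order guarantee,
    // which only matters for order-sensitive assertions (none in this file).
    System.out.println(m.size()); // 2
  }
}
```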
diff --git a/core/src/test/java/org/apache/accumulo/core/client/security/tokens/CredentialProviderTokenTest.java b/core/src/test/java/org/apache/accumulo/core/client/security/tokens/CredentialProviderTokenTest.java
index 5423fb2..4b3b0fd 100644
--- a/core/src/test/java/org/apache/accumulo/core/client/security/tokens/CredentialProviderTokenTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/client/security/tokens/CredentialProviderTokenTest.java
@@ -20,14 +20,11 @@
 import static org.junit.Assert.assertArrayEquals;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertNotNull;
-import static org.junit.Assert.fail;
 
 import java.io.File;
-import java.io.IOException;
 import java.net.URL;
 
 import org.apache.accumulo.core.client.security.tokens.AuthenticationToken.Properties;
-import org.apache.accumulo.core.conf.CredentialProviderFactoryShim;
 import org.junit.BeforeClass;
 import org.junit.Test;
 
@@ -35,8 +32,6 @@
 
 public class CredentialProviderTokenTest {
 
-  private static boolean isCredentialProviderAvailable = false;
-
   // Keystore contains: {'root.password':'password', 'bob.password':'bob'}
   private static String keystorePath;
 
@@ -44,13 +39,6 @@
       justification = "keystoreUrl location isn't provided by user input")
   @BeforeClass
   public static void setup() {
-    try {
-      Class.forName(CredentialProviderFactoryShim.HADOOP_CRED_PROVIDER_CLASS_NAME);
-      isCredentialProviderAvailable = true;
-    } catch (Exception e) {
-      isCredentialProviderAvailable = false;
-    }
-
     URL keystoreUrl = CredentialProviderTokenTest.class.getResource("/passwords.jceks");
     assertNotNull(keystoreUrl);
     keystorePath = "jceks://file/" + new File(keystoreUrl.getFile()).getAbsolutePath();
@@ -58,10 +46,6 @@
 
   @Test
   public void testPasswordsFromCredentialProvider() throws Exception {
-    if (!isCredentialProviderAvailable) {
-      return;
-    }
-
     CredentialProviderToken token = new CredentialProviderToken("root.password", keystorePath);
     assertEquals("root.password", token.getName());
     assertEquals(keystorePath, token.getCredentialProviders());
@@ -73,10 +57,6 @@
 
   @Test
   public void testEqualityAfterInit() throws Exception {
-    if (!isCredentialProviderAvailable) {
-      return;
-    }
-
     CredentialProviderToken token = new CredentialProviderToken("root.password", keystorePath);
 
     CredentialProviderToken uninitializedToken = new CredentialProviderToken();
@@ -89,25 +69,7 @@
   }
 
   @Test
-  public void testMissingClassesThrowsException() {
-    if (isCredentialProviderAvailable) {
-      return;
-    }
-
-    try {
-      new CredentialProviderToken("root.password", keystorePath);
-      fail("Should fail to create CredentialProviderToken when classes are not available");
-    } catch (IOException e) {
-      // pass
-    }
-  }
-
-  @Test
   public void cloneReturnsCorrectObject() throws Exception {
-    if (!isCredentialProviderAvailable) {
-      return;
-    }
-
     CredentialProviderToken token = new CredentialProviderToken("root.password", keystorePath);
     CredentialProviderToken clone = token.clone();
 
diff --git a/core/src/test/java/org/apache/accumulo/core/clientImpl/ClientContextTest.java b/core/src/test/java/org/apache/accumulo/core/clientImpl/ClientContextTest.java
index d76fbe6..fc33613 100644
--- a/core/src/test/java/org/apache/accumulo/core/clientImpl/ClientContextTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/clientImpl/ClientContextTest.java
@@ -27,7 +27,6 @@
 import java.util.Properties;
 
 import org.apache.accumulo.core.conf.AccumuloConfiguration;
-import org.apache.accumulo.core.conf.CredentialProviderFactoryShim;
 import org.apache.accumulo.core.conf.Property;
 import org.junit.BeforeClass;
 import org.junit.Test;
@@ -36,7 +35,6 @@
 
 public class ClientContextTest {
 
-  private static boolean isCredentialProviderAvailable = false;
   private static final String keystoreName = "/site-cfg.jceks";
 
   // site-cfg.jceks={'ignored.property'=>'ignored', 'instance.secret'=>'mysecret',
@@ -47,20 +45,9 @@
       justification = "provided keystoreUrl path isn't user provided")
   @BeforeClass
   public static void setUpBeforeClass() {
-    try {
-      Class.forName(CredentialProviderFactoryShim.HADOOP_CRED_PROVIDER_CLASS_NAME);
-      isCredentialProviderAvailable = true;
-    } catch (Exception e) {
-      isCredentialProviderAvailable = false;
-    }
-
-    if (isCredentialProviderAvailable) {
-      URL keystoreUrl = ClientContextTest.class.getResource(keystoreName);
-
-      assertNotNull("Could not find " + keystoreName, keystoreUrl);
-
-      keystore = new File(keystoreUrl.getFile());
-    }
+    URL keystoreUrl = ClientContextTest.class.getResource(keystoreName);
+    assertNotNull("Could not find " + keystoreName, keystoreUrl);
+    keystore = new File(keystoreUrl.getFile());
   }
 
   protected String getKeyStoreUrl(File absoluteFilePath) {
@@ -69,10 +56,6 @@
 
   @Test
   public void loadSensitivePropertyFromCredentialProvider() {
-    if (!isCredentialProviderAvailable) {
-      return;
-    }
-
     String absPath = getKeyStoreUrl(keystore);
     Properties props = new Properties();
     props.setProperty(Property.GENERAL_SECURITY_CREDENTIAL_PROVIDER_PATHS.getKey(), absPath);
@@ -82,9 +65,6 @@
 
   @Test
   public void defaultValueForSensitiveProperty() {
-    if (!isCredentialProviderAvailable) {
-      return;
-    }
     Properties props = new Properties();
     AccumuloConfiguration accClientConf = ClientConfConverter.toAccumuloConf(props);
     assertEquals(Property.INSTANCE_SECRET.getDefaultValue(),
@@ -93,10 +73,6 @@
 
   @Test
   public void sensitivePropertiesIncludedInProperties() {
-    if (!isCredentialProviderAvailable) {
-      return;
-    }
-
     String absPath = getKeyStoreUrl(keystore);
     Properties clientProps = new Properties();
     clientProps.setProperty(Property.GENERAL_SECURITY_CREDENTIAL_PROVIDER_PATHS.getKey(), absPath);
diff --git a/core/src/test/java/org/apache/accumulo/core/conf/AccumuloConfigurationTest.java b/core/src/test/java/org/apache/accumulo/core/conf/AccumuloConfigurationTest.java
index e71ebea..c5de2f9 100644
--- a/core/src/test/java/org/apache/accumulo/core/conf/AccumuloConfigurationTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/conf/AccumuloConfigurationTest.java
@@ -34,8 +34,6 @@
 import org.junit.Test;
 import org.junit.rules.ExpectedException;
 
-import com.google.common.collect.ImmutableMap;
-
 public class AccumuloConfigurationTest {
 
   @Rule
@@ -271,8 +269,8 @@
     Map<String,String> pmF = tc.getAllPropertiesWithPrefix(Property.VFS_CONTEXT_CLASSPATH_PROPERTY);
     assertSame(pmE, pmF);
     assertNotSame(pm5, pmE);
-    assertEquals(ImmutableMap.of(Property.VFS_CONTEXT_CLASSPATH_PROPERTY.getKey() + "ctx123",
-        "hdfs://ib/p1"), pmE);
+    assertEquals(
+        Map.of(Property.VFS_CONTEXT_CLASSPATH_PROPERTY.getKey() + "ctx123", "hdfs://ib/p1"), pmE);
 
     Map<String,String> pmG = tc.getAllPropertiesWithPrefix(Property.TABLE_ITERATOR_SCAN_PREFIX);
     Map<String,String> pmH = tc.getAllPropertiesWithPrefix(Property.TABLE_ITERATOR_SCAN_PREFIX);
@@ -363,7 +361,7 @@
     assertEquals(66, sec7.maxThreads);
     assertEquals(3, sec7.priority.getAsInt());
     assertEquals("com.foo.ScanPrioritizer", sec7.prioritizerClass.get());
-    assertEquals(ImmutableMap.of("k1", "v1", "k2", "v3"), sec7.prioritizerOpts);
+    assertEquals(Map.of("k1", "v1", "k2", "v3"), sec7.prioritizerOpts);
 
     tc.set(prefix + "hulksmash.threads", "44");
     assertEquals(66, sec7.maxThreads);
diff --git a/core/src/test/java/org/apache/accumulo/core/conf/CredentialProviderFactoryShimTest.java b/core/src/test/java/org/apache/accumulo/core/conf/HadoopCredentialProviderTest.java
similarity index 70%
rename from core/src/test/java/org/apache/accumulo/core/conf/CredentialProviderFactoryShimTest.java
rename to core/src/test/java/org/apache/accumulo/core/conf/HadoopCredentialProviderTest.java
index b230f3e..c2c9890 100644
--- a/core/src/test/java/org/apache/accumulo/core/conf/CredentialProviderFactoryShimTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/conf/HadoopCredentialProviderTest.java
@@ -20,7 +20,6 @@
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertNotNull;
 import static org.junit.Assert.assertNull;
-import static org.junit.Assert.assertSame;
 
 import java.io.File;
 import java.net.URL;
@@ -34,7 +33,6 @@
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.hdfs.MiniDFSCluster;
-import org.junit.Assume;
 import org.junit.BeforeClass;
 import org.junit.Test;
 import org.slf4j.Logger;
@@ -43,11 +41,10 @@
 import edu.umd.cs.findbugs.annotations.SuppressFBWarnings;
 
 @SuppressFBWarnings(value = "PATH_TRAVERSAL_IN", justification = "paths not set by user input")
-public class CredentialProviderFactoryShimTest {
+public class HadoopCredentialProviderTest {
 
   private static final Configuration hadoopConf = new Configuration();
-  private static final Logger log =
-      LoggerFactory.getLogger(CredentialProviderFactoryShimTest.class);
+  private static final Logger log = LoggerFactory.getLogger(HadoopCredentialProviderTest.class);
 
   private static final String populatedKeyStoreName = "/accumulo.jceks",
       emptyKeyStoreName = "/empty.jceks";
@@ -55,16 +52,9 @@
 
   @BeforeClass
   public static void checkCredentialProviderAvailable() {
-    try {
-      Class.forName(CredentialProviderFactoryShim.HADOOP_CRED_PROVIDER_CLASS_NAME);
-    } catch (Exception e) {
-      // If we can't load the credential provider class, don't run the tests
-      Assume.assumeNoException(e);
-    }
-
     URL populatedKeyStoreUrl =
-        CredentialProviderFactoryShimTest.class.getResource(populatedKeyStoreName),
-        emptyKeyStoreUrl = CredentialProviderFactoryShimTest.class.getResource(emptyKeyStoreName);
+        HadoopCredentialProviderTest.class.getResource(populatedKeyStoreName),
+        emptyKeyStoreUrl = HadoopCredentialProviderTest.class.getResource(emptyKeyStoreName);
 
     assertNotNull("Could not find " + populatedKeyStoreName, populatedKeyStoreUrl);
     assertNotNull("Could not find " + emptyKeyStoreName, emptyKeyStoreUrl);
@@ -79,22 +69,21 @@
 
   @Test(expected = NullPointerException.class)
   public void testNullConfigOnGetValue() {
-    CredentialProviderFactoryShim.getValueFromCredentialProvider(null, "alias");
+    HadoopCredentialProvider.getValue(null, "alias");
   }
 
   @Test(expected = NullPointerException.class)
   public void testNullAliasOnGetValue() {
-    CredentialProviderFactoryShim.getValueFromCredentialProvider(new Configuration(false), null);
+    HadoopCredentialProvider.getValue(new Configuration(false), null);
   }
 
   protected void checkCredentialProviders(Configuration conf, Map<String,String> expectation) {
-    List<String> keys = CredentialProviderFactoryShim.getKeys(conf);
+    List<String> keys = HadoopCredentialProvider.getKeys(conf);
     assertNotNull(keys);
 
     assertEquals(expectation.keySet(), new HashSet<>(keys));
     for (String expectedKey : keys) {
-      char[] value =
-          CredentialProviderFactoryShim.getValueFromCredentialProvider(conf, expectedKey);
+      char[] value = HadoopCredentialProvider.getValue(conf, expectedKey);
       assertNotNull(value);
       assertEquals(expectation.get(expectedKey), new String(value));
     }
@@ -104,7 +93,7 @@
   public void testExtractFromProvider() {
     String absPath = getKeyStoreUrl(populatedKeyStore);
     Configuration conf = new Configuration();
-    conf.set(CredentialProviderFactoryShim.CREDENTIAL_PROVIDER_PATH, absPath);
+    HadoopCredentialProvider.setPath(conf, absPath);
     Map<String,String> expectations = new HashMap<>();
     expectations.put("key1", "value1");
     expectations.put("key2", "value2");
@@ -116,7 +105,7 @@
   public void testEmptyKeyStoreParses() {
     String absPath = getKeyStoreUrl(emptyKeyStore);
     Configuration conf = new Configuration();
-    conf.set(CredentialProviderFactoryShim.CREDENTIAL_PROVIDER_PATH, absPath);
+    HadoopCredentialProvider.setPath(conf, absPath);
 
     checkCredentialProviders(conf, new HashMap<>());
   }
@@ -126,8 +115,7 @@
     String populatedAbsPath = getKeyStoreUrl(populatedKeyStore),
         emptyAbsPath = getKeyStoreUrl(emptyKeyStore);
     Configuration conf = new Configuration();
-    conf.set(CredentialProviderFactoryShim.CREDENTIAL_PROVIDER_PATH,
-        populatedAbsPath + "," + emptyAbsPath);
+    HadoopCredentialProvider.setPath(conf, populatedAbsPath + "," + emptyAbsPath);
     Map<String,String> expectations = new HashMap<>();
     expectations.put("key1", "value1");
     expectations.put("key2", "value2");
@@ -138,21 +126,21 @@
   @Test
   public void testNonExistentClassesDoesntFail() {
     Configuration conf = new Configuration();
-    conf.set(CredentialProviderFactoryShim.CREDENTIAL_PROVIDER_PATH, "jceks://file/foo/bar.jceks");
-    List<String> keys = CredentialProviderFactoryShim.getKeys(conf);
+    HadoopCredentialProvider.setPath(conf, "jceks://file/foo/bar.jceks");
+    List<String> keys = HadoopCredentialProvider.getKeys(conf);
     assertNotNull(keys);
     assertEquals(Collections.emptyList(), keys);
 
-    assertNull(CredentialProviderFactoryShim.getValueFromCredentialProvider(conf, "key1"));
+    assertNull(HadoopCredentialProvider.getValue(conf, "key1"));
   }
 
   @Test
   public void testConfigurationCreation() {
     final String path = "jceks://file/tmp/foo.jks";
-    final Configuration actualConf =
-        CredentialProviderFactoryShim.getConfiguration(hadoopConf, path);
+    final Configuration actualConf = hadoopConf;
+    HadoopCredentialProvider.setPath(actualConf, path);
     assertNotNull(actualConf);
-    assertEquals(path, actualConf.get(CredentialProviderFactoryShim.CREDENTIAL_PROVIDER_PATH));
+    assertEquals(path, actualConf.get("hadoop.security.credential.provider.path"));
   }
 
   @Test
@@ -167,14 +155,13 @@
 
     String providerUrl = "jceks://file" + keystoreFile.getAbsolutePath();
     Configuration conf = new Configuration();
-    conf.set(CredentialProviderFactoryShim.CREDENTIAL_PROVIDER_PATH, providerUrl);
+    HadoopCredentialProvider.setPath(conf, providerUrl);
 
     String alias = "foo";
     char[] credential = "bar".toCharArray();
-    CredentialProviderFactoryShim.createEntry(conf, alias, credential);
+    HadoopCredentialProvider.createEntry(conf, alias, credential);
 
-    assertArrayEquals(credential,
-        CredentialProviderFactoryShim.getValueFromCredentialProvider(conf, alias));
+    assertArrayEquals(credential, HadoopCredentialProvider.getValue(conf, alias));
   }
 
   @Test
@@ -197,8 +184,8 @@
       // Put the populated keystore in hdfs
       dfs.copyFromLocalFile(new Path(populatedKeyStore.toURI()), destPath);
 
-      Configuration cpConf = CredentialProviderFactoryShim.getConfiguration(dfsConfiguration,
-          "jceks://hdfs/accumulo.jceks");
+      Configuration cpConf = dfsConfiguration;
+      HadoopCredentialProvider.setPath(cpConf, "jceks://hdfs/accumulo.jceks");
 
       // The values in the keystore
       Map<String,String> expectations = new HashMap<>();
@@ -211,14 +198,4 @@
     }
   }
 
-  @Test
-  public void existingConfigurationReturned() {
-    Configuration conf = new Configuration(false);
-    conf.set("foo", "bar");
-    Configuration conf2 =
-        CredentialProviderFactoryShim.getConfiguration(conf, "jceks:///file/accumulo.jceks");
-    // Same object
-    assertSame(conf, conf2);
-    assertEquals("bar", conf.get("foo"));
-  }
 }
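Editor's note: the renamed helper wraps Hadoop's credential provider facility. As the assertion in testConfigurationCreation confirms, `setPath` sets `hadoop.security.credential.provider.path`, and values can also be resolved through stock Hadoop APIs. A minimal sketch using only plain Hadoop calls (the keystore path is illustrative):

```java
import org.apache.hadoop.conf.Configuration;

public class CredentialProviderDemo {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration(false);
    // Equivalent of HadoopCredentialProvider.setPath(conf, ...): point Hadoop
    // at one or more keystores (comma-separated), e.g. a local JCEKS file.
    conf.set("hadoop.security.credential.provider.path",
        "jceks://file/tmp/accumulo.jceks");

    // Configuration.getPassword consults the configured providers first and
    // falls back to the config property itself; returns null if absent.
    char[] secret = conf.getPassword("key1");
    System.out.println(secret == null ? "not found" : new String(secret));
  }
}
```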
diff --git a/core/src/test/java/org/apache/accumulo/core/conf/ObservableConfigurationTest.java b/core/src/test/java/org/apache/accumulo/core/conf/ObservableConfigurationTest.java
deleted file mode 100644
index e6c7169..0000000
--- a/core/src/test/java/org/apache/accumulo/core/conf/ObservableConfigurationTest.java
+++ /dev/null
@@ -1,99 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.core.conf;
-
-import static org.easymock.EasyMock.createMock;
-import static org.easymock.EasyMock.replay;
-import static org.easymock.EasyMock.verify;
-import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertTrue;
-
-import java.util.Collection;
-import java.util.Map;
-import java.util.function.Predicate;
-
-import org.junit.Before;
-import org.junit.Test;
-
-public class ObservableConfigurationTest {
-  private static class TestObservableConfig extends ObservableConfiguration {
-    @Override
-    public String get(Property property) {
-      return null;
-    }
-
-    @Override
-    public void getProperties(Map<String,String> props, Predicate<String> filter) {}
-  }
-
-  private ObservableConfiguration c;
-  private ConfigurationObserver co1;
-
-  @Before
-  public void setUp() {
-    c = new TestObservableConfig();
-    co1 = createMock(ConfigurationObserver.class);
-  }
-
-  @Test
-  public void testAddAndRemove() {
-    ConfigurationObserver co2 = createMock(ConfigurationObserver.class);
-    c.addObserver(co1);
-    c.addObserver(co2);
-    Collection<ConfigurationObserver> cos = c.getObservers();
-    assertEquals(2, cos.size());
-    assertTrue(cos.contains(co1));
-    assertTrue(cos.contains(co2));
-    c.removeObserver(co1);
-    cos = c.getObservers();
-    assertEquals(1, cos.size());
-    assertTrue(cos.contains(co2));
-  }
-
-  @Test(expected = NullPointerException.class)
-  public void testNoNullAdd() {
-    c.addObserver(null);
-  }
-
-  @Test
-  public void testSessionExpired() {
-    c.addObserver(co1);
-    co1.sessionExpired();
-    replay(co1);
-    c.expireAllObservers();
-    verify(co1);
-  }
-
-  @Test
-  public void testPropertyChanged() {
-    String key = "key";
-    c.addObserver(co1);
-    co1.propertyChanged(key);
-    replay(co1);
-    c.propertyChanged(key);
-    verify(co1);
-  }
-
-  @Test
-  public void testPropertiesChanged() {
-    c.addObserver(co1);
-    co1.propertiesChanged();
-    replay(co1);
-    c.propertiesChanged();
-    verify(co1);
-  }
-}
diff --git a/core/src/test/java/org/apache/accumulo/core/conf/SiteConfigurationTest.java b/core/src/test/java/org/apache/accumulo/core/conf/SiteConfigurationTest.java
index ada7c55..8efac67 100644
--- a/core/src/test/java/org/apache/accumulo/core/conf/SiteConfigurationTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/conf/SiteConfigurationTest.java
@@ -25,43 +25,26 @@
 import java.util.HashMap;
 import java.util.Map;
 
-import org.junit.BeforeClass;
 import org.junit.Test;
 
-import com.google.common.collect.ImmutableMap;
-
 import edu.umd.cs.findbugs.annotations.SuppressFBWarnings;
 
 public class SiteConfigurationTest {
-  private static boolean isCredentialProviderAvailable;
-
-  @BeforeClass
-  public static void checkCredentialProviderAvailable() {
-    try {
-      Class.forName(CredentialProviderFactoryShim.HADOOP_CRED_PROVIDER_CLASS_NAME);
-      isCredentialProviderAvailable = true;
-    } catch (Exception e) {
-      isCredentialProviderAvailable = false;
-    }
-  }
 
   @SuppressFBWarnings(value = "PATH_TRAVERSAL_IN",
       justification = "path to keystore not provided by user input")
   @Test
   public void testOnlySensitivePropertiesExtractedFromCredentialProvider()
       throws SecurityException {
-    if (!isCredentialProviderAvailable) {
-      return;
-    }
-
     // site-cfg.jceks={'ignored.property'=>'ignored', 'instance.secret'=>'mysecret',
     // 'general.rpc.timeout'=>'timeout'}
     URL keystore = SiteConfigurationTest.class.getResource("/site-cfg.jceks");
     assertNotNull(keystore);
     String credProvPath = "jceks://file" + new File(keystore.getFile()).getAbsolutePath();
 
-    SiteConfiguration config = new SiteConfiguration(ImmutableMap
-        .of(Property.GENERAL_SECURITY_CREDENTIAL_PROVIDER_PATHS.getKey(), credProvPath));
+    var overrides =
+        Map.of(Property.GENERAL_SECURITY_CREDENTIAL_PROVIDER_PATHS.getKey(), credProvPath);
+    var config = new SiteConfiguration.Builder().noFile().withOverrides(overrides).build();
 
     assertEquals("mysecret", config.get(Property.INSTANCE_SECRET));
     assertNull(config.get("ignored.property"));
@@ -71,7 +54,7 @@
 
   @Test
   public void testDefault() {
-    SiteConfiguration conf = new SiteConfiguration();
+    var conf = SiteConfiguration.auto();
     assertEquals("localhost:2181", conf.get(Property.INSTANCE_ZK_HOST));
     assertEquals("DEFAULT", conf.get(Property.INSTANCE_SECRET));
     assertEquals("", conf.get(Property.INSTANCE_VOLUMES));
@@ -84,7 +67,7 @@
   @Test
   public void testFile() {
     URL propsUrl = getClass().getClassLoader().getResource("accumulo2.properties");
-    SiteConfiguration conf = new SiteConfiguration(propsUrl);
+    var conf = new SiteConfiguration.Builder().fromUrl(propsUrl).build();
     assertEquals("myhost123:2181", conf.get(Property.INSTANCE_ZK_HOST));
     assertEquals("mysecret", conf.get(Property.INSTANCE_SECRET));
     assertEquals("hdfs://localhost:8020/accumulo123", conf.get(Property.INSTANCE_VOLUMES));
@@ -96,14 +79,14 @@
 
   @Test
   public void testConfigOverrides() {
-    SiteConfiguration conf = new SiteConfiguration();
+    var conf = SiteConfiguration.auto();
     assertEquals("localhost:2181", conf.get(Property.INSTANCE_ZK_HOST));
 
-    conf = new SiteConfiguration((URL) null,
-        ImmutableMap.of(Property.INSTANCE_ZK_HOST.getKey(), "myhost:2181"));
+    conf = new SiteConfiguration.Builder().noFile()
+        .withOverrides(Map.of(Property.INSTANCE_ZK_HOST.getKey(), "myhost:2181")).build();
     assertEquals("myhost:2181", conf.get(Property.INSTANCE_ZK_HOST));
 
-    Map<String,String> results = new HashMap<>();
+    var results = new HashMap<String,String>();
     conf.getProperties(results, p -> p.startsWith("instance"));
     assertEquals("myhost:2181", results.get(Property.INSTANCE_ZK_HOST.getKey()));
   }
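Editor's note: the constructor-based API on the old side of these hunks was replaced by a small builder. Going by the calls exercised in this test, the new entry points look like the following; this is a sketch distilled from the test code above, not full API documentation.

```java
// Distilled from SiteConfigurationTest above; assumes the accumulo-core
// test classpath, and the resource name is the one the test uses.
import java.net.URL;
import java.util.Map;

import org.apache.accumulo.core.conf.Property;
import org.apache.accumulo.core.conf.SiteConfiguration;

public class SiteConfigDemo {
  public static void main(String[] args) {
    // Default resolution (replaces `new SiteConfiguration()`).
    SiteConfiguration auto = SiteConfiguration.auto();

    // No properties file, overrides only (replaces the (URL, Map) constructor).
    SiteConfiguration overridden = new SiteConfiguration.Builder().noFile()
        .withOverrides(Map.of(Property.INSTANCE_ZK_HOST.getKey(), "myhost:2181")).build();

    // Explicit properties file (replaces `new SiteConfiguration(url)`).
    URL propsUrl = SiteConfigDemo.class.getClassLoader().getResource("accumulo2.properties");
    SiteConfiguration fromFile = new SiteConfiguration.Builder().fromUrl(propsUrl).build();

    System.out.println(overridden.get(Property.INSTANCE_ZK_HOST)); // myhost:2181
  }
}
```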
diff --git a/core/src/test/java/org/apache/accumulo/core/crypto/CryptoTest.java b/core/src/test/java/org/apache/accumulo/core/crypto/CryptoTest.java
index 376fd86..91dbe98 100644
--- a/core/src/test/java/org/apache/accumulo/core/crypto/CryptoTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/crypto/CryptoTest.java
@@ -265,8 +265,7 @@
   }
 
   @Test
-  public void testMissingConfigProperties()
-      throws ClassNotFoundException, InstantiationException, IllegalAccessException {
+  public void testMissingConfigProperties() throws ReflectiveOperationException {
     ConfigurationCopy aconf = new ConfigurationCopy(DefaultConfiguration.getInstance());
     Configuration conf = new Configuration(false);
     for (Map.Entry<String,String> e : conf) {
@@ -277,7 +276,7 @@
     String configuredClass = aconf.get(Property.INSTANCE_CRYPTO_SERVICE.getKey());
     Class<? extends CryptoService> clazz =
         AccumuloVFSClassLoader.loadClass(configuredClass, CryptoService.class);
-    CryptoService cs = clazz.newInstance();
+    CryptoService cs = clazz.getDeclaredConstructor().newInstance();
 
     exception.expect(NullPointerException.class);
     cs.init(aconf.getAllPropertiesWithPrefix(Property.TABLE_PREFIX));
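Editor's note: `Class.newInstance()` has been deprecated since Java 9 because it rethrows any checked exception from the constructor without wrapping it and cannot pass arguments. The replacement used throughout this commit funnels every failure through `ReflectiveOperationException`, which is why the test signatures above could be collapsed to a single throws clause. A minimal sketch of the pattern:

```java
public class ReflectiveInstantiationDemo {
  public static class Widget {
    public Widget() {}
  }

  public static void main(String[] args) throws ReflectiveOperationException {
    Class<Widget> clazz = Widget.class;

    // Deprecated since Java 9: Widget w = clazz.newInstance();

    // Preferred: any failure (no such constructor, inaccessible, constructor
    // threw) surfaces as a subclass of ReflectiveOperationException, so one
    // catch or throws clause covers them all.
    Widget w = clazz.getDeclaredConstructor().newInstance();
    System.out.println(w != null);
  }
}
```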
diff --git a/core/src/test/java/org/apache/accumulo/core/iterators/user/CombinerTest.java b/core/src/test/java/org/apache/accumulo/core/iterators/user/CombinerTest.java
index 7fc6bee..7e61d62 100644
--- a/core/src/test/java/org/apache/accumulo/core/iterators/user/CombinerTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/iterators/user/CombinerTest.java
@@ -694,9 +694,8 @@
   }
 
   public static void sumArray(Class<? extends Encoder<List<Long>>> encoderClass,
-      SummingArrayCombiner.Type type)
-      throws IOException, InstantiationException, IllegalAccessException {
-    Encoder<List<Long>> encoder = encoderClass.newInstance();
+      SummingArrayCombiner.Type type) throws IOException, ReflectiveOperationException {
+    Encoder<List<Long>> encoder = encoderClass.getDeclaredConstructor().newInstance();
 
     TreeMap<Key,Value> tm1 = new TreeMap<>();
 
@@ -789,7 +788,7 @@
   }
 
   @Test
-  public void sumArrayTest() throws IOException, InstantiationException, IllegalAccessException {
+  public void sumArrayTest() throws IOException, ReflectiveOperationException {
     sumArray(SummingArrayCombiner.VarLongArrayEncoder.class, SummingArrayCombiner.Type.VARLEN);
     sumArray(SummingArrayCombiner.FixedLongArrayEncoder.class, SummingArrayCombiner.Type.FIXEDLEN);
     sumArray(SummingArrayCombiner.StringArrayEncoder.class, SummingArrayCombiner.Type.STRING);
diff --git a/core/src/test/java/org/apache/accumulo/core/iterators/user/RowFilterTest.java b/core/src/test/java/org/apache/accumulo/core/iterators/user/RowFilterTest.java
index 105f7c4..f08bdbb 100644
--- a/core/src/test/java/org/apache/accumulo/core/iterators/user/RowFilterTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/iterators/user/RowFilterTest.java
@@ -42,8 +42,6 @@
 import org.apache.hadoop.io.Text;
 import org.junit.Test;
 
-import com.google.common.collect.ImmutableSet;
-
 public class RowFilterTest {
 
   public static class SummingRowFilter extends RowFilter {
@@ -192,7 +190,7 @@
 
     ByteSequence cf = new ArrayByteSequence("cf2");
 
-    filter.seek(new Range(), ImmutableSet.of(cf), true);
+    filter.seek(new Range(), Set.of(cf), true);
     assertEquals(new HashSet<>(Arrays.asList("1", "3", "0", "4")), getRows(filter));
 
     filter.seek(new Range("0", "4"), Collections.emptySet(), false);
@@ -204,7 +202,7 @@
     filter.seek(new Range("4"), Collections.emptySet(), false);
     assertEquals(new HashSet<String>(), getRows(filter));
 
-    filter.seek(new Range("4"), ImmutableSet.of(cf), true);
+    filter.seek(new Range("4"), Set.of(cf), true);
     assertEquals(new HashSet<>(Arrays.asList("4")), getRows(filter));
 
   }
diff --git a/core/src/test/java/org/apache/accumulo/core/iterators/user/TestCfCqSlice.java b/core/src/test/java/org/apache/accumulo/core/iterators/user/TestCfCqSlice.java
index 0f8360c..ea46580 100644
--- a/core/src/test/java/org/apache/accumulo/core/iterators/user/TestCfCqSlice.java
+++ b/core/src/test/java/org/apache/accumulo/core/iterators/user/TestCfCqSlice.java
@@ -288,7 +288,8 @@
     firstOpts.put(CfCqSliceOpts.OPT_MAX_CF, new String(LONG_LEX.encode(sliceMaxCf), UTF_8));
     secondOpts.put(CfCqSliceOpts.OPT_MIN_CQ, new String(LONG_LEX.encode(sliceMinCq), UTF_8));
     secondOpts.put(CfCqSliceOpts.OPT_MAX_CQ, new String(LONG_LEX.encode(sliceMaxCq), UTF_8));
-    SortedKeyValueIterator<Key,Value> skvi = getFilterClass().newInstance();
+    SortedKeyValueIterator<Key,Value> skvi =
+        getFilterClass().getDeclaredConstructor().newInstance();
     skvi.init(new SortedMapIterator(data), firstOpts, null);
     loadKvs(skvi.deepCopy(null), foundKvs, secondOpts, INFINITY);
     for (int i = 0; i < LR_DIM; i++) {
@@ -373,7 +374,8 @@
   private void loadKvs(SortedKeyValueIterator<Key,Value> parent, boolean[][][] foundKvs,
       Map<String,String> options, Range range) {
     try {
-      SortedKeyValueIterator<Key,Value> skvi = getFilterClass().newInstance();
+      SortedKeyValueIterator<Key,Value> skvi =
+          getFilterClass().getDeclaredConstructor().newInstance();
       skvi.init(parent, options, null);
       skvi.seek(range, EMPTY_CF_SET, false);
 
diff --git a/core/src/test/java/org/apache/accumulo/core/iterators/user/TransformingIteratorTest.java b/core/src/test/java/org/apache/accumulo/core/iterators/user/TransformingIteratorTest.java
index 6954fd7..3b5b951 100644
--- a/core/src/test/java/org/apache/accumulo/core/iterators/user/TransformingIteratorTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/iterators/user/TransformingIteratorTest.java
@@ -55,13 +55,11 @@
 import org.junit.Before;
 import org.junit.Test;
 
-import com.google.common.collect.ImmutableMap;
-
 public class TransformingIteratorTest {
 
   private static Authorizations authorizations =
       new Authorizations("vis0", "vis1", "vis2", "vis3", "vis4");
-  private static final Map<String,String> EMPTY_OPTS = ImmutableMap.of();
+  private static final Map<String,String> EMPTY_OPTS = Map.of();
   private TransformingIterator titer;
 
   private TreeMap<Key,Value> data = new TreeMap<>();
@@ -88,8 +86,8 @@
     ReuseIterator reuserIter = new ReuseIterator();
     reuserIter.init(visFilter, EMPTY_OPTS, null);
     try {
-      titer = clazz.newInstance();
-    } catch (InstantiationException | IllegalAccessException e) {
+      titer = clazz.getDeclaredConstructor().newInstance();
+    } catch (ReflectiveOperationException e) {
       throw new RuntimeException(e);
     }
 
@@ -104,7 +102,7 @@
           new Authorizations("vis0", "vis1", "vis2", "vis3"));
       opts = cfg.getOptions();
     } else {
-      opts = ImmutableMap.of();
+      opts = Map.of();
     }
     titer.init(reuserIter, opts, iterEnv);
   }
@@ -143,7 +141,7 @@
       setUpTransformIterator(clazz);
 
       // All rows with visibilities reversed
-      TransformingIterator iter = clazz.newInstance();
+      TransformingIterator iter = clazz.getDeclaredConstructor().newInstance();
       TreeMap<Key,Value> expected = new TreeMap<>();
       for (int row = 1; row <= 3; ++row) {
         for (int cf = 1; cf <= 3; ++cf) {
@@ -229,9 +227,11 @@
     // be inside the range.
 
     TreeMap<Key,Value> expected = new TreeMap<>();
-    for (int cq = 1; cq <= 3; ++cq)
-      for (int cv = 1; cv <= 3; ++cv)
+    for (int cq = 1; cq <= 3; ++cq) {
+      for (int cv = 1; cv <= 3; ++cv) {
         putExpected(expected, 1, 3, cq, cv, PartialKey.ROW);
+      }
+    }
     checkExpected(new Range(new Key("row1", "cf0"), true, new Key("row1", "cf1"), false), expected);
   }
 
@@ -276,10 +276,13 @@
     setUpTransformIterator(ColFamReversingKeyTransformingIterator.class);
 
     TreeMap<Key,Value> expected = new TreeMap<>();
-    for (int row = 1; row <= 3; ++row)
-      for (int cq = 1; cq <= 3; ++cq)
-        for (int cv = 1; cv <= 3; ++cv)
+    for (int row = 1; row <= 3; ++row) {
+      for (int cq = 1; cq <= 3; ++cq) {
+        for (int cv = 1; cv <= 3; ++cv) {
           putExpected(expected, row, expectedCF, cq, cv, PartialKey.ROW);
+        }
+      }
+    }
     checkExpected(expected, "cf2");
   }
 
@@ -328,10 +331,13 @@
     setUpTransformIterator(ColFamReversingCompactionKeyTransformingIterator.class);
 
     TreeMap<Key,Value> expected = new TreeMap<>();
-    for (int row = 1; row <= 3; ++row)
-      for (int cq = 1; cq <= 3; ++cq)
-        for (int cv = 1; cv <= 3; ++cv)
+    for (int row = 1; row <= 3; ++row) {
+      for (int cq = 1; cq <= 3; ++cq) {
+        for (int cv = 1; cv <= 3; ++cv) {
           putExpected(expected, row, expectedCF, cq, cv, PartialKey.ROW);
+        }
+      }
+    }
     checkExpected(expected, "cf2");
   }
 
@@ -604,8 +610,9 @@
     protected Collection<ByteSequence>
         untransformColumnFamilies(Collection<ByteSequence> columnFamilies) {
       HashSet<ByteSequence> untransformed = new HashSet<>();
-      for (ByteSequence cf : columnFamilies)
+      for (ByteSequence cf : columnFamilies) {
         untransformed.add(untransformColumnFamily(cf));
+      }
       return untransformed;
     }
 
diff --git a/core/src/test/java/org/apache/accumulo/core/iterators/user/WholeRowIteratorTest.java b/core/src/test/java/org/apache/accumulo/core/iterators/user/WholeRowIteratorTest.java
index 6485b75..7f3e421 100644
--- a/core/src/test/java/org/apache/accumulo/core/iterators/user/WholeRowIteratorTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/iterators/user/WholeRowIteratorTest.java
@@ -38,15 +38,13 @@
 import org.apache.hadoop.io.Text;
 import org.junit.Test;
 
-import com.google.common.collect.ImmutableList;
-
 public class WholeRowIteratorTest {
 
   @Test(expected = IOException.class)
   public void testBadDecodeRow() throws IOException {
     Key k = new Key(new Text("r1"), new Text("cf1234567890"));
     Value v = new Value("v1".getBytes());
-    Value encoded = WholeRowIterator.encodeRow(ImmutableList.of(k), ImmutableList.of(v));
+    Value encoded = WholeRowIterator.encodeRow(List.of(k), List.of(v));
     encoded.set(Arrays.copyOfRange(encoded.get(), 0, 10)); // truncate to 10 bytes only
     WholeRowIterator.decodeRow(k, encoded);
   }
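Editor's note: for readers unfamiliar with the round trip this test deliberately breaks, WholeRowIterator packs an entire row's key/value pairs into a single Value, and decodeRow unpacks it. A sketch of the happy path, using the same calls as the test above:

```java
import java.io.IOException;
import java.util.List;
import java.util.SortedMap;

import org.apache.accumulo.core.data.Key;
import org.apache.accumulo.core.data.Value;
import org.apache.accumulo.core.iterators.user.WholeRowIterator;
import org.apache.hadoop.io.Text;

public class WholeRowRoundTrip {
  public static void main(String[] args) throws IOException {
    Key k = new Key(new Text("r1"), new Text("cf"));
    Value v = new Value("v1".getBytes());

    // Pack one row's columns into a single Value...
    Value encoded = WholeRowIterator.encodeRow(List.of(k), List.of(v));

    // ...and unpack it again; truncating `encoded` (as the test does)
    // makes this throw IOException instead.
    SortedMap<Key,Value> row = WholeRowIterator.decodeRow(k, encoded);
    System.out.println(row.size()); // 1
  }
}
```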
diff --git a/server/gc/src/test/java/org/apache/accumulo/gc/SimpleGarbageCollectorOptsTest.java b/core/src/test/java/org/apache/accumulo/core/metadata/schema/DeleteMetadataTest.java
similarity index 62%
rename from server/gc/src/test/java/org/apache/accumulo/gc/SimpleGarbageCollectorOptsTest.java
rename to core/src/test/java/org/apache/accumulo/core/metadata/schema/DeleteMetadataTest.java
index cec83ca..4890ab4 100644
--- a/server/gc/src/test/java/org/apache/accumulo/gc/SimpleGarbageCollectorOptsTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/metadata/schema/DeleteMetadataTest.java
@@ -14,25 +14,22 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.accumulo.gc;
+package org.apache.accumulo.core.metadata.schema;
 
-import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertEquals;
 
-import org.apache.accumulo.gc.SimpleGarbageCollector.GCOpts;
-import org.junit.Before;
 import org.junit.Test;
 
-public class SimpleGarbageCollectorOptsTest {
-  private GCOpts opts;
-
-  @Before
-  public void setUp() {
-    opts = new GCOpts();
-  }
+public class DeleteMetadataTest {
 
   @Test
-  public void testIt() {
-    assertFalse(opts.verbose);
-    assertFalse(opts.safeMode);
+  public void encodeRowTest() {
+    String path = "/dir/testpath";
+    assertEquals(path,
+        MetadataSchema.DeletesSection.decodeRow(MetadataSchema.DeletesSection.encodeRow(path)));
+    path = "hdfs://localhost:8020/dir/r+/1_table/f$%#";
+    assertEquals(path,
+        MetadataSchema.DeletesSection.decodeRow(MetadataSchema.DeletesSection.encodeRow(path)));
+
   }
 }
diff --git a/core/src/test/java/org/apache/accumulo/core/metadata/schema/MetadataTimeTest.java b/core/src/test/java/org/apache/accumulo/core/metadata/schema/MetadataTimeTest.java
new file mode 100644
index 0000000..f71aaf2
--- /dev/null
+++ b/core/src/test/java/org/apache/accumulo/core/metadata/schema/MetadataTimeTest.java
@@ -0,0 +1,132 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.core.metadata.schema;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNotEquals;
+import static org.junit.Assert.assertTrue;
+
+import org.apache.accumulo.core.client.admin.TimeType;
+import org.junit.Test;
+
+public class MetadataTimeTest {
+
+  private static final MetadataTime m1234 = new MetadataTime(1234, TimeType.MILLIS);
+  private static final MetadataTime m5678 = new MetadataTime(5678, TimeType.MILLIS);
+  private static final MetadataTime l1234 = new MetadataTime(1234, TimeType.LOGICAL);
+  private static final MetadataTime l5678 = new MetadataTime(5678, TimeType.LOGICAL);
+
+  @Test(expected = IllegalArgumentException.class)
+  public void testGetInstance_InvalidType() {
+    MetadataTime.parse("X1234");
+  }
+
+  @Test(expected = IllegalArgumentException.class)
+  public void testGetInstance_Logical_ParseFailure() {
+    MetadataTime.parse("LABCD");
+  }
+
+  @Test(expected = IllegalArgumentException.class)
+  public void testGetInstance_Millis_ParseFailure() {
+    MetadataTime.parse("MABCD");
+  }
+
+  @Test
+  public void testGetInstance_Millis() {
+    assertEquals(1234, m1234.getTime());
+    assertEquals(TimeType.MILLIS, m1234.getType());
+  }
+
+  @Test
+  public void testGetInstance_Logical() {
+    assertEquals(1234, l1234.getTime());
+    assertEquals(TimeType.LOGICAL, l1234.getType());
+
+  }
+
+  @Test
+  public void testEquality() {
+    assertEquals(m1234, new MetadataTime(1234, TimeType.MILLIS));
+    assertNotEquals(m1234, l1234);
+    assertNotEquals(l1234, l5678);
+  }
+
+  @Test
+  public void testValueOfM() {
+    assertEquals(TimeType.MILLIS, MetadataTime.getType('M'));
+  }
+
+  @Test
+  public void testValueOfL() {
+    assertEquals(TimeType.LOGICAL, MetadataTime.getType('L'));
+  }
+
+  @Test(expected = IllegalArgumentException.class)
+  public void testValueOfOtherChar() {
+    MetadataTime.getType('x');
+  }
+
+  @Test
+  public void testGetCodeForTimeType() {
+    assertEquals('M', MetadataTime.getCode(TimeType.MILLIS));
+    assertEquals('L', MetadataTime.getCode(TimeType.LOGICAL));
+  }
+
+  @Test
+  public void testGetCodeForMillis() {
+    assertEquals('M', m1234.getCode());
+  }
+
+  @Test
+  public void testGetCodeForLogical() {
+    assertEquals('L', l1234.getCode());
+  }
+
+  @Test
+  public void testEncode() {
+    assertEquals("M21", new MetadataTime(21, TimeType.MILLIS).encode());
+    assertEquals("L45678", new MetadataTime(45678, TimeType.LOGICAL).encode());
+  }
+
+  @Test(expected = IllegalArgumentException.class)
+  public void testCompareTypesDiffer1() {
+    m1234.compareTo(l1234);
+  }
+
+  @Test(expected = IllegalArgumentException.class)
+  public void testCompareTypesDiffer2() {
+    l1234.compareTo(m1234);
+  }
+
+  @Test
+  public void testCompareSame() {
+    assertTrue(m1234.compareTo(m1234) == 0);
+    assertTrue(l1234.compareTo(l1234) == 0);
+  }
+
+  @Test
+  public void testCompare1() {
+    assertTrue(m1234.compareTo(m5678) < 0);
+    assertTrue(l1234.compareTo(l5678) < 0);
+  }
+
+  @Test
+  public void testCompare2() {
+    assertTrue(m5678.compareTo(m1234) > 0);
+    assertTrue(l5678.compareTo(l1234) > 0);
+  }
+}
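Editor's note: taken together, the tests above pin down MetadataTime's round-trip format: a one-character type code ('M' for MILLIS, 'L' for LOGICAL) followed by the time value. A usage sketch inferred from those assertions (not separate API documentation):

```java
// Inferred from the MetadataTimeTest assertions above; assumes accumulo-core.
import org.apache.accumulo.core.client.admin.TimeType;
import org.apache.accumulo.core.metadata.schema.MetadataTime;

public class MetadataTimeDemo {
  public static void main(String[] args) {
    // encode() prepends the type code to the time value...
    String encoded = new MetadataTime(21, TimeType.MILLIS).encode(); // "M21"

    // ...and parse() reverses it, rejecting unknown codes or bad numbers
    // with IllegalArgumentException (e.g. "X1234", "LABCD").
    MetadataTime roundTrip = MetadataTime.parse(encoded);
    System.out.println(roundTrip.getTime()); // 21
    System.out.println(roundTrip.getType()); // MILLIS

    // compareTo is only defined within a type; mixing MILLIS and LOGICAL
    // throws IllegalArgumentException, per testCompareTypesDiffer1/2.
  }
}
```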
diff --git a/core/src/test/java/org/apache/accumulo/core/metadata/schema/SortSkewTest.java b/core/src/test/java/org/apache/accumulo/core/metadata/schema/SortSkewTest.java
new file mode 100644
index 0000000..d71163d
--- /dev/null
+++ b/core/src/test/java/org/apache/accumulo/core/metadata/schema/SortSkewTest.java
@@ -0,0 +1,52 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.core.metadata.schema;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNotEquals;
+
+import org.junit.Test;
+
+public class SortSkewTest {
+  private static final String shortpath = "1";
+  private static final String longpath =
+      "/verylongpath/12345679xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxiiiiiiiiiiiiiiiiii/zzzzzzzzzzzzzzzzzzzzz"
+          + "aaaaaaaaaaaaabbbbbbbbbbbbbbbbbbbccccccccccccccccccccccccccxxxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyzzzzzzzzzzzzzzzz";;
+  // expected values previously generated by SortSkew.getCode() for the paths above
+  private static final String shortcode = "9416ac93";
+  private static final String longcode = "b9ddf266";
+
+  @Test
+  public void verifyCodeSize() {
+    int expectedLength = SortSkew.SORTSKEW_LENGTH;
+    assertEquals(expectedLength, SortSkew.getCode(shortpath).length());
+    assertEquals(expectedLength, SortSkew.getCode(longpath).length());
+  }
+
+  @Test
+  public void verifySame() {
+    assertEquals(SortSkew.getCode("123"), SortSkew.getCode("123"));
+    assertNotEquals(SortSkew.getCode("123"), SortSkew.getCode("321"));
+  }
+
+  @Test
+  public void verifyStable() {
+    assertEquals(shortcode, SortSkew.getCode(shortpath));
+    assertEquals(longcode, SortSkew.getCode(longpath));
+  }
+
+}
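Editor's note: SortSkew.getCode evidently maps an arbitrary-length path to a fixed-width, stable hex code (SORTSKEW_LENGTH characters, 8 in the expected values above) so that rows prefixed with it spread across the sorted keyspace instead of clustering. The test does not reveal which hash is used internally, so the sketch below illustrates the technique with CRC32; it is an assumption for demonstration, not SortSkew's actual implementation.

```java
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;

public class SortSkewSketch {
  // Stable: the same input always yields the same 8-hex-char code, across JVMs.
  static String code(String path) {
    CRC32 crc = new CRC32();
    crc.update(path.getBytes(StandardCharsets.UTF_8));
    return String.format("%08x", crc.getValue());
  }

  public static void main(String[] args) {
    // Prefixing rows with code(path) scatters lexically adjacent paths
    // across the keyspace, avoiding a hot spot at one tablet.
    System.out.println(code("/dir/testpath") + "/dir/testpath");
  }
}
```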
diff --git a/core/src/test/java/org/apache/accumulo/core/metadata/schema/TabletMetadataTest.java b/core/src/test/java/org/apache/accumulo/core/metadata/schema/TabletMetadataTest.java
index 9232f0c..5d94876 100644
--- a/core/src/test/java/org/apache/accumulo/core/metadata/schema/TabletMetadataTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/metadata/schema/TabletMetadataTest.java
@@ -27,6 +27,8 @@
 import static org.junit.Assert.assertTrue;
 
 import java.util.EnumSet;
+import java.util.Map;
+import java.util.Set;
 import java.util.SortedMap;
 import java.util.TreeMap;
 
@@ -42,16 +44,14 @@
 import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.FutureLocationColumnFamily;
 import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.LastLocationColumnFamily;
 import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.ScanFileColumnFamily;
-import org.apache.accumulo.core.metadata.schema.TabletMetadata.FetchedColumns;
+import org.apache.accumulo.core.metadata.schema.TabletMetadata.ColumnType;
 import org.apache.accumulo.core.metadata.schema.TabletMetadata.LocationType;
 import org.apache.accumulo.core.tabletserver.log.LogEntry;
 import org.apache.accumulo.core.util.HostAndPort;
+import org.apache.accumulo.fate.FateTxId;
 import org.apache.hadoop.io.Text;
 import org.junit.Test;
 
-import com.google.common.collect.ImmutableMap;
-import com.google.common.collect.ImmutableSet;
-
 public class TabletMetadataTest {
 
   @Test
@@ -65,8 +65,8 @@
     FLUSH_COLUMN.put(mutation, new Value("6"));
     TIME_COLUMN.put(mutation, new Value("M123456789"));
 
-    mutation.at().family(BulkFileColumnFamily.NAME).qualifier("bf1").put("");
-    mutation.at().family(BulkFileColumnFamily.NAME).qualifier("bf2").put("");
+    mutation.at().family(BulkFileColumnFamily.NAME).qualifier("bf1").put(FateTxId.formatTid(56));
+    mutation.at().family(BulkFileColumnFamily.NAME).qualifier("bf2").put(FateTxId.formatTid(59));
 
     mutation.at().family(ClonedColumnFamily.NAME).qualifier("").put("OK");
 
@@ -92,18 +92,18 @@
     SortedMap<Key,Value> rowMap = toRowMap(mutation);
 
     TabletMetadata tm = TabletMetadata.convertRow(rowMap.entrySet().iterator(),
-        EnumSet.allOf(FetchedColumns.class), true);
+        EnumSet.allOf(ColumnType.class), true);
 
     assertEquals("OK", tm.getCloned());
     assertEquals(5L, tm.getCompactId().getAsLong());
     assertEquals("/a/t/6/a/", tm.getDir());
     assertEquals(extent.getEndRow(), tm.getEndRow());
     assertEquals(extent, tm.getExtent());
-    assertEquals(ImmutableSet.of("df1", "df2"), ImmutableSet.copyOf(tm.getFiles()));
-    assertEquals(ImmutableMap.of("df1", dfv1, "df2", dfv2), tm.getFilesMap());
+    assertEquals(Set.of("df1", "df2"), Set.copyOf(tm.getFiles()));
+    assertEquals(Map.of("df1", dfv1, "df2", dfv2), tm.getFilesMap());
     assertEquals(6L, tm.getFlushId().getAsLong());
     assertEquals(rowMap, tm.getKeyValues());
-    assertEquals(ImmutableSet.of("bf1", "bf2"), ImmutableSet.copyOf(tm.getLoaded()));
+    assertEquals(Map.of("bf1", 56L, "bf2", 59L), tm.getLoaded());
     assertEquals(HostAndPort.fromParts("server1", 8555), tm.getLocation().getHostAndPort());
     assertEquals("s001", tm.getLocation().getSession());
     assertEquals(LocationType.CURRENT, tm.getLocation().getType());
@@ -111,14 +111,13 @@
     assertEquals(HostAndPort.fromParts("server2", 8555), tm.getLast().getHostAndPort());
     assertEquals("s000", tm.getLast().getSession());
     assertEquals(LocationType.LAST, tm.getLast().getType());
-    assertEquals(
-        ImmutableSet.of(le1.getName() + " " + le1.timestamp, le2.getName() + " " + le2.timestamp),
+    assertEquals(Set.of(le1.getName() + " " + le1.timestamp, le2.getName() + " " + le2.timestamp),
         tm.getLogs().stream().map(le -> le.getName() + " " + le.timestamp).collect(toSet()));
     assertEquals(extent.getPrevEndRow(), tm.getPrevEndRow());
     assertEquals(extent.getTableId(), tm.getTableId());
     assertTrue(tm.sawPrevEndRow());
-    assertEquals("M123456789", tm.getTime());
-    assertEquals(ImmutableSet.of("sf1", "sf2"), ImmutableSet.copyOf(tm.getScans()));
+    assertEquals("M123456789", tm.getTime().encode());
+    assertEquals(Set.of("sf1", "sf2"), Set.copyOf(tm.getScans()));
   }
 
   @Test
@@ -131,7 +130,7 @@
     SortedMap<Key,Value> rowMap = toRowMap(mutation);
 
     TabletMetadata tm = TabletMetadata.convertRow(rowMap.entrySet().iterator(),
-        EnumSet.allOf(FetchedColumns.class), false);
+        EnumSet.allOf(ColumnType.class), false);
 
     assertEquals(extent, tm.getExtent());
     assertEquals(HostAndPort.fromParts("server1", 8555), tm.getLocation().getHostAndPort());
@@ -150,8 +149,7 @@
 
     SortedMap<Key,Value> rowMap = toRowMap(mutation);
 
-    TabletMetadata.convertRow(rowMap.entrySet().iterator(), EnumSet.allOf(FetchedColumns.class),
-        false);
+    TabletMetadata.convertRow(rowMap.entrySet().iterator(), EnumSet.allOf(ColumnType.class), false);
   }
 
   private SortedMap<Key,Value> toRowMap(Mutation mutation) {
diff --git a/core/src/test/java/org/apache/accumulo/core/spi/scan/HintScanPrioritizerTest.java b/core/src/test/java/org/apache/accumulo/core/spi/scan/HintScanPrioritizerTest.java
index 48da852..2b8a1ce 100644
--- a/core/src/test/java/org/apache/accumulo/core/spi/scan/HintScanPrioritizerTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/spi/scan/HintScanPrioritizerTest.java
@@ -28,8 +28,6 @@
 import org.apache.accumulo.core.spi.scan.ScanInfo.Type;
 import org.junit.Test;
 
-import com.google.common.collect.ImmutableMap;
-
 public class HintScanPrioritizerTest {
   @Test
   public void testSort() {
@@ -57,8 +55,8 @@
 
           @Override
           public Map<String,String> getOptions() {
-            return ImmutableMap.of("priority.isbn", "10", "priority.background", "30",
-                "default_priority", "20");
+            return Map.of("priority.isbn", "10", "priority.background", "30", "default_priority",
+                "20");
           }
 
           @Override
diff --git a/core/src/test/java/org/apache/accumulo/core/spi/scan/SimpleScanDispatcherTest.java b/core/src/test/java/org/apache/accumulo/core/spi/scan/SimpleScanDispatcherTest.java
index 432e6dd..441ffd0 100644
--- a/core/src/test/java/org/apache/accumulo/core/spi/scan/SimpleScanDispatcherTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/spi/scan/SimpleScanDispatcherTest.java
@@ -30,8 +30,6 @@
 import org.apache.accumulo.core.spi.scan.ScanInfo.Type;
 import org.junit.Test;
 
-import com.google.common.collect.ImmutableMap;
-
 public class SimpleScanDispatcherTest {
   @Test
   public void testProps() {
@@ -113,22 +111,21 @@
     String dname = SimpleScanDispatcher.DEFAULT_SCAN_EXECUTOR_NAME;
 
     runTest(Collections.emptyMap(), dname, dname);
-    runTest(ImmutableMap.of("executor", "E1"), "E1", "E1");
-    runTest(ImmutableMap.of("single_executor", "E2"), "E2", dname);
-    runTest(ImmutableMap.of("multi_executor", "E3"), dname, "E3");
-    runTest(ImmutableMap.of("executor", "E1", "single_executor", "E2"), "E2", "E1");
-    runTest(ImmutableMap.of("executor", "E1", "multi_executor", "E3"), "E1", "E3");
-    runTest(ImmutableMap.of("single_executor", "E2", "multi_executor", "E3"), "E2", "E3");
-    runTest(ImmutableMap.of("executor", "E1", "single_executor", "E2", "multi_executor", "E3"),
-        "E2", "E3");
+    runTest(Map.of("executor", "E1"), "E1", "E1");
+    runTest(Map.of("single_executor", "E2"), "E2", dname);
+    runTest(Map.of("multi_executor", "E3"), dname, "E3");
+    runTest(Map.of("executor", "E1", "single_executor", "E2"), "E2", "E1");
+    runTest(Map.of("executor", "E1", "multi_executor", "E3"), "E1", "E3");
+    runTest(Map.of("single_executor", "E2", "multi_executor", "E3"), "E2", "E3");
+    runTest(Map.of("executor", "E1", "single_executor", "E2", "multi_executor", "E3"), "E2", "E3");
   }
 
   @Test
   public void testHints() {
-    runTest(ImmutableMap.of("executor", "E1"), ImmutableMap.of("scan_type", "quick"), "E1", "E1");
-    runTest(ImmutableMap.of("executor", "E1", "executor.quick", "E2"),
-        ImmutableMap.of("scan_type", "quick"), "E2", "E2");
-    runTest(ImmutableMap.of("executor", "E1", "executor.quick", "E2", "executor.slow", "E3"),
-        ImmutableMap.of("scan_type", "slow"), "E3", "E3");
+    runTest(Map.of("executor", "E1"), Map.of("scan_type", "quick"), "E1", "E1");
+    runTest(Map.of("executor", "E1", "executor.quick", "E2"), Map.of("scan_type", "quick"), "E2",
+        "E2");
+    runTest(Map.of("executor", "E1", "executor.quick", "E2", "executor.slow", "E3"),
+        Map.of("scan_type", "slow"), "E3", "E3");
   }
 }
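
Same migration pattern for maps: `Map.of` pairs its varargs positionally and tops out at ten entries, after which `Map.ofEntries` is the replacement. A small illustrative sketch (keys and values are hypothetical):

```java
import java.util.Map;

public class MapFactoryDemo {
  public static void main(String[] args) {
    // Varargs form pairs keys and values positionally, up to 10 entries
    Map<String,String> opts =
        Map.of("executor", "E1", "single_executor", "E2", "multi_executor", "E3");
    System.out.println(opts.get("single_executor")); // E2

    // Beyond 10 entries, switch to Map.ofEntries
    Map<String,String> priorities = Map.ofEntries(
        Map.entry("priority.isbn", "10"),
        Map.entry("priority.background", "30"),
        Map.entry("default_priority", "20"));
    System.out.println(priorities.size()); // 3

    // Like Set.of, these maps reject null keys/values and duplicate keys,
    // and the returned map throws UnsupportedOperationException on put()
  }
}
```
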
diff --git a/core/src/test/java/org/apache/accumulo/core/spi/scan/TestScanInfo.java b/core/src/test/java/org/apache/accumulo/core/spi/scan/TestScanInfo.java
index 4e14099..68f9d05 100644
--- a/core/src/test/java/org/apache/accumulo/core/spi/scan/TestScanInfo.java
+++ b/core/src/test/java/org/apache/accumulo/core/spi/scan/TestScanInfo.java
@@ -29,8 +29,6 @@
 import org.apache.accumulo.core.spi.common.Stats;
 import org.apache.accumulo.core.util.Stat;
 
-import com.google.common.collect.ImmutableMap;
-
 public class TestScanInfo implements ScanInfo {
 
   String testId;
@@ -59,7 +57,7 @@
   }
 
   TestScanInfo setExecutionHints(String k, String v) {
-    this.executionHints = ImmutableMap.of(k, v);
+    this.executionHints = Map.of(k, v);
     return this;
   }
 
diff --git a/hadoop-mapreduce/pom.xml b/hadoop-mapreduce/pom.xml
index ac469a9..462c68c 100644
--- a/hadoop-mapreduce/pom.xml
+++ b/hadoop-mapreduce/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.accumulo</groupId>
     <artifactId>accumulo-project</artifactId>
-    <version>2.0.1-SNAPSHOT</version>
+    <version>2.1.0-SNAPSHOT</version>
   </parent>
   <artifactId>accumulo-hadoop-mapreduce</artifactId>
   <name>Apache Accumulo Hadoop MapReduce</name>
diff --git a/hadoop-mapreduce/src/main/java/org/apache/accumulo/hadoop/mapreduce/partition/RangePartitioner.java b/hadoop-mapreduce/src/main/java/org/apache/accumulo/hadoop/mapreduce/partition/RangePartitioner.java
index f0ec055..caf15df 100644
--- a/hadoop-mapreduce/src/main/java/org/apache/accumulo/hadoop/mapreduce/partition/RangePartitioner.java
+++ b/hadoop-mapreduce/src/main/java/org/apache/accumulo/hadoop/mapreduce/partition/RangePartitioner.java
@@ -86,7 +86,7 @@
       try (
           InputStream inputStream =
               DistributedCacheHelper.openCachedFile(cutFileName, CUTFILE_KEY, conf);
-          Scanner in = new Scanner(inputStream, UTF_8.name())) {
+          Scanner in = new Scanner(inputStream, UTF_8)) {
         while (in.hasNextLine()) {
           cutPoints.add(new Text(Base64.getDecoder().decode(in.nextLine())));
         }
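
The `Scanner` change above moves from the charset-name overload to the `Charset` overload added in Java 10, which turns a misspelled charset name from a runtime `IllegalArgumentException` into something the type system rules out. A self-contained sketch:

```java
import static java.nio.charset.StandardCharsets.UTF_8;

import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.util.Scanner;

public class ScannerCharsetDemo {
  public static void main(String[] args) {
    InputStream in = new ByteArrayInputStream("line1\nline2\n".getBytes(UTF_8));
    // Java 10+ overload: takes a Charset directly instead of a charset name
    try (Scanner scanner = new Scanner(in, UTF_8)) {
      while (scanner.hasNextLine()) {
        System.out.println(scanner.nextLine());
      }
    }
  }
}
```
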
diff --git a/hadoop-mapreduce/src/main/java/org/apache/accumulo/hadoopImpl/mapreduce/InputFormatBuilderImpl.java b/hadoop-mapreduce/src/main/java/org/apache/accumulo/hadoopImpl/mapreduce/InputFormatBuilderImpl.java
index dfd249c..46bba34 100644
--- a/hadoop-mapreduce/src/main/java/org/apache/accumulo/hadoopImpl/mapreduce/InputFormatBuilderImpl.java
+++ b/hadoop-mapreduce/src/main/java/org/apache/accumulo/hadoopImpl/mapreduce/InputFormatBuilderImpl.java
@@ -39,9 +39,6 @@
 import org.apache.hadoop.mapred.JobConf;
 import org.apache.hadoop.mapreduce.Job;
 
-import com.google.common.collect.ImmutableList;
-import com.google.common.collect.ImmutableMap;
-
 public class InputFormatBuilderImpl<T>
     implements InputFormatBuilder, InputFormatBuilder.ClientParams<T>,
     InputFormatBuilder.TableParams<T>, InputFormatBuilder.InputFormatOptions<T> {
@@ -73,8 +70,9 @@
   @Override
   public InputFormatBuilder.InputFormatOptions<T> table(String tableName) {
     this.currentTable = Objects.requireNonNull(tableName, "Table name must not be null");
-    if (tableConfigMap.isEmpty())
+    if (tableConfigMap.isEmpty()) {
       tableConfigMap = new LinkedHashMap<>();
+    }
     tableConfigMap.put(currentTable, new InputTableConfig());
     return this;
   }
@@ -95,9 +93,10 @@
   @Override
   public InputFormatBuilder.InputFormatOptions<T> ranges(Collection<Range> ranges) {
     List<Range> newRanges =
-        ImmutableList.copyOf(Objects.requireNonNull(ranges, "Collection of ranges is null"));
-    if (newRanges.size() == 0)
+        List.copyOf(Objects.requireNonNull(ranges, "Collection of ranges is null"));
+    if (newRanges.size() == 0) {
       throw new IllegalArgumentException("Specified collection of ranges is empty.");
+    }
     tableConfigMap.get(currentTable).setRanges(newRanges);
     return this;
   }
@@ -105,10 +104,11 @@
   @Override
   public InputFormatBuilder.InputFormatOptions<T>
       fetchColumns(Collection<IteratorSetting.Column> fetchColumns) {
-    Collection<IteratorSetting.Column> newFetchColumns = ImmutableList
-        .copyOf(Objects.requireNonNull(fetchColumns, "Collection of fetch columns is null"));
-    if (newFetchColumns.size() == 0)
+    Collection<IteratorSetting.Column> newFetchColumns =
+        List.copyOf(Objects.requireNonNull(fetchColumns, "Collection of fetch columns is null"));
+    if (newFetchColumns.size() == 0) {
       throw new IllegalArgumentException("Specified collection of fetch columns is empty.");
+    }
     tableConfigMap.get(currentTable).fetchColumns(newFetchColumns);
     return this;
   }
@@ -123,10 +123,11 @@
 
   @Override
   public InputFormatBuilder.InputFormatOptions<T> executionHints(Map<String,String> hints) {
-    Map<String,String> newHints = ImmutableMap
-        .copyOf(Objects.requireNonNull(hints, "Map of execution hints must not be null."));
-    if (newHints.size() == 0)
+    Map<String,String> newHints =
+        Map.copyOf(Objects.requireNonNull(hints, "Map of execution hints must not be null."));
+    if (newHints.size() == 0) {
       throw new IllegalArgumentException("Specified map of execution hints is empty.");
+    }
     tableConfigMap.get(currentTable).setExecutionHints(newHints);
     return this;
   }
@@ -165,8 +166,9 @@
   @Override
   public InputFormatOptions<T> batchScan(boolean value) {
     tableConfigMap.get(currentTable).setUseBatchScan(value);
-    if (value)
+    if (value) {
       tableConfigMap.get(currentTable).setAutoAdjustRanges(true);
+    }
     return this;
   }
 
@@ -207,19 +209,25 @@
       }
       InputConfigurator.setScanAuthorizations(callingClass, conf, config.getScanAuths().get());
       // all optional values
-      if (config.getContext().isPresent())
+      if (config.getContext().isPresent()) {
         InputConfigurator.setClassLoaderContext(callingClass, conf, config.getContext().get());
-      if (config.getRanges().size() > 0)
+      }
+      if (config.getRanges().size() > 0) {
         InputConfigurator.setRanges(callingClass, conf, config.getRanges());
-      if (config.getIterators().size() > 0)
+      }
+      if (config.getIterators().size() > 0) {
         InputConfigurator.writeIteratorsToConf(callingClass, conf, config.getIterators());
-      if (config.getFetchedColumns().size() > 0)
+      }
+      if (config.getFetchedColumns().size() > 0) {
         InputConfigurator.fetchColumns(callingClass, conf, config.getFetchedColumns());
-      if (config.getSamplerConfiguration() != null)
+      }
+      if (config.getSamplerConfiguration() != null) {
         InputConfigurator.setSamplerConfiguration(callingClass, conf,
             config.getSamplerConfiguration());
-      if (config.getExecutionHints().size() > 0)
+      }
+      if (config.getExecutionHints().size() > 0) {
         InputConfigurator.setExecutionHints(callingClass, conf, config.getExecutionHints());
+      }
       InputConfigurator.setAutoAdjustRanges(callingClass, conf, config.shouldAutoAdjustRanges());
       InputConfigurator.setScanIsolation(callingClass, conf, config.shouldUseIsolatedScanners());
       InputConfigurator.setLocalIterators(callingClass, conf, config.shouldUseLocalIterators());
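
The builder changes above all follow one idiom: `List.copyOf`/`Map.copyOf` null-check and defensively snapshot the caller's collection in a single call, then an emptiness check rejects useless input. A hypothetical reduced builder showing the same validate-and-copy shape (class and field names are illustrative, not Accumulo's):

```java
import java.util.List;
import java.util.Map;
import java.util.Objects;

public class RangesBuilder {
  private List<String> ranges = List.of();
  private Map<String,String> hints = Map.of();

  public RangesBuilder ranges(List<String> input) {
    // List.copyOf both rejects null (NPE) and snapshots the caller's list
    List<String> copy = List.copyOf(Objects.requireNonNull(input, "ranges is null"));
    if (copy.isEmpty()) {
      throw new IllegalArgumentException("Specified collection of ranges is empty.");
    }
    this.ranges = copy;
    return this;
  }

  public RangesBuilder hints(Map<String,String> input) {
    Map<String,String> copy = Map.copyOf(Objects.requireNonNull(input, "hints is null"));
    if (copy.isEmpty()) {
      throw new IllegalArgumentException("Specified map of execution hints is empty.");
    }
    this.hints = copy;
    return this;
  }

  public static void main(String[] args) {
    RangesBuilder b = new RangesBuilder().ranges(List.of("a", "b")).hints(Map.of("k", "v"));
    System.out.println(b.ranges + " " + b.hints);
  }
}
```
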
diff --git a/hadoop-mapreduce/src/main/java/org/apache/accumulo/hadoopImpl/mapreduce/lib/ConfiguratorBase.java b/hadoop-mapreduce/src/main/java/org/apache/accumulo/hadoopImpl/mapreduce/lib/ConfiguratorBase.java
index ad6ad58..92461b0 100644
--- a/hadoop-mapreduce/src/main/java/org/apache/accumulo/hadoopImpl/mapreduce/lib/ConfiguratorBase.java
+++ b/hadoop-mapreduce/src/main/java/org/apache/accumulo/hadoopImpl/mapreduce/lib/ConfiguratorBase.java
@@ -109,7 +109,7 @@
           cachedClientPropsFileName(implementingClass), conf)) {
 
         StringBuilder sb = new StringBuilder();
-        try (Scanner scanner = new Scanner(inputStream, UTF_8.name())) {
+        try (Scanner scanner = new Scanner(inputStream, UTF_8)) {
           while (scanner.hasNextLine()) {
             sb.append(scanner.nextLine() + "\n");
           }
diff --git a/hadoop-mapreduce/src/test/java/org/apache/accumulo/hadoopImpl/mapred/RangeInputSplitTest.java b/hadoop-mapreduce/src/test/java/org/apache/accumulo/hadoopImpl/mapred/RangeInputSplitTest.java
index 266c8a3..ee46e5d 100644
--- a/hadoop-mapreduce/src/test/java/org/apache/accumulo/hadoopImpl/mapred/RangeInputSplitTest.java
+++ b/hadoop-mapreduce/src/test/java/org/apache/accumulo/hadoopImpl/mapred/RangeInputSplitTest.java
@@ -28,6 +28,7 @@
 import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.HashSet;
+import java.util.Map;
 import java.util.Set;
 
 import org.apache.accumulo.core.client.IteratorSetting;
@@ -38,8 +39,6 @@
 import org.apache.hadoop.io.Text;
 import org.junit.Test;
 
-import com.google.common.collect.ImmutableMap;
-
 public class RangeInputSplitTest {
 
   @Test
@@ -86,7 +85,7 @@
     split.setUsesLocalIterators(true);
     split.setFetchedColumns(fetchedColumns);
     split.setIterators(iterators);
-    split.setExecutionHints(ImmutableMap.of("priority", "9"));
+    split.setExecutionHints(Map.of("priority", "9"));
 
     ByteArrayOutputStream baos = new ByteArrayOutputStream();
     DataOutputStream dos = new DataOutputStream(baos);
diff --git a/hadoop-mapreduce/src/test/java/org/apache/accumulo/hadoopImpl/mapreduce/InputTableConfigTest.java b/hadoop-mapreduce/src/test/java/org/apache/accumulo/hadoopImpl/mapreduce/InputTableConfigTest.java
index 74ab698..fb5f125 100644
--- a/hadoop-mapreduce/src/test/java/org/apache/accumulo/hadoopImpl/mapreduce/InputTableConfigTest.java
+++ b/hadoop-mapreduce/src/test/java/org/apache/accumulo/hadoopImpl/mapreduce/InputTableConfigTest.java
@@ -26,6 +26,7 @@
 import java.util.ArrayList;
 import java.util.HashSet;
 import java.util.List;
+import java.util.Map;
 import java.util.Set;
 
 import org.apache.accumulo.core.client.IteratorSetting;
@@ -35,8 +36,6 @@
 import org.junit.Before;
 import org.junit.Test;
 
-import com.google.common.collect.ImmutableMap;
-
 public class InputTableConfigTest {
 
   private InputTableConfig tableQueryConfig;
@@ -114,9 +113,9 @@
 
   @Test
   public void testExecutionHints() throws IOException {
-    tableQueryConfig.setExecutionHints(ImmutableMap.of("priority", "9"));
+    tableQueryConfig.setExecutionHints(Map.of("priority", "9"));
     InputTableConfig actualConfig = deserialize(serialize(tableQueryConfig));
-    assertEquals(ImmutableMap.of("priority", "9"), actualConfig.getExecutionHints());
+    assertEquals(Map.of("priority", "9"), actualConfig.getExecutionHints());
   }
 
   private byte[] serialize(InputTableConfig tableQueryConfig) throws IOException {
diff --git a/hadoop-mapreduce/src/test/java/org/apache/accumulo/hadoopImpl/mapreduce/RangeInputSplitTest.java b/hadoop-mapreduce/src/test/java/org/apache/accumulo/hadoopImpl/mapreduce/RangeInputSplitTest.java
index f9913ad..86b15e4 100644
--- a/hadoop-mapreduce/src/test/java/org/apache/accumulo/hadoopImpl/mapreduce/RangeInputSplitTest.java
+++ b/hadoop-mapreduce/src/test/java/org/apache/accumulo/hadoopImpl/mapreduce/RangeInputSplitTest.java
@@ -28,6 +28,7 @@
 import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.HashSet;
+import java.util.Map;
 import java.util.Set;
 
 import org.apache.accumulo.core.client.IteratorSetting;
@@ -38,8 +39,6 @@
 import org.apache.hadoop.io.Text;
 import org.junit.Test;
 
-import com.google.common.collect.ImmutableMap;
-
 public class RangeInputSplitTest {
 
   @Test
@@ -89,7 +88,7 @@
     split.setUsesLocalIterators(true);
     split.setFetchedColumns(fetchedColumns);
     split.setIterators(iterators);
-    split.setExecutionHints(ImmutableMap.of("priority", "9"));
+    split.setExecutionHints(Map.of("priority", "9"));
 
     ByteArrayOutputStream baos = new ByteArrayOutputStream();
     DataOutputStream dos = new DataOutputStream(baos);
diff --git a/iterator-test-harness/pom.xml b/iterator-test-harness/pom.xml
index 1009652..8380f49 100644
--- a/iterator-test-harness/pom.xml
+++ b/iterator-test-harness/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.accumulo</groupId>
     <artifactId>accumulo-project</artifactId>
-    <version>2.0.1-SNAPSHOT</version>
+    <version>2.1.0-SNAPSHOT</version>
   </parent>
   <artifactId>accumulo-iterator-test-harness</artifactId>
   <name>Apache Accumulo Iterator Test Harness</name>
diff --git a/iterator-test-harness/src/main/java/org/apache/accumulo/iteratortest/IteratorTestCaseFinder.java b/iterator-test-harness/src/main/java/org/apache/accumulo/iteratortest/IteratorTestCaseFinder.java
index 537197a..3ef3717 100644
--- a/iterator-test-harness/src/main/java/org/apache/accumulo/iteratortest/IteratorTestCaseFinder.java
+++ b/iterator-test-harness/src/main/java/org/apache/accumulo/iteratortest/IteratorTestCaseFinder.java
@@ -20,12 +20,12 @@
 import java.lang.reflect.Modifier;
 import java.util.ArrayList;
 import java.util.List;
+import java.util.Set;
 
 import org.apache.accumulo.iteratortest.testcases.IteratorTestCase;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-import com.google.common.collect.ImmutableSet;
 import com.google.common.reflect.ClassPath;
 import com.google.common.reflect.ClassPath.ClassInfo;
 
@@ -48,8 +48,7 @@
     } catch (IOException e) {
       throw new RuntimeException(e);
     }
-    ImmutableSet<ClassInfo> classes =
-        cp.getTopLevelClasses(IteratorTestCase.class.getPackage().getName());
+    Set<ClassInfo> classes = cp.getTopLevelClasses(IteratorTestCase.class.getPackage().getName());
 
     final List<IteratorTestCase> testCases = new ArrayList<>();
     // final Set<Class<? extends IteratorTestCase>> classes =
@@ -70,8 +69,8 @@
       }
 
       try {
-        testCases.add((IteratorTestCase) clz.newInstance());
-      } catch (IllegalAccessException | InstantiationException e) {
+        testCases.add((IteratorTestCase) clz.getDeclaredConstructor().newInstance());
+      } catch (ReflectiveOperationException e) {
         log.warn("Could not instantiate {}", clz, e);
       }
     }
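
`clz.newInstance()` was deprecated in Java 9 because it rethrows any exception from the constructor unchecked. The replacement shown above, `getDeclaredConstructor().newInstance()`, wraps constructor failures in `InvocationTargetException`, and every reflective failure mode shares the `ReflectiveOperationException` supertype, so one catch clause suffices. A minimal sketch:

```java
public class ReflectiveInstantiationDemo {
  public static class Widget {
    public Widget() {}
  }

  public static void main(String[] args) {
    try {
      // Replacement for the deprecated clz.newInstance(): constructor
      // exceptions are wrapped in InvocationTargetException instead of
      // being rethrown unchecked
      Widget w = Widget.class.getDeclaredConstructor().newInstance();
      System.out.println("instantiated " + w.getClass().getSimpleName());
    } catch (ReflectiveOperationException e) {
      // Single supertype covering NoSuchMethodException, InstantiationException,
      // IllegalAccessException, and InvocationTargetException
      throw new RuntimeException(e);
    }
  }
}
```
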
diff --git a/iterator-test-harness/src/main/java/org/apache/accumulo/iteratortest/IteratorTestUtil.java b/iterator-test-harness/src/main/java/org/apache/accumulo/iteratortest/IteratorTestUtil.java
index 571b676..464bb66 100644
--- a/iterator-test-harness/src/main/java/org/apache/accumulo/iteratortest/IteratorTestUtil.java
+++ b/iterator-test-harness/src/main/java/org/apache/accumulo/iteratortest/IteratorTestUtil.java
@@ -32,8 +32,8 @@
 
   public static SortedKeyValueIterator<Key,Value> instantiateIterator(IteratorTestInput input) {
     try {
-      return requireNonNull(input.getIteratorClass()).newInstance();
-    } catch (InstantiationException | IllegalAccessException e) {
+      return requireNonNull(input.getIteratorClass()).getDeclaredConstructor().newInstance();
+    } catch (ReflectiveOperationException e) {
       throw new RuntimeException(e);
     }
   }
diff --git a/iterator-test-harness/src/main/java/org/apache/accumulo/iteratortest/testcases/InstantiationTestCase.java b/iterator-test-harness/src/main/java/org/apache/accumulo/iteratortest/testcases/InstantiationTestCase.java
index e9341d5..81534a3 100644
--- a/iterator-test-harness/src/main/java/org/apache/accumulo/iteratortest/testcases/InstantiationTestCase.java
+++ b/iterator-test-harness/src/main/java/org/apache/accumulo/iteratortest/testcases/InstantiationTestCase.java
@@ -35,7 +35,7 @@
     try {
       // We should be able to instantiate the Iterator given the Class
       @SuppressWarnings("unused")
-      SortedKeyValueIterator<Key,Value> iter = clz.newInstance();
+      SortedKeyValueIterator<Key,Value> iter = clz.getDeclaredConstructor().newInstance();
     } catch (Exception e) {
       return new IteratorTestOutput(e);
     }
diff --git a/minicluster/pom.xml b/minicluster/pom.xml
index 43b85c4..fae0c6a 100644
--- a/minicluster/pom.xml
+++ b/minicluster/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.accumulo</groupId>
     <artifactId>accumulo-project</artifactId>
-    <version>2.0.1-SNAPSHOT</version>
+    <version>2.1.0-SNAPSHOT</version>
   </parent>
   <artifactId>accumulo-minicluster</artifactId>
   <name>Apache Accumulo MiniCluster</name>
@@ -80,10 +80,6 @@
       <artifactId>commons-configuration2</artifactId>
     </dependency>
     <dependency>
-      <groupId>org.apache.commons</groupId>
-      <artifactId>commons-lang3</artifactId>
-    </dependency>
-    <dependency>
       <groupId>org.apache.hadoop</groupId>
       <artifactId>hadoop-client-api</artifactId>
     </dependency>
diff --git a/minicluster/src/main/java/org/apache/accumulo/cluster/RemoteShell.java b/minicluster/src/main/java/org/apache/accumulo/cluster/RemoteShell.java
index f159fec..2a41a83 100644
--- a/minicluster/src/main/java/org/apache/accumulo/cluster/RemoteShell.java
+++ b/minicluster/src/main/java/org/apache/accumulo/cluster/RemoteShell.java
@@ -22,7 +22,6 @@
 import java.io.IOException;
 import java.util.Map;
 
-import org.apache.commons.lang3.StringUtils;
 import org.apache.hadoop.util.Shell.ShellCommandExecutor;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -79,7 +78,7 @@
       hostWithUser = options.getUserName() + "@" + hostWithUser;
     }
 
-    String remoteCmd = StringUtils.join(super.getExecString(), ' ');
+    String remoteCmd = String.join(" ", super.getExecString());
 
     String cmd = String.format("%1$s %2$s %3$s \"%4$s\"", options.getSshCommand(),
         options.getSshOptions(), hostWithUser, remoteCmd);
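
`String.join` (Java 8) covers what `StringUtils.join` was doing here, which is what lets commons-lang3 come out of the minicluster pom above. One nuance: the delimiter is a `CharSequence`, not a `char`. Example:

```java
import java.util.List;

public class JoinDemo {
  public static void main(String[] args) {
    String[] execString = {"ssh", "-o", "StrictHostKeyChecking=no", "host"};
    // java.lang.String.join accepts a CharSequence delimiter and either
    // varargs elements or any Iterable<? extends CharSequence>
    System.out.println(String.join(" ", execString));
    System.out.println(String.join(" ", List.of("accumulo", "admin", "stopAll")));
  }
}
```
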
diff --git a/minicluster/src/main/java/org/apache/accumulo/cluster/standalone/StandaloneAccumuloCluster.java b/minicluster/src/main/java/org/apache/accumulo/cluster/standalone/StandaloneAccumuloCluster.java
index 90b5853..e4bfca4 100644
--- a/minicluster/src/main/java/org/apache/accumulo/cluster/standalone/StandaloneAccumuloCluster.java
+++ b/minicluster/src/main/java/org/apache/accumulo/cluster/standalone/StandaloneAccumuloCluster.java
@@ -74,7 +74,8 @@
     this.tmp = tmp;
     this.users = users;
     this.serverAccumuloConfDir = serverAccumuloConfDir;
-    siteConfig = new SiteConfiguration(new File(serverAccumuloConfDir, "accumulo.properties"));
+    siteConfig =
+        SiteConfiguration.fromFile(new File(serverAccumuloConfDir, "accumulo.properties")).build();
   }
 
   public String getAccumuloHome() {
diff --git a/minicluster/src/main/java/org/apache/accumulo/cluster/standalone/StandaloneClusterControl.java b/minicluster/src/main/java/org/apache/accumulo/cluster/standalone/StandaloneClusterControl.java
index 7d9a6d7..33b5a4d 100644
--- a/minicluster/src/main/java/org/apache/accumulo/cluster/standalone/StandaloneClusterControl.java
+++ b/minicluster/src/main/java/org/apache/accumulo/cluster/standalone/StandaloneClusterControl.java
@@ -35,7 +35,6 @@
 import org.apache.accumulo.master.state.SetGoalState;
 import org.apache.accumulo.minicluster.ServerType;
 import org.apache.accumulo.server.util.Admin;
-import org.apache.commons.lang3.StringUtils;
 import org.apache.hadoop.util.Shell.ExitCodeException;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -114,7 +113,7 @@
     for (String arg : args) {
       cmd.add("'" + arg + "'");
     }
-    log.info("Running: '{}' on {}", sanitize(StringUtils.join(cmd, " ")), sanitize(master));
+    log.info("Running: '{}' on {}", sanitize(String.join(" ", cmd)), sanitize(master));
     return exec(master, cmd.toArray(new String[cmd.size()]));
   }
 
diff --git a/minicluster/src/main/java/org/apache/accumulo/minicluster/MiniAccumuloInstance.java b/minicluster/src/main/java/org/apache/accumulo/minicluster/MiniAccumuloInstance.java
index eeacdb1..43f5eec 100644
--- a/minicluster/src/main/java/org/apache/accumulo/minicluster/MiniAccumuloInstance.java
+++ b/minicluster/src/main/java/org/apache/accumulo/minicluster/MiniAccumuloInstance.java
@@ -41,10 +41,11 @@
 
   // Keep this private to avoid bringing it into the public API
   private static String getZooKeepersFromDir(File directory) {
-    if (!directory.isDirectory())
+    if (!directory.isDirectory()) {
       throw new IllegalArgumentException("Not a directory " + directory.getPath());
+    }
     File configFile = new File(new File(directory, "conf"), "accumulo.properties");
-    SiteConfiguration conf = new SiteConfiguration(configFile);
+    var conf = SiteConfiguration.fromFile(configFile).build();
     return conf.get(Property.INSTANCE_ZK_HOST);
   }
 }
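
These hunks migrate `SiteConfiguration` construction from a constructor to a builder. A hedged sketch exercising only the two entry points the diff itself shows, `fromFile(...).build()` and `auto()`; the file path here is an assumption:

```java
import java.io.File;

import org.apache.accumulo.core.conf.Property;
import org.apache.accumulo.core.conf.SiteConfiguration;

public class SiteConfigSketch {
  public static void main(String[] args) {
    // Explicit file, as in StandaloneAccumuloCluster and MiniAccumuloInstance;
    // the path below is a placeholder
    var fromFile =
        SiteConfiguration.fromFile(new File("/etc/accumulo/accumulo.properties")).build();
    System.out.println(fromFile.get(Property.INSTANCE_ZK_HOST));

    // auto() resolves accumulo.properties from the environment, as in
    // ServerUtilOpts and ConfigSanityCheck
    var auto = SiteConfiguration.auto();
    System.out.println(auto.get(Property.INSTANCE_ZK_HOST));
  }
}
```
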
diff --git a/minicluster/src/main/java/org/apache/accumulo/miniclusterImpl/MiniAccumuloClusterImpl.java b/minicluster/src/main/java/org/apache/accumulo/miniclusterImpl/MiniAccumuloClusterImpl.java
index e968564..a996588 100644
--- a/minicluster/src/main/java/org/apache/accumulo/miniclusterImpl/MiniAccumuloClusterImpl.java
+++ b/minicluster/src/main/java/org/apache/accumulo/miniclusterImpl/MiniAccumuloClusterImpl.java
@@ -146,8 +146,9 @@
   public ProcessInfo exec(Class<?> clazz, List<String> jvmArgs, String... args) throws IOException {
     ArrayList<String> jvmArgs2 = new ArrayList<>(1 + (jvmArgs == null ? 0 : jvmArgs.size()));
     jvmArgs2.add("-Xmx" + config.getDefaultMemory());
-    if (jvmArgs != null)
+    if (jvmArgs != null) {
       jvmArgs2.addAll(jvmArgs);
+    }
     return _exec(clazz, jvmArgs2, args);
   }
 
@@ -155,9 +156,10 @@
     StringBuilder classpathBuilder = new StringBuilder();
     classpathBuilder.append(config.getConfDir().getAbsolutePath());
 
-    if (config.getHadoopConfDir() != null)
+    if (config.getHadoopConfDir() != null) {
       classpathBuilder.append(File.pathSeparator)
           .append(config.getHadoopConfDir().getAbsolutePath());
+    }
 
     if (config.getClasspathItems() == null) {
       String javaClassPath = System.getProperty("java.class.path");
@@ -166,8 +168,9 @@
       }
       classpathBuilder.append(File.pathSeparator).append(javaClassPath);
     } else {
-      for (String s : config.getClasspathItems())
+      for (String s : config.getClasspathItems()) {
         classpathBuilder.append(File.pathSeparator).append(s);
+      }
     }
 
     return classpathBuilder.toString();
@@ -236,14 +239,17 @@
 
     // if we're running under accumulo.start, we forward these env vars
     String env = System.getenv("HADOOP_HOME");
-    if (env != null)
+    if (env != null) {
       builder.environment().put("HADOOP_HOME", env);
+    }
     env = System.getenv("ZOOKEEPER_HOME");
-    if (env != null)
+    if (env != null) {
       builder.environment().put("ZOOKEEPER_HOME", env);
+    }
     builder.environment().put("ACCUMULO_CONF_DIR", config.getConfDir().getAbsolutePath());
-    if (config.getHadoopConfDir() != null)
+    if (config.getHadoopConfDir() != null) {
       builder.environment().put("HADOOP_CONF_DIR", config.getHadoopConfDir().getAbsolutePath());
+    }
 
     log.debug("Starting MiniAccumuloCluster process with class: " + clazz.getSimpleName()
         + "\n, jvmOpts: " + extraJvmOpts + "\n, classpath: " + classpath + "\n, args: " + argList
@@ -311,8 +317,9 @@
     mkdirs(config.getLibExtDir());
 
     if (!config.useExistingInstance()) {
-      if (!config.useExistingZooKeepers())
+      if (!config.useExistingZooKeepers()) {
         mkdirs(config.getZooKeeperDir());
+      }
       mkdirs(config.getAccumuloDir());
     }
 
@@ -333,10 +340,11 @@
       conf.set("dfs.datanode.data.dir.perm", MiniDFSUtil.computeDatanodeDirectoryPermission());
       String oldTestBuildData = System.setProperty("test.build.data", dfs.getAbsolutePath());
       miniDFS = new MiniDFSCluster.Builder(conf).build();
-      if (oldTestBuildData == null)
+      if (oldTestBuildData == null) {
         System.clearProperty("test.build.data");
-      else
+      } else {
         System.setProperty("test.build.data", oldTestBuildData);
+      }
       miniDFS.waitClusterUp();
       InetSocketAddress dfsAddress = miniDFS.getNameNode().getNameNodeAddress();
       dfsUri = "hdfs://" + dfsAddress.getHostName() + ":" + dfsAddress.getPort();
@@ -376,7 +384,7 @@
 
     File siteFile = new File(config.getConfDir(), "accumulo.properties");
     writeConfigProperties(siteFile, config.getSiteConfig());
-    siteConfig = new SiteConfiguration(siteFile);
+    siteConfig = SiteConfiguration.fromFile(siteFile).build();
 
     if (!config.useExistingInstance() && !config.useExistingZooKeepers()) {
       zooCfgFile = new File(config.getConfDir(), "zoo.cfg");
@@ -423,8 +431,9 @@
   private void writeConfigProperties(File file, Map<String,String> settings) throws IOException {
     FileWriter fileWriter = new FileWriter(file);
 
-    for (Entry<String,String> entry : settings.entrySet())
+    for (Entry<String,String> entry : settings.entrySet()) {
       fileWriter.append(entry.getKey() + "=" + entry.getValue() + "\n");
+    }
     fileWriter.close();
   }
 
@@ -474,13 +483,15 @@
       } catch (KeeperException e) {
         throw new RuntimeException("Unable to read instance name from zookeeper.", e);
       }
-      if (instanceName == null)
+      if (instanceName == null) {
         throw new RuntimeException("Unable to read instance name from zookeeper.");
+      }
 
       config.setInstanceName(instanceName);
-      if (!AccumuloStatus.isAccumuloOffline(zrw, rootPath))
+      if (!AccumuloStatus.isAccumuloOffline(zrw, rootPath)) {
         throw new RuntimeException(
             "The Accumulo instance being used is already running. Aborting.");
+      }
     } else {
       if (!initialized) {
         Runtime.getRuntime().addShutdownHook(new Thread(() -> {
@@ -494,8 +505,9 @@
         }));
       }
 
-      if (!config.useExistingZooKeepers())
+      if (!config.useExistingZooKeepers()) {
         control.start(ServerType.ZOOKEEPER);
+      }
 
       if (!initialized) {
         if (!config.useExistingZooKeepers()) {
@@ -510,8 +522,9 @@
               s.getOutputStream().flush();
               byte[] buffer = new byte[100];
               int n = s.getInputStream().read(buffer);
-              if (n >= 4 && new String(buffer, 0, 4).equals("imok"))
+              if (n >= 4 && new String(buffer, 0, 4).equals("imok")) {
                 break;
+              }
             } catch (Exception e) {
               if (System.currentTimeMillis() - startTime >= config.getZooKeeperStartupTime()) {
                 throw new ZooKeeperBindException("Zookeeper did not start within "
@@ -521,8 +534,9 @@
               // Don't spin absurdly fast
               sleepUninterruptibly(250, TimeUnit.MILLISECONDS);
             } finally {
-              if (s != null)
+              if (s != null) {
                 s.close();
+              }
             }
           }
         }
@@ -561,8 +575,9 @@
     for (int i = 0; i < 5; i++) {
       ret = exec(Main.class, SetGoalState.class.getName(), MasterGoalState.NORMAL.toString())
           .getProcess().waitFor();
-      if (ret == 0)
+      if (ret == 0) {
         break;
+      }
       sleepUninterruptibly(1, TimeUnit.SECONDS);
     }
     if (ret != 0) {
@@ -670,8 +685,9 @@
       executor = null;
     }
 
-    if (config.useMiniDFS() && miniDFS != null)
+    if (config.useMiniDFS() && miniDFS != null) {
       miniDFS.shutdown();
+    }
     for (Process p : cleanup) {
       p.destroy();
       p.waitFor();
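
The startup loop above blocks until ZooKeeper answers the `ruok` four-letter-word probe with `imok`. A standalone version of that probe, assuming the target server has `ruok` enabled (newer ZooKeeper releases gate four-letter words behind `4lw.commands.whitelist`); host and port are placeholders:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class ZooKeeperRuokProbe {
  public static boolean isZooKeeperOk(String host, int port) {
    try (Socket s = new Socket(host, port)) {
      OutputStream out = s.getOutputStream();
      out.write("ruok".getBytes(StandardCharsets.UTF_8));
      out.flush();
      InputStream in = s.getInputStream();
      byte[] buffer = new byte[100];
      int n = in.read(buffer);
      // A healthy server answers the four-letter word "ruok" with "imok"
      return n >= 4 && "imok".equals(new String(buffer, 0, 4, StandardCharsets.UTF_8));
    } catch (IOException e) {
      return false;
    }
  }

  public static void main(String[] args) {
    System.out.println(isZooKeeperOk("localhost", 2181));
  }
}
```
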
diff --git a/minicluster/src/main/java/org/apache/accumulo/miniclusterImpl/MiniAccumuloConfigImpl.java b/minicluster/src/main/java/org/apache/accumulo/miniclusterImpl/MiniAccumuloConfigImpl.java
index aa2872e..db9a203 100644
--- a/minicluster/src/main/java/org/apache/accumulo/miniclusterImpl/MiniAccumuloConfigImpl.java
+++ b/minicluster/src/main/java/org/apache/accumulo/miniclusterImpl/MiniAccumuloConfigImpl.java
@@ -26,7 +26,7 @@
 
 import org.apache.accumulo.core.conf.AccumuloConfiguration;
 import org.apache.accumulo.core.conf.ClientProperty;
-import org.apache.accumulo.core.conf.CredentialProviderFactoryShim;
+import org.apache.accumulo.core.conf.HadoopCredentialProvider;
 import org.apache.accumulo.core.conf.Property;
 import org.apache.accumulo.core.conf.SiteConfiguration;
 import org.apache.accumulo.minicluster.MemoryUnit;
@@ -199,15 +199,10 @@
       return;
     }
 
-    if (!CredentialProviderFactoryShim.isHadoopCredentialProviderAvailable()) {
-      throw new RuntimeException("Cannot use CredentialProvider when"
-          + " implementation is not available. Be sure to use >=Hadoop-2.6.0");
-    }
-
     File keystoreFile = new File(getConfDir(), "credential-provider.jks");
     String keystoreUri = "jceks://file" + keystoreFile.getAbsolutePath();
-    Configuration conf =
-        CredentialProviderFactoryShim.getConfiguration(getHadoopConfiguration(), keystoreUri);
+    Configuration conf = getHadoopConfiguration();
+    HadoopCredentialProvider.setPath(conf, keystoreUri);
 
     // Set the URI on the siteCfg
     siteConfig.put(Property.GENERAL_SECURITY_CREDENTIAL_PROVIDER_PATHS.getKey(), keystoreUri);
@@ -223,8 +218,7 @@
 
       // Add the @Sensitive Property to the CredentialProvider
       try {
-        CredentialProviderFactoryShim.createEntry(conf, entry.getKey(),
-            entry.getValue().toCharArray());
+        HadoopCredentialProvider.createEntry(conf, entry.getKey(), entry.getValue().toCharArray());
       } catch (IOException e) {
         log.warn("Attempted to add " + entry.getKey() + " to CredentialProvider but failed", e);
         continue;
@@ -725,7 +719,7 @@
     System.setProperty("accumulo.properties", "accumulo.properties");
     this.hadoopConfDir = hadoopConfDir;
     hadoopConf = new Configuration(false);
-    accumuloConf = new SiteConfiguration(accumuloProps);
+    accumuloConf = SiteConfiguration.fromFile(accumuloProps).build();
     File coreSite = new File(hadoopConfDir, "core-site.xml");
     File hdfsSite = new File(hadoopConfDir, "hdfs-site.xml");
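
The credential-provider hunk above drops the availability shim (Hadoop ≥ 2.6 is now a given) and goes through `HadoopCredentialProvider` directly. A sketch using only the two calls the diff exercises, `setPath` and `createEntry`; the keystore path and alias below are illustrative:

```java
import org.apache.accumulo.core.conf.HadoopCredentialProvider;
import org.apache.hadoop.conf.Configuration;

public class CredentialProviderSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration(false);
    // Point Hadoop's credential machinery at a local JCEKS keystore
    String keystoreUri = "jceks://file/tmp/credential-provider.jks";
    HadoopCredentialProvider.setPath(conf, keystoreUri);
    // Store a sensitive property value in the keystore rather than in
    // accumulo.properties
    HadoopCredentialProvider.createEntry(conf, "instance.secret", "mysecret".toCharArray());
  }
}
```
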
 
diff --git a/pom.xml b/pom.xml
index d0ea431..b65ff1b 100644
--- a/pom.xml
+++ b/pom.xml
@@ -24,7 +24,7 @@
   </parent>
   <groupId>org.apache.accumulo</groupId>
   <artifactId>accumulo-project</artifactId>
-  <version>2.0.1-SNAPSHOT</version>
+  <version>2.1.0-SNAPSHOT</version>
   <packaging>pom</packaging>
   <name>Apache Accumulo Project</name>
   <description>Apache Accumulo is a sorted, distributed key/value store based
@@ -132,9 +132,9 @@
     <jaxb.version>2.3.0.1</jaxb.version>
     <jersey.version>2.28</jersey.version>
     <jetty.version>9.4.19.v20190610</jetty.version>
-    <maven.compiler.release>8</maven.compiler.release>
-    <maven.compiler.source>1.8</maven.compiler.source>
-    <maven.compiler.target>1.8</maven.compiler.target>
+    <maven.compiler.release>11</maven.compiler.release>
+    <maven.compiler.source>11</maven.compiler.source>
+    <maven.compiler.target>11</maven.compiler.target>
     <!-- surefire/failsafe plugin option -->
     <maven.test.redirectTestOutputToFile>true</maven.test.redirectTestOutputToFile>
     <powermock.version>2.0.2</powermock.version>
@@ -1013,6 +1013,14 @@
       <plugin>
         <groupId>org.apache.maven.plugins</groupId>
         <artifactId>maven-dependency-plugin</artifactId>
+        <dependencies>
+          <dependency>
+            <!-- needed for Java 11 until maven-dependency-plugin 3.1.2 is released -->
+            <groupId>org.apache.maven.shared</groupId>
+            <artifactId>maven-dependency-analyzer</artifactId>
+            <version>1.11.1</version>
+          </dependency>
+        </dependencies>
         <executions>
           <execution>
             <id>analyze</id>
diff --git a/server/base/pom.xml b/server/base/pom.xml
index ded935d..c7b544d 100644
--- a/server/base/pom.xml
+++ b/server/base/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.accumulo</groupId>
     <artifactId>accumulo-project</artifactId>
-    <version>2.0.1-SNAPSHOT</version>
+    <version>2.1.0-SNAPSHOT</version>
     <relativePath>../../pom.xml</relativePath>
   </parent>
   <artifactId>accumulo-server-base</artifactId>
@@ -37,6 +37,10 @@
       <optional>true</optional>
     </dependency>
     <dependency>
+      <groupId>com.google.code.gson</groupId>
+      <artifactId>gson</artifactId>
+    </dependency>
+    <dependency>
       <groupId>com.google.guava</groupId>
       <artifactId>guava</artifactId>
     </dependency>
diff --git a/server/base/src/main/java/org/apache/accumulo/server/AbstractServer.java b/server/base/src/main/java/org/apache/accumulo/server/AbstractServer.java
index 0eb7a61..9029459 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/AbstractServer.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/AbstractServer.java
@@ -21,7 +21,6 @@
 
 import org.apache.accumulo.core.Constants;
 import org.apache.accumulo.core.conf.AccumuloConfiguration;
-import org.apache.accumulo.core.conf.SiteConfiguration;
 import org.apache.accumulo.core.trace.TraceUtil;
 import org.apache.accumulo.server.metrics.Metrics;
 import org.apache.accumulo.server.security.SecurityUtil;
@@ -42,7 +41,7 @@
     this.applicationName = appName;
     this.hostname = Objects.requireNonNull(opts.getAddress());
     opts.parseArgs(appName, args);
-    SiteConfiguration siteConfig = opts.getSiteConfiguration();
+    var siteConfig = opts.getSiteConfiguration();
     context = new ServerContext(siteConfig);
     SecurityUtil.serverLogin(siteConfig);
     log.info("Version " + Constants.VERSION);
diff --git a/server/base/src/main/java/org/apache/accumulo/server/ServerConstants.java b/server/base/src/main/java/org/apache/accumulo/server/ServerConstants.java
index 62a4821..ed4dacc 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/ServerConstants.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/ServerConstants.java
@@ -34,7 +34,6 @@
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.Path;
 
-import com.google.common.collect.ImmutableSet;
 import com.google.common.collect.Sets;
 
 public class ServerConstants {
@@ -44,10 +43,17 @@
   public static final String INSTANCE_ID_DIR = "instance_id";
 
   /**
+   * version (10) reflects changes to how root tablet metadata is serialized in zookeeper starting
+   * with 2.1
+   */
+  public static final int ROOT_TABLET_META_CHANGES = 10;
+
+  /**
    * version (9) reflects changes to crypto that resulted in RFiles and WALs being serialized
    * differently in version 2.0.0. Also RFiles in 2.0.0 may have summary data.
    */
   public static final int CRYPTO_CHANGES = 9;
+
   /**
    * version (8) reflects changes to RFile index (ACCUMULO-1124) AND the change to WAL tracking in
    * ZK in version 1.8.0
@@ -66,11 +72,11 @@
    *
    *
    */
-  public static final int DATA_VERSION = CRYPTO_CHANGES;
+  public static final int DATA_VERSION = ROOT_TABLET_META_CHANGES;
 
-  public static final Set<Integer> CAN_RUN = ImmutableSet.of(SHORTEN_RFILE_KEYS, DATA_VERSION);
-  public static final Set<Integer> NEEDS_UPGRADE =
-      Sets.difference(CAN_RUN, ImmutableSet.of(DATA_VERSION));
+  public static final Set<Integer> CAN_RUN =
+      Set.of(SHORTEN_RFILE_KEYS, CRYPTO_CHANGES, DATA_VERSION);
+  public static final Set<Integer> NEEDS_UPGRADE = Sets.difference(CAN_RUN, Set.of(DATA_VERSION));
 
   private static String[] baseUris = null;
 
@@ -109,10 +115,11 @@
         currentVersion =
             ServerUtil.getAccumuloPersistentVersion(vpath.getFileSystem(hadoopConf), vpath);
       } catch (Exception e) {
-        if (ignore)
+        if (ignore) {
           continue;
-        else
+        } else {
           throw new IllegalArgumentException("Accumulo volume " + path + " not initialized", e);
+        }
       }
 
       if (firstIid == null) {
@@ -169,8 +176,9 @@
 
       replacements = replacements.trim();
 
-      if (replacements.isEmpty())
+      if (replacements.isEmpty()) {
         return Collections.emptyList();
+      }
 
       String[] pairs = replacements.split(",");
       List<Pair<Path,Path>> ret = new ArrayList<>();
@@ -178,17 +186,19 @@
       for (String pair : pairs) {
 
         String[] uris = pair.split("\\s+");
-        if (uris.length != 2)
+        if (uris.length != 2) {
           throw new IllegalArgumentException(
               Property.INSTANCE_VOLUMES_REPLACEMENTS.getKey() + " contains malformed pair " + pair);
+        }
 
         Path p1, p2;
         try {
           // URI constructor handles hex escaping
           p1 = new Path(new URI(VolumeUtil.removeTrailingSlash(uris[0].trim())));
-          if (p1.toUri().getScheme() == null)
+          if (p1.toUri().getScheme() == null) {
             throw new IllegalArgumentException(Property.INSTANCE_VOLUMES_REPLACEMENTS.getKey()
                 + " contains " + uris[0] + " which is not fully qualified");
+          }
         } catch (URISyntaxException e) {
           throw new IllegalArgumentException(Property.INSTANCE_VOLUMES_REPLACEMENTS.getKey()
               + " contains " + uris[0] + " which has a syntax error", e);
@@ -196,9 +206,10 @@
 
         try {
           p2 = new Path(new URI(VolumeUtil.removeTrailingSlash(uris[1].trim())));
-          if (p2.toUri().getScheme() == null)
+          if (p2.toUri().getScheme() == null) {
             throw new IllegalArgumentException(Property.INSTANCE_VOLUMES_REPLACEMENTS.getKey()
                 + " contains " + uris[1] + " which is not fully qualified");
+          }
         } catch (URISyntaxException e) {
           throw new IllegalArgumentException(Property.INSTANCE_VOLUMES_REPLACEMENTS.getKey()
               + " contains " + uris[1] + " which has a syntax error", e);
@@ -213,10 +224,12 @@
         baseDirs.add(new Path(baseDir));
       }
 
-      for (Pair<Path,Path> pair : ret)
-        if (!baseDirs.contains(pair.getSecond()))
+      for (Pair<Path,Path> pair : ret) {
+        if (!baseDirs.contains(pair.getSecond())) {
           throw new IllegalArgumentException(Property.INSTANCE_VOLUMES_REPLACEMENTS.getKey()
               + " contains " + pair.getSecond() + " which is not a configured volume");
+        }
+      }
 
       // only set if get here w/o exception
       replacementsList = ret;
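
The version bump above keeps `CRYPTO_CHANGES` (9) in `CAN_RUN` so a 2.1 server can still start against 2.0 data and upgrade it, while `NEEDS_UPGRADE` falls out as a set difference. A reduced sketch; note that Guava's `Sets.difference` returns an unmodifiable view, which is safe here because both inputs are immutable:

```java
import java.util.Set;

import com.google.common.collect.Sets;

public class DataVersionDemo {
  static final int SHORTEN_RFILE_KEYS = 8;
  static final int CRYPTO_CHANGES = 9;
  static final int ROOT_TABLET_META_CHANGES = 10;
  static final int DATA_VERSION = ROOT_TABLET_META_CHANGES;

  public static void main(String[] args) {
    Set<Integer> canRun = Set.of(SHORTEN_RFILE_KEYS, CRYPTO_CHANGES, DATA_VERSION);
    // Live view over the two immutable sets; contains 8 and 9 but not 10
    Set<Integer> needsUpgrade = Sets.difference(canRun, Set.of(DATA_VERSION));
    System.out.println(needsUpgrade);
  }
}
```
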
diff --git a/server/base/src/main/java/org/apache/accumulo/server/ServerContext.java b/server/base/src/main/java/org/apache/accumulo/server/ServerContext.java
index ac9415a..d2345fe 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/ServerContext.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/ServerContext.java
@@ -28,11 +28,13 @@
 import org.apache.accumulo.core.conf.SiteConfiguration;
 import org.apache.accumulo.core.crypto.CryptoServiceFactory;
 import org.apache.accumulo.core.crypto.CryptoServiceFactory.ClassloaderType;
+import org.apache.accumulo.core.metadata.schema.Ample;
 import org.apache.accumulo.core.rpc.SslConnectionParams;
 import org.apache.accumulo.core.spi.crypto.CryptoService;
 import org.apache.accumulo.fate.zookeeper.ZooReaderWriter;
 import org.apache.accumulo.server.conf.ServerConfigurationFactory;
 import org.apache.accumulo.server.fs.VolumeManager;
+import org.apache.accumulo.server.metadata.ServerAmpleImpl;
 import org.apache.accumulo.server.rpc.SaslServerConnectionParams;
 import org.apache.accumulo.server.rpc.ThriftServerType;
 import org.apache.accumulo.server.security.SecurityUtil;
@@ -209,4 +211,8 @@
     return cryptoService;
   }
 
+  @Override
+  public Ample getAmple() {
+    return new ServerAmpleImpl(this);
+  }
 }
diff --git a/server/base/src/main/java/org/apache/accumulo/server/ServiceEnvironmentImpl.java b/server/base/src/main/java/org/apache/accumulo/server/ServiceEnvironmentImpl.java
index 3f6465f..293cbbe 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/ServiceEnvironmentImpl.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/ServiceEnvironmentImpl.java
@@ -31,7 +31,6 @@
 import org.apache.accumulo.core.spi.common.ServiceEnvironment;
 
 import com.google.common.collect.ImmutableMap;
-import com.google.common.collect.ImmutableMap.Builder;
 
 public class ServiceEnvironmentImpl implements ServiceEnvironment {
 
@@ -58,8 +57,9 @@
       // Get prop to check if sensitive, also looking up by prop may be more efficient.
       Property prop = Property.getPropertyByKey(key);
       if (prop != null) {
-        if (prop.isSensitive())
+        if (prop.isSensitive()) {
           return null;
+        }
         return acfg.get(prop);
       } else {
         return acfg.get(key);
@@ -68,8 +68,9 @@
 
     @Override
     public Map<String,String> getCustom() {
-      if (customProps == null)
+      if (customProps == null) {
         customProps = buildCustom(Property.GENERAL_ARBITRARY_PROP_PREFIX);
+      }
 
       return customProps;
     }
@@ -81,8 +82,9 @@
 
     @Override
     public Map<String,String> getTableCustom() {
-      if (tableCustomProps == null)
+      if (tableCustomProps == null) {
         tableCustomProps = buildCustom(Property.TABLE_ARBITRARY_PROP_PREFIX);
+      }
 
       return tableCustomProps;
     }
@@ -95,7 +97,7 @@
     private Map<String,String> buildCustom(Property customPrefix) {
       // This could be optimized as described in #947
       Map<String,String> props = acfg.getAllPropertiesWithPrefix(customPrefix);
-      Builder<String,String> builder = ImmutableMap.builder();
+      var builder = ImmutableMap.<String,String>builder();
       props.forEach((k, v) -> {
         builder.put(k.substring(customPrefix.getKey().length()), v);
       });
@@ -127,13 +129,13 @@
 
   @Override
   public <T> T instantiate(String className, Class<T> base)
-      throws ClassNotFoundException, InstantiationException, IllegalAccessException, IOException {
+      throws ReflectiveOperationException, IOException {
     return ConfigurationTypeHelper.getClassInstance(null, className, base);
   }
 
   @Override
   public <T> T instantiate(TableId tableId, String className, Class<T> base)
-      throws ClassNotFoundException, InstantiationException, IllegalAccessException, IOException {
+      throws ReflectiveOperationException, IOException {
     String ctx =
         srvCtx.getServerConfFactory().getTableConfiguration(tableId).get(Property.TABLE_CLASSPATH);
     return ConfigurationTypeHelper.getClassInstance(ctx, className, base);
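
The `var builder = ImmutableMap.<String,String>builder()` form above relies on the Java 11 toolchain bump in the pom.xml hunk earlier. The explicit type witness matters: without it, `var` infers `ImmutableMap.Builder<Object,Object>` from the raw-ish generic method return. A two-line demonstration:

```java
import com.google.common.collect.ImmutableMap;

public class VarInferenceDemo {
  public static void main(String[] args) {
    // Without the <String,String> witness, var infers Builder<Object,Object>
    var builder = ImmutableMap.<String,String>builder();
    builder.put("k", "v");
    System.out.println(builder.build()); // {k=v}
  }
}
```
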
diff --git a/server/base/src/main/java/org/apache/accumulo/server/cli/ServerUtilOpts.java b/server/base/src/main/java/org/apache/accumulo/server/cli/ServerUtilOpts.java
index 1578b9a..4635239 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/cli/ServerUtilOpts.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/cli/ServerUtilOpts.java
@@ -27,9 +27,9 @@
   public synchronized ServerContext getServerContext() {
     if (context == null) {
       if (getClientConfigFile() == null) {
-        context = new ServerContext(new SiteConfiguration());
+        context = new ServerContext(SiteConfiguration.auto());
       } else {
-        context = new ServerContext(new SiteConfiguration(), getClientProps());
+        context = new ServerContext(SiteConfiguration.auto(), getClientProps());
       }
     }
     return context;
diff --git a/server/base/src/main/java/org/apache/accumulo/server/client/ClientServiceHandler.java b/server/base/src/main/java/org/apache/accumulo/server/client/ClientServiceHandler.java
index 76e190e..d9638a8 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/client/ClientServiceHandler.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/client/ClientServiceHandler.java
@@ -368,10 +368,9 @@
     try {
       shouldMatch = loader.loadClass(interfaceMatch);
       Class test = AccumuloVFSClassLoader.loadClass(className, shouldMatch);
-      test.newInstance();
+      test.getDeclaredConstructor().newInstance();
       return true;
-    } catch (ClassCastException | IllegalAccessException | InstantiationException
-        | ClassNotFoundException e) {
+    } catch (ClassCastException | ReflectiveOperationException e) {
       log.warn("Error checking object types", e);
       return false;
     }
@@ -404,7 +403,7 @@
       }
 
       Class<?> test = currentLoader.loadClass(className).asSubclass(shouldMatch);
-      test.newInstance();
+      test.getDeclaredConstructor().newInstance();
       return true;
     } catch (Exception e) {
       log.warn("Error checking object types", e);
@@ -440,7 +439,7 @@
       }
 
       Class<?> test = currentLoader.loadClass(className).asSubclass(shouldMatch);
-      test.newInstance();
+      test.getDeclaredConstructor().newInstance();
       return true;
     } catch (Exception e) {
       log.warn("Error checking object types", e);
diff --git a/server/base/src/main/java/org/apache/accumulo/server/conf/ConfigSanityCheck.java b/server/base/src/main/java/org/apache/accumulo/server/conf/ConfigSanityCheck.java
index fcffcfb..f7d89b7 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/conf/ConfigSanityCheck.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/conf/ConfigSanityCheck.java
@@ -26,7 +26,7 @@
 public class ConfigSanityCheck implements KeywordExecutable {
 
   public static void main(String[] args) {
-    try (ServerContext context = new ServerContext(new SiteConfiguration())) {
+    try (var context = new ServerContext(SiteConfiguration.auto())) {
       context.getServerConfFactory().getSystemConfiguration();
     }
   }
diff --git a/server/base/src/main/java/org/apache/accumulo/server/conf/NamespaceConfWatcher.java b/server/base/src/main/java/org/apache/accumulo/server/conf/NamespaceConfWatcher.java
deleted file mode 100644
index baaee2a..0000000
--- a/server/base/src/main/java/org/apache/accumulo/server/conf/NamespaceConfWatcher.java
+++ /dev/null
@@ -1,121 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.server.conf;
-
-import org.apache.accumulo.core.Constants;
-import org.apache.accumulo.core.data.NamespaceId;
-import org.apache.accumulo.server.ServerContext;
-import org.apache.zookeeper.WatchedEvent;
-import org.apache.zookeeper.Watcher;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-class NamespaceConfWatcher implements Watcher {
-
-  private static final Logger log = LoggerFactory.getLogger(NamespaceConfWatcher.class);
-  private final ServerContext context;
-  private final String namespacesPrefix;
-  private final int namespacesPrefixLength;
-
-  NamespaceConfWatcher(ServerContext context) {
-    this.context = context;
-    namespacesPrefix = context.getZooKeeperRoot() + Constants.ZNAMESPACES + "/";
-    namespacesPrefixLength = namespacesPrefix.length();
-  }
-
-  static String toString(WatchedEvent event) {
-    return new StringBuilder("{path=").append(event.getPath()).append(",state=")
-        .append(event.getState()).append(",type=").append(event.getType()).append("}").toString();
-  }
-
-  @Override
-  public void process(WatchedEvent event) {
-    String path = event.getPath();
-    if (log.isTraceEnabled()) {
-      log.trace("WatchedEvent : {}", toString(event));
-    }
-
-    String namespaceIdStr = null;
-    String key = null;
-
-    if (path != null) {
-      if (path.startsWith(namespacesPrefix)) {
-        namespaceIdStr = path.substring(namespacesPrefixLength);
-        if (namespaceIdStr.contains("/")) {
-          namespaceIdStr = namespaceIdStr.substring(0, namespaceIdStr.indexOf('/'));
-          if (path
-              .startsWith(namespacesPrefix + namespaceIdStr + Constants.ZNAMESPACE_CONF + "/")) {
-            key = path.substring(
-                (namespacesPrefix + namespaceIdStr + Constants.ZNAMESPACE_CONF + "/").length());
-          }
-        }
-      }
-
-      if (namespaceIdStr == null) {
-        log.warn("Zookeeper told me about a path I was not watching: {}, event {}", path,
-            toString(event));
-        return;
-      }
-    }
-    NamespaceId namespaceId = NamespaceId.of(namespaceIdStr);
-
-    switch (event.getType()) {
-      case NodeDataChanged:
-        if (log.isTraceEnabled()) {
-          log.trace("EventNodeDataChanged {}", event.getPath());
-        }
-        if (key != null) {
-          context.getServerConfFactory().getNamespaceConfiguration(namespaceId)
-              .propertyChanged(key);
-        }
-        break;
-      case NodeChildrenChanged:
-        context.getServerConfFactory().getNamespaceConfiguration(namespaceId).propertiesChanged();
-        break;
-      case NodeDeleted:
-        if (key == null) {
-          ServerConfigurationFactory.removeCachedNamespaceConfiguration(context.getInstanceID(),
-              namespaceId);
-        }
-        break;
-      case None:
-        switch (event.getState()) {
-          case Expired:
-            log.info("Zookeeper node event type None, state=expired. Expire all table observers");
-            ServerConfigurationFactory.expireAllTableObservers();
-            break;
-          case SyncConnected:
-            break;
-          case Disconnected:
-            break;
-          default:
-            log.warn("EventNone event not handled {}", toString(event));
-        }
-        break;
-      case NodeCreated:
-        switch (event.getState()) {
-          case SyncConnected:
-            break;
-          default:
-            log.warn("Event NodeCreated event not handled {}", toString(event));
-        }
-        break;
-      default:
-        log.warn("Event not handled {}", toString(event));
-    }
-  }
-}
diff --git a/server/base/src/main/java/org/apache/accumulo/server/conf/NamespaceConfiguration.java b/server/base/src/main/java/org/apache/accumulo/server/conf/NamespaceConfiguration.java
index eef0248..025a8d8 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/conf/NamespaceConfiguration.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/conf/NamespaceConfiguration.java
@@ -23,19 +23,14 @@
 import org.apache.accumulo.core.Constants;
 import org.apache.accumulo.core.clientImpl.Namespace;
 import org.apache.accumulo.core.conf.AccumuloConfiguration;
-import org.apache.accumulo.core.conf.ConfigurationObserver;
-import org.apache.accumulo.core.conf.ObservableConfiguration;
 import org.apache.accumulo.core.conf.Property;
 import org.apache.accumulo.core.data.NamespaceId;
 import org.apache.accumulo.fate.zookeeper.ZooCache;
 import org.apache.accumulo.fate.zookeeper.ZooCacheFactory;
 import org.apache.accumulo.server.ServerContext;
 import org.apache.accumulo.server.conf.ZooCachePropertyAccessor.PropCacheKey;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
 
-public class NamespaceConfiguration extends ObservableConfiguration {
-  private static final Logger log = LoggerFactory.getLogger(NamespaceConfiguration.class);
+public class NamespaceConfiguration extends AccumuloConfiguration {
 
   private static final Map<PropCacheKey,ZooCache> propCaches = new java.util.HashMap<>();
 
@@ -74,8 +69,7 @@
       PropCacheKey key = new PropCacheKey(context.getInstanceID(), namespaceId.canonical());
       ZooCache propCache = propCaches.get(key);
       if (propCache == null) {
-        propCache = zcf.getZooCache(context.getZooKeepers(), context.getZooKeepersSessionTimeOut(),
-            new NamespaceConfWatcher(context));
+        propCache = zcf.getZooCache(context.getZooKeepers(), context.getZooKeepersSessionTimeOut());
         propCaches.put(key, propCache);
       }
       return propCache;
@@ -136,27 +130,6 @@
     return namespaceId;
   }
 
-  @Override
-  public void addObserver(ConfigurationObserver co) {
-    if (namespaceId == null) {
-      String err = "Attempt to add observer for non-namespace configuration";
-      log.error(err);
-      throw new RuntimeException(err);
-    }
-    iterator();
-    super.addObserver(co);
-  }
-
-  @Override
-  public void removeObserver(ConfigurationObserver co) {
-    if (namespaceId == null) {
-      String err = "Attempt to remove observer for non-namespace configuration";
-      log.error(err);
-      throw new RuntimeException(err);
-    }
-    super.removeObserver(co);
-  }
-
   static boolean isIteratorOrConstraint(String key) {
     return key.startsWith(Property.TABLE_ITERATOR_PREFIX.getKey())
         || key.startsWith(Property.TABLE_CONSTRAINT_PREFIX.getKey());
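
With `NamespaceConfWatcher` deleted, the property cache is now a plain `ZooCache` obtained without a session watcher, memoized under a per-instance, per-namespace key. A hypothetical reduction of that lookup; `ZooCache` here is a stand-in for the Accumulo type, and `computeIfAbsent` expresses the get-then-put pattern shown above in one call:

```java
import java.util.HashMap;
import java.util.Map;

public class PropCacheDemo {
  static class ZooCache {}

  private static final Map<String,ZooCache> propCaches = new HashMap<>();

  static synchronized ZooCache getZooCache(String instanceId, String namespaceId) {
    // No watcher argument is needed now that the ConfWatcher classes are gone
    return propCaches.computeIfAbsent(instanceId + "/" + namespaceId, k -> new ZooCache());
  }

  public static void main(String[] args) {
    ZooCache a = getZooCache("inst", "ns1");
    ZooCache b = getZooCache("inst", "ns1");
    System.out.println(a == b); // true: one cached connection per namespace
  }
}
```
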
diff --git a/server/base/src/main/java/org/apache/accumulo/server/conf/ServerConfigurationFactory.java b/server/base/src/main/java/org/apache/accumulo/server/conf/ServerConfigurationFactory.java
index 39ae6ec..f2c82f7 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/conf/ServerConfigurationFactory.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/conf/ServerConfigurationFactory.java
@@ -77,16 +77,6 @@
     }
   }
 
-  static void expireAllTableObservers() {
-    synchronized (tableConfigs) {
-      for (Map<TableId,TableConfiguration> instanceMap : tableConfigs.values()) {
-        for (TableConfiguration c : instanceMap.values()) {
-          c.expireAllObservers();
-        }
-      }
-    }
-  }
-
   private final ServerContext context;
   private final SiteConfiguration siteConfig;
   private final String instanceID;
diff --git a/server/base/src/main/java/org/apache/accumulo/server/conf/TableConfWatcher.java b/server/base/src/main/java/org/apache/accumulo/server/conf/TableConfWatcher.java
deleted file mode 100644
index ffcb7eb..0000000
--- a/server/base/src/main/java/org/apache/accumulo/server/conf/TableConfWatcher.java
+++ /dev/null
@@ -1,122 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.server.conf;
-
-import org.apache.accumulo.core.Constants;
-import org.apache.accumulo.core.data.TableId;
-import org.apache.accumulo.server.ServerContext;
-import org.apache.zookeeper.WatchedEvent;
-import org.apache.zookeeper.Watcher;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-class TableConfWatcher implements Watcher {
-
-  private static final Logger log = LoggerFactory.getLogger(TableConfWatcher.class);
-  private final ServerContext context;
-  private final String tablesPrefix;
-  private ServerConfigurationFactory scf;
-
-  TableConfWatcher(ServerContext context) {
-    this.context = context;
-    tablesPrefix = context.getZooKeeperRoot() + Constants.ZTABLES + "/";
-    scf = context.getServerConfFactory();
-  }
-
-  static String toString(WatchedEvent event) {
-    return new StringBuilder("{path=").append(event.getPath()).append(",state=")
-        .append(event.getState()).append(",type=").append(event.getType()).append("}").toString();
-  }
-
-  @Override
-  public void process(WatchedEvent event) {
-    String path = event.getPath();
-    if (log.isTraceEnabled()) {
-      log.trace("WatchedEvent : {}", toString(event));
-    }
-
-    String tableIdString = null;
-    String key = null;
-
-    if (path != null) {
-      if (path.startsWith(tablesPrefix)) {
-        tableIdString = path.substring(tablesPrefix.length());
-        if (tableIdString.contains("/")) {
-          tableIdString = tableIdString.substring(0, tableIdString.indexOf('/'));
-          if (path.startsWith(tablesPrefix + tableIdString + Constants.ZTABLE_CONF + "/")) {
-            key = path
-                .substring((tablesPrefix + tableIdString + Constants.ZTABLE_CONF + "/").length());
-          }
-        }
-      }
-
-      if (tableIdString == null) {
-        log.warn("Zookeeper told me about a path I was not watching: {}, event {}", path,
-            toString(event));
-        return;
-      }
-    }
-    TableId tableId = TableId.of(tableIdString);
-
-    switch (event.getType()) {
-      case NodeDataChanged:
-        if (log.isTraceEnabled()) {
-          log.trace("EventNodeDataChanged {}", event.getPath());
-        }
-        if (key != null) {
-          scf.getTableConfiguration(tableId).propertyChanged(key);
-        }
-        break;
-      case NodeChildrenChanged:
-        scf.getTableConfiguration(tableId).propertiesChanged();
-        break;
-      case NodeDeleted:
-        if (key == null) {
-          // only remove the AccumuloConfiguration object when a
-          // table node is deleted, not when a tables property is
-          // deleted.
-          ServerConfigurationFactory.removeCachedTableConfiguration(context.getInstanceID(),
-              tableId);
-        }
-        break;
-      case None:
-        switch (event.getState()) {
-          case Expired:
-            log.info("Zookeeper node event type None, state=expired. Expire all table observers");
-            ServerConfigurationFactory.expireAllTableObservers();
-            break;
-          case SyncConnected:
-            break;
-          case Disconnected:
-            break;
-          default:
-            log.warn("EventNone event not handled {}", toString(event));
-        }
-        break;
-      case NodeCreated:
-        switch (event.getState()) {
-          case SyncConnected:
-            break;
-          default:
-            log.warn("Event NodeCreated event not handled {}", toString(event));
-        }
-        break;
-      default:
-        log.warn("Event not handled {}", toString(event));
-    }
-  }
-}
diff --git a/server/base/src/main/java/org/apache/accumulo/server/conf/TableConfiguration.java b/server/base/src/main/java/org/apache/accumulo/server/conf/TableConfiguration.java
index 28d7e25..6092fe9 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/conf/TableConfiguration.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/conf/TableConfiguration.java
@@ -28,9 +28,8 @@
 import java.util.function.Predicate;
 
 import org.apache.accumulo.core.Constants;
-import org.apache.accumulo.core.conf.ConfigurationObserver;
+import org.apache.accumulo.core.conf.AccumuloConfiguration;
 import org.apache.accumulo.core.conf.IterConfigUtil;
-import org.apache.accumulo.core.conf.ObservableConfiguration;
 import org.apache.accumulo.core.conf.Property;
 import org.apache.accumulo.core.data.TableId;
 import org.apache.accumulo.core.dataImpl.thrift.IterInfo;
@@ -42,15 +41,10 @@
 import org.apache.accumulo.server.ServerContext;
 import org.apache.accumulo.server.ServiceEnvironmentImpl;
 import org.apache.accumulo.server.conf.ZooCachePropertyAccessor.PropCacheKey;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
 
-import com.google.common.collect.ImmutableList;
 import com.google.common.collect.ImmutableMap;
-import com.google.common.collect.ImmutableMap.Builder;
 
-public class TableConfiguration extends ObservableConfiguration {
-  private static final Logger log = LoggerFactory.getLogger(TableConfiguration.class);
+public class TableConfiguration extends AccumuloConfiguration {
 
   private static final Map<PropCacheKey,ZooCache> propCaches = new java.util.HashMap<>();
 
@@ -62,7 +56,9 @@
 
   private final TableId tableId;
 
-  private EnumMap<IteratorScope,AtomicReference<ParsedIteratorConfig>> iteratorConfig;
+  private final EnumMap<IteratorScope,Deriver<ParsedIteratorConfig>> iteratorConfig;
+
+  private final Deriver<ScanDispatcher> scanDispatchDeriver;
 
   public TableConfiguration(ServerContext context, TableId tableId, NamespaceConfiguration parent) {
     this.context = requireNonNull(context);
@@ -71,8 +67,16 @@
 
     iteratorConfig = new EnumMap<>(IteratorScope.class);
     for (IteratorScope scope : IteratorScope.values()) {
-      iteratorConfig.put(scope, new AtomicReference<>(null));
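+      // a Deriver caches the parsed iterator config and re-derives it only when the
+      // table's properties change, replacing the manual updateCount tracking that
+      // getParsedIteratorConfig used to do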
+      iteratorConfig.put(scope, newDeriver(conf -> {
+        Map<String,Map<String,String>> allOpts = new HashMap<>();
+        List<IterInfo> iters =
+            IterConfigUtil.parseIterConf(scope, Collections.emptyList(), allOpts, conf);
+        return new ParsedIteratorConfig(iters, allOpts, conf.get(Property.TABLE_CLASSPATH));
+
+      }));
     }
+
+    scanDispatchDeriver = newDeriver(conf -> createScanDispatcher(conf, context, tableId));
   }
 
   void setZooCacheFactory(ZooCacheFactory zcf) {
@@ -84,8 +88,7 @@
       PropCacheKey key = new PropCacheKey(context.getInstanceID(), tableId.canonical());
       ZooCache propCache = propCaches.get(key);
       if (propCache == null) {
-        propCache = zcf.getZooCache(context.getZooKeepers(), context.getZooKeepersSessionTimeOut(),
-            new TableConfWatcher(context));
+        propCache = zcf.getZooCache(context.getZooKeepers(), context.getZooKeepersSessionTimeOut());
         propCaches.put(key, propCache);
       }
       return propCache;
@@ -95,46 +98,28 @@
   private ZooCachePropertyAccessor getPropCacheAccessor() {
     // updateAndGet below always calls compare and set, so avoid if not null
     ZooCachePropertyAccessor zcpa = propCacheAccessor.get();
-    if (zcpa != null)
+    if (zcpa != null) {
       return zcpa;
+    }
 
     return propCacheAccessor
         .updateAndGet(pca -> pca == null ? new ZooCachePropertyAccessor(getZooCache()) : pca);
   }
 
-  @Override
-  public void addObserver(ConfigurationObserver co) {
-    if (tableId == null) {
-      String err = "Attempt to add observer for non-table configuration";
-      log.error(err);
-      throw new RuntimeException(err);
-    }
-    iterator();
-    super.addObserver(co);
-  }
-
-  @Override
-  public void removeObserver(ConfigurationObserver co) {
-    if (tableId == null) {
-      String err = "Attempt to remove observer for non-table configuration";
-      log.error(err);
-      throw new RuntimeException(err);
-    }
-    super.removeObserver(co);
-  }
-
   private String getPath() {
     return context.getZooKeeperRoot() + Constants.ZTABLES + "/" + tableId + Constants.ZTABLE_CONF;
   }
 
   @Override
   public boolean isPropertySet(Property prop, boolean cacheAndWatch) {
-    if (!cacheAndWatch)
+    if (!cacheAndWatch) {
       throw new UnsupportedOperationException(
           "Table configuration only supports checking if a property is set in cache.");
+    }
 
-    if (getPropCacheAccessor().isPropertySet(prop, getPath()))
+    if (getPropCacheAccessor().isPropertySet(prop, getPath())) {
       return true;
+    }
 
     return parent.isPropertySet(prop, cacheAndWatch);
   }
@@ -195,18 +180,16 @@
     private final List<IterInfo> tableIters;
     private final Map<String,Map<String,String>> tableOpts;
     private final String context;
-    private final long updateCount;
 
     private ParsedIteratorConfig(List<IterInfo> ii, Map<String,Map<String,String>> opts,
-        String context, long updateCount) {
-      this.tableIters = ImmutableList.copyOf(ii);
-      Builder<String,Map<String,String>> imb = ImmutableMap.builder();
+        String context) {
+      this.tableIters = List.copyOf(ii);
+      var imb = ImmutableMap.<String,Map<String,String>>builder();
       for (Entry<String,Map<String,String>> entry : opts.entrySet()) {
-        imb.put(entry.getKey(), ImmutableMap.copyOf(entry.getValue()));
+        imb.put(entry.getKey(), Map.copyOf(entry.getValue()));
       }
       tableOpts = imb.build();
       this.context = context;
-      this.updateCount = updateCount;
     }
 
     public List<IterInfo> getIterInfo() {
@@ -223,71 +206,43 @@
   }
 
   public ParsedIteratorConfig getParsedIteratorConfig(IteratorScope scope) {
-    long count = getUpdateCount();
-    AtomicReference<ParsedIteratorConfig> ref = iteratorConfig.get(scope);
-    ParsedIteratorConfig pic = ref.get();
-    if (pic == null || pic.updateCount != count) {
-      Map<String,Map<String,String>> allOpts = new HashMap<>();
-      List<IterInfo> iters =
-          IterConfigUtil.parseIterConf(scope, Collections.emptyList(), allOpts, this);
-      ParsedIteratorConfig newPic =
-          new ParsedIteratorConfig(iters, allOpts, get(Property.TABLE_CLASSPATH), count);
-      ref.compareAndSet(pic, newPic);
-      pic = newPic;
-    }
-
-    return pic;
+    return iteratorConfig.get(scope).derive();
   }
 
-  public static class TablesScanDispatcher {
-    public final ScanDispatcher dispatcher;
-    public final long count;
+  private static ScanDispatcher createScanDispatcher(AccumuloConfiguration conf,
+      ServerContext context, TableId tableId) {
+    ScanDispatcher newDispatcher = Property.createTableInstanceFromPropertyName(conf,
+        Property.TABLE_SCAN_DISPATCHER, ScanDispatcher.class, null);
 
-    public TablesScanDispatcher(ScanDispatcher dispatcher, long count) {
-      this.dispatcher = dispatcher;
-      this.count = count;
-    }
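+    // gather the dispatcher options, stripping the table.scan.dispatcher.opts. prefix
+    // so init() receives bare option names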
+    var builder = ImmutableMap.<String,String>builder();
+    conf.getAllPropertiesWithPrefix(Property.TABLE_SCAN_DISPATCHER_OPTS).forEach((k, v) -> {
+      String optKey = k.substring(Property.TABLE_SCAN_DISPATCHER_OPTS.getKey().length());
+      builder.put(optKey, v);
+    });
+
+    Map<String,String> opts = builder.build();
+
+    newDispatcher.init(new ScanDispatcher.InitParameters() {
+      @Override
+      public TableId getTableId() {
+        return tableId;
+      }
+
+      @Override
+      public Map<String,String> getOptions() {
+        return opts;
+      }
+
+      @Override
+      public ServiceEnvironment getServiceEnv() {
+        return new ServiceEnvironmentImpl(context);
+      }
+    });
+
+    return newDispatcher;
   }
 
-  private AtomicReference<TablesScanDispatcher> scanDispatcherRef = new AtomicReference<>();
-
   public ScanDispatcher getScanDispatcher() {
-    long count = getUpdateCount();
-    TablesScanDispatcher currRef = scanDispatcherRef.get();
-    if (currRef == null || currRef.count != count) {
-      ScanDispatcher newDispatcher = Property.createTableInstanceFromPropertyName(this,
-          Property.TABLE_SCAN_DISPATCHER, ScanDispatcher.class, null);
-
-      Builder<String,String> builder = ImmutableMap.builder();
-      getAllPropertiesWithPrefix(Property.TABLE_SCAN_DISPATCHER_OPTS).forEach((k, v) -> {
-        String optKey = k.substring(Property.TABLE_SCAN_DISPATCHER_OPTS.getKey().length());
-        builder.put(optKey, v);
-      });
-
-      Map<String,String> opts = builder.build();
-
-      newDispatcher.init(new ScanDispatcher.InitParameters() {
-        @Override
-        public TableId getTableId() {
-          return tableId;
-        }
-
-        @Override
-        public Map<String,String> getOptions() {
-          return opts;
-        }
-
-        @Override
-        public ServiceEnvironment getServiceEnv() {
-          return new ServiceEnvironmentImpl(context);
-        }
-      });
-
-      TablesScanDispatcher newRef = new TablesScanDispatcher(newDispatcher, count);
-      scanDispatcherRef.compareAndSet(currRef, newRef);
-      currRef = newRef;
-    }
-
-    return currRef.dispatcher;
+    return scanDispatchDeriver.derive();
   }
 }
diff --git a/server/base/src/main/java/org/apache/accumulo/server/constraints/MetadataConstraints.java b/server/base/src/main/java/org/apache/accumulo/server/constraints/MetadataConstraints.java
index 4e9ad98..783b515 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/constraints/MetadataConstraints.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/constraints/MetadataConstraints.java
@@ -34,17 +34,18 @@
 import org.apache.accumulo.core.metadata.MetadataTable;
 import org.apache.accumulo.core.metadata.schema.DataFileValue;
 import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection;
+import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.BulkFileColumnFamily;
 import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.ChoppedColumnFamily;
 import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.ClonedColumnFamily;
 import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.DataFileColumnFamily;
 import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.LogColumnFamily;
 import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.ScanFileColumnFamily;
 import org.apache.accumulo.core.util.ColumnFQ;
+import org.apache.accumulo.core.util.cleaner.CleanerUtil;
 import org.apache.accumulo.fate.zookeeper.ZooCache;
 import org.apache.accumulo.fate.zookeeper.ZooLock;
 import org.apache.accumulo.fate.zookeeper.ZooUtil;
 import org.apache.accumulo.server.ServerContext;
-import org.apache.accumulo.server.util.MetadataTableUtil;
 import org.apache.accumulo.server.zookeeper.TransactionWatcher.Arbitrator;
 import org.apache.accumulo.server.zookeeper.TransactionWatcher.ZooArbitrator;
 import org.apache.hadoop.io.Text;
@@ -53,14 +54,13 @@
 
 public class MetadataConstraints implements Constraint {
 
+  private static final Logger log = LoggerFactory.getLogger(MetadataConstraints.class);
+
   private ZooCache zooCache = null;
   private String zooRoot = null;
 
-  private static final Logger log = LoggerFactory.getLogger(MetadataConstraints.class);
-
-  private static boolean[] validTableNameChars = new boolean[256];
-
-  {
+  private static final boolean[] validTableNameChars = new boolean[256];
+  static {
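+    // populate once per class; previously an instance initializer refilled this static array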
     for (int i = 0; i < 256; i++) {
       validTableNameChars[i] =
           ((i >= 'a' && i <= 'z') || (i >= '0' && i <= '9')) || i == '!' || i == '+';
@@ -230,7 +230,7 @@
           }
 
           if (!isSplitMutation && !isLocationMutation) {
-            long tid = MetadataTableUtil.getBulkLoadTid(new Value(tidString));
+            long tid = BulkFileColumnFamily.getBulkLoadTid(new Value(tidString));
 
             try {
               if (otherTidCount > 0 || !dataFiles.equals(loadedFiles) || !getArbitrator(context)
@@ -265,6 +265,7 @@
             .equals(TabletsSection.ServerColumnFamily.LOCK_COLUMN)) {
           if (zooCache == null) {
             zooCache = new ZooCache(context.getZooReaderWriter(), null);
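+            // register a Cleaner action to clear the cache when this object becomes
+            // unreachable, replacing the finalize() override removed below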
+            CleanerUtil.zooCacheClearer(this, zooCache);
           }
 
           if (zooRoot == null) {
@@ -327,9 +328,4 @@
     return null;
   }
 
-  @Override
-  protected void finalize() {
-    if (zooCache != null)
-      zooCache.clear();
-  }
 }
diff --git a/server/base/src/main/java/org/apache/accumulo/server/fs/FileRef.java b/server/base/src/main/java/org/apache/accumulo/server/fs/FileRef.java
index f0f21ae..c149cdd 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/fs/FileRef.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/fs/FileRef.java
@@ -17,6 +17,8 @@
 package org.apache.accumulo.server.fs;
 
 import org.apache.accumulo.core.data.Key;
+import org.apache.accumulo.core.data.TableId;
+import org.apache.accumulo.core.metadata.schema.Ample;
 import org.apache.accumulo.server.fs.VolumeManager.FileType;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.io.Text;
@@ -26,7 +28,7 @@
  * contain old relative file references. This class keeps track of the short file reference, so it
  * can be removed properly from the metadata tables.
  */
-public class FileRef implements Comparable<FileRef> {
+public class FileRef implements Ample.FileMeta, Comparable<FileRef> {
   private String metaReference; // something like ../2/d-00000/A00001.rf
   private Path fullReference; // something like hdfs://nn:9001/accumulo/tables/2/d-00000/A00001.rf
   private Path suffix;
@@ -35,6 +37,10 @@
     this(key.getColumnQualifier().toString(), fs.getFullPath(key));
   }
 
+  public FileRef(VolumeManager fs, String metaReference, TableId tableId) {
+    this(metaReference, fs.getFullPath(tableId, metaReference));
+  }
+
   public FileRef(String metaReference, Path fullReference) {
     this.metaReference = metaReference;
     this.fullReference = fullReference;
@@ -50,10 +56,12 @@
     return fullReference.toString();
   }
 
+  @Override
   public Path path() {
     return fullReference;
   }
 
+  @Override
   public Text meta() {
     return new Text(metaReference);
   }
diff --git a/server/base/src/main/java/org/apache/accumulo/server/fs/PreferredVolumeChooser.java b/server/base/src/main/java/org/apache/accumulo/server/fs/PreferredVolumeChooser.java
index 0f9d626..d290e9b 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/fs/PreferredVolumeChooser.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/fs/PreferredVolumeChooser.java
@@ -22,7 +22,6 @@
 
 import org.apache.accumulo.core.volume.Volume;
 import org.apache.accumulo.server.fs.VolumeChooserEnvironment.ChooserScope;
-import org.apache.commons.lang3.StringUtils;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -126,8 +125,8 @@
   private String[] parsePreferred(String property, String preferredVolumes, String[] options) {
     log.trace("Found {} = {}", property, preferredVolumes);
 
-    Set<String> preferred = Arrays.stream(StringUtils.split(preferredVolumes, ','))
-        .map(String::trim).collect(Collectors.toSet());
+    Set<String> preferred =
+        Arrays.stream(preferredVolumes.split(",")).map(String::trim).collect(Collectors.toSet());
     if (preferred.isEmpty()) {
       String msg = "No volumes could be parsed from '" + property + "', which had a value of '"
           + preferredVolumes + "'";
diff --git a/server/base/src/main/java/org/apache/accumulo/server/fs/VolumeManagerImpl.java b/server/base/src/main/java/org/apache/accumulo/server/fs/VolumeManagerImpl.java
index 0edebc8..75f73f1 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/fs/VolumeManagerImpl.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/fs/VolumeManagerImpl.java
@@ -41,7 +41,6 @@
 import org.apache.accumulo.core.volume.VolumeConfiguration;
 import org.apache.accumulo.server.fs.VolumeChooser.VolumeChooserException;
 import org.apache.commons.lang3.ArrayUtils;
-import org.apache.commons.lang3.StringUtils;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.ContentSummary;
 import org.apache.hadoop.fs.CreateFlag;
@@ -454,7 +453,7 @@
     // in the relative path. Fail when this doesn't appear to happen.
     if (fileType == FileType.TABLE) {
       // Trailing slash doesn't create an additional element
-      String[] pathComponents = StringUtils.split(path, Path.SEPARATOR_CHAR);
+      String[] pathComponents = path.split(Path.SEPARATOR);
 
       // Is an rfile
       if (path.endsWith(RFILE_SUFFIX)) {
diff --git a/server/base/src/main/java/org/apache/accumulo/server/fs/VolumeUtil.java b/server/base/src/main/java/org/apache/accumulo/server/fs/VolumeUtil.java
index 6805962..4f0b3ab 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/fs/VolumeUtil.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/fs/VolumeUtil.java
@@ -18,7 +18,6 @@
 
 import java.io.FileNotFoundException;
 import java.io.IOException;
-import java.security.SecureRandom;
 import java.util.ArrayList;
 import java.util.HashSet;
 import java.util.List;
@@ -27,6 +26,7 @@
 import java.util.TreeMap;
 
 import org.apache.accumulo.core.dataImpl.KeyExtent;
+import org.apache.accumulo.core.metadata.RootTable;
 import org.apache.accumulo.core.metadata.schema.DataFileValue;
 import org.apache.accumulo.core.protobuf.ProtobufUtil;
 import org.apache.accumulo.core.tabletserver.log.LogEntry;
@@ -43,7 +43,6 @@
 import org.apache.hadoop.fs.FSDataInputStream;
 import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.FileSystem;
-import org.apache.hadoop.fs.FileUtil;
 import org.apache.hadoop.fs.Path;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -54,7 +53,6 @@
 public class VolumeUtil {
 
   private static final Logger log = LoggerFactory.getLogger(VolumeUtil.class);
-  private static final SecureRandom rand = new SecureRandom();
 
   private static boolean isActiveVolume(ServerContext context, Path dir) {
 
@@ -162,7 +160,7 @@
     String newLocation = switchVolume(location, FileType.TABLE,
         ServerConstants.getVolumeReplacements(context.getConfiguration(), context.getHadoopConf()));
     if (newLocation != null) {
-      MetadataTableUtil.setRootTabletDir(context, newLocation);
+      context.getAmple().mutateTablet(RootTable.EXTENT).putDir(newLocation).mutate();
       log.info("Volume replaced: {} -> {}", location, newLocation);
       return new Path(newLocation).toString();
     }
@@ -202,21 +200,17 @@
       }
     }
 
-    if (extent.isRootTablet()) {
-      ret.datafiles = tabletFiles.datafiles;
-    } else {
-      for (Entry<FileRef,DataFileValue> entry : tabletFiles.datafiles.entrySet()) {
-        String metaPath = entry.getKey().meta().toString();
-        String switchedPath = switchVolume(metaPath, FileType.TABLE, replacements);
-        if (switchedPath != null) {
-          filesToRemove.add(entry.getKey());
-          FileRef switchedRef = new FileRef(switchedPath, new Path(switchedPath));
-          filesToAdd.put(switchedRef, entry.getValue());
-          ret.datafiles.put(switchedRef, entry.getValue());
-          log.debug("Replacing volume {} : {} -> {}", extent, metaPath, switchedPath);
-        } else {
-          ret.datafiles.put(entry.getKey(), entry.getValue());
-        }
+    for (Entry<FileRef,DataFileValue> entry : tabletFiles.datafiles.entrySet()) {
+      String metaPath = entry.getKey().meta().toString();
+      String switchedPath = switchVolume(metaPath, FileType.TABLE, replacements);
+      if (switchedPath != null) {
+        filesToRemove.add(entry.getKey());
+        FileRef switchedRef = new FileRef(switchedPath, new Path(switchedPath));
+        filesToAdd.put(switchedRef, entry.getValue());
+        ret.datafiles.put(switchedRef, entry.getValue());
+        log.debug("Replacing volume {} : {} -> {}", extent, metaPath, switchedPath);
+      } else {
+        ret.datafiles.put(entry.getKey(), entry.getValue());
       }
     }
 
@@ -275,52 +269,10 @@
         + Path.SEPARATOR + dir.getName());
 
     log.info("Updating directory for {} from {} to {}", extent, dir, newDir);
-    if (extent.isRootTablet()) {
-      // the root tablet is special case, its files need to be copied if its dir is changed
 
-      // this code needs to be idempotent
+    MetadataTableUtil.updateTabletDir(extent, newDir.toString(), context, zooLock);
+    return newDir.toString();
 
-      FileSystem fs1 = vm.getVolumeByPath(dir).getFileSystem();
-      FileSystem fs2 = vm.getVolumeByPath(newDir).getFileSystem();
-
-      if (!same(fs1, dir, fs2, newDir)) {
-        if (fs2.exists(newDir)) {
-          Path newDirBackup = getBackupName(newDir);
-          // never delete anything because were dealing with the root tablet
-          // one reason this dir may exist is because this method failed previously
-          log.info("renaming {} to {}", newDir, newDirBackup);
-          if (!fs2.rename(newDir, newDirBackup)) {
-            throw new IOException("Failed to rename " + newDir + " to " + newDirBackup);
-          }
-        }
-
-        // do a lot of logging since this is the root tablet
-        log.info("copying {} to {}", dir, newDir);
-        if (!FileUtil.copy(fs1, dir, fs2, newDir, false, context.getHadoopConf())) {
-          throw new IOException("Failed to copy " + dir + " to " + newDir);
-        }
-
-        // only set the new location in zookeeper after a successful copy
-        log.info("setting root tablet location to {}", newDir);
-        MetadataTableUtil.setRootTabletDir(context, newDir.toString());
-
-        // rename the old dir to avoid confusion when someone looks at filesystem... its ok if we
-        // fail here and this does not happen because the location in
-        // zookeeper is the authority
-        Path dirBackup = getBackupName(dir);
-        log.info("renaming {} to {}", dir, dirBackup);
-        fs1.rename(dir, dirBackup);
-
-      } else {
-        log.info("setting root tablet location to {}", newDir);
-        MetadataTableUtil.setRootTabletDir(context, newDir.toString());
-      }
-
-      return newDir.toString();
-    } else {
-      MetadataTableUtil.updateTabletDir(extent, newDir.toString(), context, zooLock);
-      return newDir.toString();
-    }
   }
 
   static boolean same(FileSystem fs1, Path dir, FileSystem fs2, Path newDir)
@@ -371,10 +323,4 @@
     }
 
   }
-
-  private static Path getBackupName(Path path) {
-    return new Path(path.getParent(), path.getName() + "_" + System.currentTimeMillis() + "_"
-        + (rand.nextInt(Integer.MAX_VALUE) + 1) + ".bak");
-  }
-
 }
diff --git a/server/base/src/main/java/org/apache/accumulo/server/init/Initialize.java b/server/base/src/main/java/org/apache/accumulo/server/init/Initialize.java
index 414e5ad..236637b 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/init/Initialize.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/init/Initialize.java
@@ -40,6 +40,7 @@
 import org.apache.accumulo.core.client.AccumuloSecurityException;
 import org.apache.accumulo.core.client.IteratorSetting;
 import org.apache.accumulo.core.client.IteratorSetting.Column;
+import org.apache.accumulo.core.client.admin.TimeType;
 import org.apache.accumulo.core.clientImpl.Namespace;
 import org.apache.accumulo.core.conf.AccumuloConfiguration;
 import org.apache.accumulo.core.conf.DefaultConfiguration;
@@ -69,6 +70,8 @@
 import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.LogColumnFamily;
 import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.ServerColumnFamily;
 import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.TabletColumnFamily;
+import org.apache.accumulo.core.metadata.schema.MetadataTime;
+import org.apache.accumulo.core.metadata.schema.RootTabletMetadata;
 import org.apache.accumulo.core.replication.ReplicationConstants;
 import org.apache.accumulo.core.replication.ReplicationSchema.StatusSection;
 import org.apache.accumulo.core.replication.ReplicationSchema.WorkSection;
@@ -93,12 +96,12 @@
 import org.apache.accumulo.server.fs.VolumeManagerImpl;
 import org.apache.accumulo.server.iterators.MetadataBulkLoadFilter;
 import org.apache.accumulo.server.log.WalStateManager;
+import org.apache.accumulo.server.metadata.RootGcCandidates;
 import org.apache.accumulo.server.replication.ReplicationUtil;
 import org.apache.accumulo.server.replication.StatusCombiner;
 import org.apache.accumulo.server.security.AuditedSecurityOperation;
 import org.apache.accumulo.server.security.SecurityUtil;
 import org.apache.accumulo.server.tables.TableManager;
-import org.apache.accumulo.server.tablets.TabletTime;
 import org.apache.accumulo.server.util.ReplicationTableUtil;
 import org.apache.accumulo.server.util.SystemPropUtil;
 import org.apache.accumulo.server.util.TablePropUtil;
@@ -363,15 +366,18 @@
         fs.choose(chooserEnv, configuredVolumes) + Path.SEPARATOR + ServerConstants.TABLE_DIR
             + Path.SEPARATOR + RootTable.ID + RootTable.ROOT_TABLET_LOCATION).toString();
 
+    String ext = FileOperations.getNewFileExtension(DefaultConfiguration.getInstance());
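+    // choose the root tablet's first file name up front so it can be recorded both
+    // in ZooKeeper (RootTabletMetadata) and in the filesystem during init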
+    String rootTabletFileName = rootTabletDir + Path.SEPARATOR + "00000_00000." + ext;
+
     try {
-      initZooKeeper(opts, uuid.toString(), instanceNamePath, rootTabletDir);
+      initZooKeeper(opts, uuid.toString(), instanceNamePath, rootTabletDir, rootTabletFileName);
     } catch (Exception e) {
       log.error("FATAL: Failed to initialize zookeeper", e);
       return false;
     }
 
     try {
-      initFileSystem(siteConfig, hadoopConf, fs, uuid, rootTabletDir);
+      initFileSystem(siteConfig, hadoopConf, fs, uuid, rootTabletDir, rootTabletFileName);
     } catch (Exception e) {
       log.error("FATAL Failed to initialize filesystem", e);
 
@@ -402,7 +408,7 @@
       // If they did not, fall back to the credentials present in accumulo.properties that the
       // servers will use themselves.
       try {
-        final SiteConfiguration siteConf = context.getServerConfFactory().getSiteConfiguration();
+        final var siteConf = context.getServerConfFactory().getSiteConfiguration();
         if (siteConf.getBoolean(Property.INSTANCE_RPC_SASL_ENABLED)) {
           final UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
           // We don't have any valid creds to talk to HDFS
@@ -487,7 +493,8 @@
   }
 
   private void initFileSystem(SiteConfiguration siteConfig, Configuration hadoopConf,
-      VolumeManager fs, UUID uuid, String rootTabletDir) throws IOException {
+      VolumeManager fs, UUID uuid, String rootTabletDir, String rootTabletFileName)
+      throws IOException {
     initDirs(fs, uuid, VolumeConfiguration.getVolumeUris(siteConfig, hadoopConf), false);
 
     // initialize initial system tables config in zookeeper
@@ -521,7 +528,6 @@
     createMetadataFile(fs, metadataFileName, siteConfig, replicationTablet);
 
     // populate the root tablet with info about the metadata table's two initial tablets
-    String rootTabletFileName = rootTabletDir + Path.SEPARATOR + "00000_00000." + ext;
     Text splitPoint = TabletsSection.getRange().getEndKey().getRow();
     Tablet tablesTablet =
         new Tablet(MetadataTable.ID, tableMetadataTabletDir, null, splitPoint, metadataFileName);
@@ -570,8 +576,7 @@
     Value EMPTY_SIZE = new DataFileValue(0, 0).encodeAsValue();
     Text extent = new Text(TabletsSection.getRow(tablet.tableId, tablet.endRow));
     addEntry(map, extent, DIRECTORY_COLUMN, new Value(tablet.dir.getBytes(UTF_8)));
-    addEntry(map, extent, TIME_COLUMN,
-        new Value((TabletTime.LOGICAL_TIME_ID + "0").getBytes(UTF_8)));
+    addEntry(map, extent, TIME_COLUMN, new Value(new MetadataTime(0, TimeType.LOGICAL).encode()));
     addEntry(map, extent, PREV_ROW_COLUMN, KeyExtent.encodePrevEndRow(tablet.prevEndRow));
     for (String file : tablet.files) {
       addEntry(map, extent, new ColumnFQ(DataFileColumnFamily.NAME, new Text(file)), EMPTY_SIZE);
@@ -602,7 +607,8 @@
   }
 
   private static void initZooKeeper(Opts opts, String uuid, String instanceNamePath,
-      String rootTabletDir) throws KeeperException, InterruptedException {
+      String rootTabletDir, String rootTabletFileName)
+      throws KeeperException, InterruptedException {
     // setup basic data in zookeeper
     zoo.putPersistentData(Constants.ZROOT, new byte[0], -1, NodeExistsPolicy.SKIP,
         Ids.OPEN_ACL_UNSAFE);
@@ -639,14 +645,11 @@
         NodeExistsPolicy.FAIL);
     zoo.putPersistentData(zkInstanceRoot + Constants.ZPROBLEMS, EMPTY_BYTE_ARRAY,
         NodeExistsPolicy.FAIL);
-    zoo.putPersistentData(zkInstanceRoot + RootTable.ZROOT_TABLET, EMPTY_BYTE_ARRAY,
+    zoo.putPersistentData(zkInstanceRoot + RootTable.ZROOT_TABLET,
+        RootTabletMetadata.getInitialJson(rootTabletDir, rootTabletFileName),
         NodeExistsPolicy.FAIL);
-    zoo.putPersistentData(zkInstanceRoot + RootTable.ZROOT_TABLET_WALOGS, EMPTY_BYTE_ARRAY,
-        NodeExistsPolicy.FAIL);
-    zoo.putPersistentData(zkInstanceRoot + RootTable.ZROOT_TABLET_CURRENT_LOGS, EMPTY_BYTE_ARRAY,
-        NodeExistsPolicy.FAIL);
-    zoo.putPersistentData(zkInstanceRoot + RootTable.ZROOT_TABLET_PATH,
-        rootTabletDir.getBytes(UTF_8), NodeExistsPolicy.FAIL);
+    zoo.putPersistentData(zkInstanceRoot + RootTable.ZROOT_TABLET_GC_CANDIDATES,
+        new RootGcCandidates().toJson().getBytes(UTF_8), NodeExistsPolicy.FAIL);
     zoo.putPersistentData(zkInstanceRoot + Constants.ZMASTERS, EMPTY_BYTE_ARRAY,
         NodeExistsPolicy.FAIL);
     zoo.putPersistentData(zkInstanceRoot + Constants.ZMASTER_LOCK, EMPTY_BYTE_ARRAY,
@@ -951,7 +954,7 @@
   public void execute(final String[] args) {
     Opts opts = new Opts();
     opts.parseArgs("accumulo init", args);
-    SiteConfiguration siteConfig = new SiteConfiguration();
+    var siteConfig = SiteConfiguration.auto();
 
     try {
       setZooReaderWriter(new ZooReaderWriter(siteConfig));
diff --git a/server/base/src/main/java/org/apache/accumulo/server/iterators/MetadataBulkLoadFilter.java b/server/base/src/main/java/org/apache/accumulo/server/iterators/MetadataBulkLoadFilter.java
index 38ebe85..816e900 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/iterators/MetadataBulkLoadFilter.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/iterators/MetadataBulkLoadFilter.java
@@ -28,8 +28,8 @@
 import org.apache.accumulo.core.iterators.IteratorUtil.IteratorScope;
 import org.apache.accumulo.core.iterators.SortedKeyValueIterator;
 import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection;
+import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.BulkFileColumnFamily;
 import org.apache.accumulo.server.ServerContext;
-import org.apache.accumulo.server.util.MetadataTableUtil;
 import org.apache.accumulo.server.zookeeper.TransactionWatcher.Arbitrator;
 import org.apache.accumulo.server.zookeeper.TransactionWatcher.ZooArbitrator;
 import org.slf4j.Logger;
@@ -51,7 +51,7 @@
   @Override
   public boolean accept(Key k, Value v) {
     if (!k.isDeleted() && k.compareColumnFamily(TabletsSection.BulkFileColumnFamily.NAME) == 0) {
-      long txid = MetadataTableUtil.getBulkLoadTid(v);
+      long txid = BulkFileColumnFamily.getBulkLoadTid(v);
 
       Status status = bulkTxStatusCache.get(txid);
       if (status == null) {
diff --git a/server/base/src/main/java/org/apache/accumulo/server/master/balancer/DefaultLoadBalancer.java b/server/base/src/main/java/org/apache/accumulo/server/master/balancer/DefaultLoadBalancer.java
index 021b5ab..036d555 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/master/balancer/DefaultLoadBalancer.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/master/balancer/DefaultLoadBalancer.java
@@ -150,8 +150,7 @@
       }
 
-      // order from low to high
-      Collections.sort(totals);
-      Collections.reverse(totals);
+      // order from high to low
+      Collections.sort(totals, Collections.reverseOrder());
       int even = total / totals.size();
       int numServersOverEven = total % totals.size();
 
@@ -205,10 +204,11 @@
   List<TabletMigration> move(ServerCounts tooMuch, ServerCounts tooLittle, int count,
       Map<TableId,Map<KeyExtent,TabletStats>> donerTabletStats) {
 
-    List<TabletMigration> result = new ArrayList<>();
-    if (count == 0)
-      return result;
+    if (count == 0) {
+      return Collections.emptyList();
+    }
 
+    List<TabletMigration> result = new ArrayList<>();
     // Copy counts so we can update them as we propose migrations
     Map<TableId,Integer> tooMuchMap = tabletCountsPerTable(tooMuch.status);
     Map<TableId,Integer> tooLittleMap = tabletCountsPerTable(tooLittle.status);
diff --git a/server/base/src/main/java/org/apache/accumulo/server/master/balancer/GroupBalancer.java b/server/base/src/main/java/org/apache/accumulo/server/master/balancer/GroupBalancer.java
index fb0cf9d..83a71a6 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/master/balancer/GroupBalancer.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/master/balancer/GroupBalancer.java
@@ -19,6 +19,8 @@
 
 import static com.google.common.base.Preconditions.checkArgument;
 import static com.google.common.base.Preconditions.checkState;
+import static org.apache.accumulo.core.metadata.schema.TabletMetadata.ColumnType.LOCATION;
+import static org.apache.accumulo.core.metadata.schema.TabletMetadata.ColumnType.PREV_ROW;
 
 import java.util.ArrayList;
 import java.util.Collection;
@@ -79,8 +81,8 @@
   protected Iterable<Pair<KeyExtent,Location>> getLocationProvider() {
     return () -> {
       try {
-        return TabletsMetadata.builder().forTable(tableId).fetchLocation().fetchPrev()
-            .build(context).stream().map(tm -> {
+        return TabletsMetadata.builder().forTable(tableId).fetch(LOCATION, PREV_ROW).build(context)
+            .stream().map(tm -> {
               Location loc = Location.NONE;
               if (tm.hasCurrent()) {
                 loc = new Location(new TServerInstance(tm.getLocation()));
diff --git a/server/base/src/main/java/org/apache/accumulo/server/master/balancer/HostRegexTableLoadBalancer.java b/server/base/src/main/java/org/apache/accumulo/server/master/balancer/HostRegexTableLoadBalancer.java
index 8258bf3..8ef2048 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/master/balancer/HostRegexTableLoadBalancer.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/master/balancer/HostRegexTableLoadBalancer.java
@@ -31,10 +31,12 @@
 import java.util.Set;
 import java.util.SortedMap;
 import java.util.TreeMap;
+import java.util.concurrent.TimeUnit;
 import java.util.regex.Pattern;
 
 import org.apache.accumulo.core.client.admin.TableOperations;
-import org.apache.accumulo.core.conf.ConfigurationObserver;
+import org.apache.accumulo.core.conf.AccumuloConfiguration;
+import org.apache.accumulo.core.conf.AccumuloConfiguration.Deriver;
 import org.apache.accumulo.core.conf.ConfigurationTypeHelper;
 import org.apache.accumulo.core.conf.Property;
 import org.apache.accumulo.core.data.TableId;
@@ -43,7 +45,6 @@
 import org.apache.accumulo.core.master.thrift.TabletServerStatus;
 import org.apache.accumulo.core.tabletserver.thrift.TabletStats;
 import org.apache.accumulo.server.ServerContext;
-import org.apache.accumulo.server.conf.ServerConfiguration;
 import org.apache.accumulo.server.master.state.TServerInstance;
 import org.apache.accumulo.server.master.state.TabletMigration;
 import org.apache.commons.lang3.builder.ToStringBuilder;
@@ -52,8 +53,10 @@
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
+import com.google.common.cache.CacheBuilder;
+import com.google.common.cache.CacheLoader;
+import com.google.common.cache.LoadingCache;
 import com.google.common.collect.HashMultimap;
-import com.google.common.collect.ImmutableMap;
 import com.google.common.collect.Iterables;
 import com.google.common.collect.Multimap;
 
@@ -82,7 +85,7 @@
  * <b>table.custom.balancer.host.regex.max.outstanding.migrations</b>
  *
  */
-public class HostRegexTableLoadBalancer extends TableLoadBalancer implements ConfigurationObserver {
+public class HostRegexTableLoadBalancer extends TableLoadBalancer {
 
   private static final String PROP_PREFIX = Property.TABLE_ARBITRARY_PROP_PREFIX.getKey();
 
@@ -101,22 +104,83 @@
   public static final String HOST_BALANCER_OUTSTANDING_MIGRATIONS_KEY =
       PROP_PREFIX + "balancer.host.regex.max.outstanding.migrations";
 
-  protected long oobCheckMillis =
-      ConfigurationTypeHelper.getTimeInMillis(HOST_BALANCER_OOB_DEFAULT);
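+  // extracts the per-table host regex properties, skipping this balancer's own tuning keys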
+  private static Map<String,String> getRegexes(AccumuloConfiguration aconf) {
+    Map<String,String> regexes = new HashMap<>();
+    Map<String,String> customProps =
+        aconf.getAllPropertiesWithPrefix(Property.TABLE_ARBITRARY_PROP_PREFIX);
+
+    if (customProps != null && customProps.size() > 0) {
+      for (Entry<String,String> customProp : customProps.entrySet()) {
+        if (customProp.getKey().startsWith(HOST_BALANCER_PREFIX)) {
+          if (customProp.getKey().equals(HOST_BALANCER_OOB_CHECK_KEY)
+              || customProp.getKey().equals(HOST_BALANCER_REGEX_USING_IPS_KEY)
+              || customProp.getKey().equals(HOST_BALANCER_REGEX_MAX_MIGRATIONS_KEY)
+              || customProp.getKey().equals(HOST_BALANCER_OUTSTANDING_MIGRATIONS_KEY)) {
+            continue;
+          }
+          String tableName = customProp.getKey().substring(HOST_BALANCER_PREFIX.length());
+          String regex = customProp.getValue();
+          regexes.put(tableName, regex);
+        }
+      }
+    }
+
+    return Map.copyOf(regexes);
+  }
+
+  /**
+   * Host regex table load balancer configuration, derived from the system configuration.
+   */
+  static class HrtlbConf {
+
+    protected long oobCheckMillis =
+        ConfigurationTypeHelper.getTimeInMillis(HOST_BALANCER_OOB_DEFAULT);
+    private int maxTServerMigrations = HOST_BALANCER_REGEX_MAX_MIGRATIONS_DEFAULT;
+    private int maxOutstandingMigrations = DEFAULT_OUTSTANDING_MIGRATIONS;
+    private boolean isIpBasedRegex = false;
+    private final Map<String,String> regexes;
+    private final Map<String,Pattern> poolNameToRegexPattern;
+
+    HrtlbConf(AccumuloConfiguration aconf) {
+      String oobProperty = aconf.get(HOST_BALANCER_OOB_CHECK_KEY);
+      if (oobProperty != null) {
+        oobCheckMillis = ConfigurationTypeHelper.getTimeInMillis(oobProperty);
+      }
+      String ipBased = aconf.get(HOST_BALANCER_REGEX_USING_IPS_KEY);
+      if (ipBased != null) {
+        isIpBasedRegex = Boolean.parseBoolean(ipBased);
+      }
+      String migrations = aconf.get(HOST_BALANCER_REGEX_MAX_MIGRATIONS_KEY);
+      if (migrations != null) {
+        maxTServerMigrations = Integer.parseInt(migrations);
+      }
+      String outstanding = aconf.get(HOST_BALANCER_OUTSTANDING_MIGRATIONS_KEY);
+      if (outstanding != null) {
+        maxOutstandingMigrations = Integer.parseInt(outstanding);
+      }
+
+      this.regexes = getRegexes(aconf);
+
+      Map<String,Pattern> poolNameToRegexPatternBuilder = new HashMap<>();
+      regexes.forEach((k, v) -> {
+        poolNameToRegexPatternBuilder.put(k, Pattern.compile(v));
+      });
+
+      poolNameToRegexPattern = Map.copyOf(poolNameToRegexPatternBuilder);
+    }
+  }
 
   private static final long ONE_HOUR = 60 * 60 * 1000;
   private static final Set<KeyExtent> EMPTY_MIGRATIONS = Collections.emptySet();
-
-  private volatile Map<TableId,String> tableIdToTableName = null;
-  private volatile Map<String,Pattern> poolNameToRegexPattern = null;
   private volatile long lastOOBCheck = System.currentTimeMillis();
-  private volatile boolean isIpBasedRegex = false;
   private Map<String,SortedMap<TServerInstance,TabletServerStatus>> pools = new HashMap<>();
-  private volatile int maxTServerMigrations = HOST_BALANCER_REGEX_MAX_MIGRATIONS_DEFAULT;
-  private volatile int maxOutstandingMigrations = DEFAULT_OUTSTANDING_MIGRATIONS;
   private final Map<KeyExtent,TabletMigration> migrationsFromLastPass = new HashMap<>();
   private final Map<String,Long> tableToTimeSinceNoMigrations = new HashMap<>();
 
+  private Deriver<HrtlbConf> hrtlbConf;
+  private LoadingCache<TableId,Deriver<Map<String,String>>> tablesRegExCache;
+
   /**
    * Group the set of current tservers by pool name. Tservers that don't match a regex are put into
    * a default pool. This could be expensive in the terms of the amount of time to recompute the
@@ -170,7 +234,7 @@
    */
   protected List<String> getPoolNamesForHost(String host) {
     String test = host;
-    if (!isIpBasedRegex) {
+    if (!hrtlbConf.derive().isIpBasedRegex) {
       try {
         test = getNameFromIp(host);
       } catch (UnknownHostException e1) {
@@ -180,7 +244,7 @@
       }
     }
     List<String> pools = new ArrayList<>();
-    for (Entry<String,Pattern> e : poolNameToRegexPattern.entrySet()) {
+    for (Entry<String,Pattern> e : hrtlbConf.derive().poolNameToRegexPattern.entrySet()) {
       if (e.getValue().matcher(test).matches()) {
         pools.add(e.getKey());
       }
@@ -195,6 +259,24 @@
     return InetAddress.getByName(hostIp).getHostName();
   }
 
+  private void checkTableConfig(TableId tableId) {
+    Map<String,String> tableRegexes = tablesRegExCache.getUnchecked(tableId).derive();
+
+    if (!hrtlbConf.derive().regexes.equals(tableRegexes)) {
+      LoggerFactory.getLogger(HostRegexTableLoadBalancer.class).warn(
+          "Table id {} has different config than system.  The per table config is ignored.",
+          tableId);
+    }
+  }
+
+  private Map<TableId,String> createTableNameMap(Map<String,String> tableIdMap) {
+    HashMap<TableId,String> tableNameMap = new HashMap<>();
+    tableIdMap.forEach((tableName, tableId) -> {
+      tableNameMap.put(TableId.of(tableId), tableName);
+    });
+    return tableNameMap;
+  }
+
   /**
    * Matches table name against pool names, returns matching pool name or DEFAULT_POOL.
    *
@@ -206,106 +288,58 @@
     if (tableName == null) {
       return DEFAULT_POOL;
     }
-    return poolNameToRegexPattern.containsKey(tableName) ? tableName : DEFAULT_POOL;
-  }
-
-  /**
-   * Parse configuration and extract properties
-   *
-   * @param conf
-   *          server configuration
-   */
-  protected void parseConfiguration(ServerConfiguration conf) {
-    TableOperations t = getTableOperations();
-    if (t == null) {
-      throw new RuntimeException("Table Operations cannot be null");
-    }
-    Map<TableId,String> tableIdToTableNameBuilder = new HashMap<>();
-    Map<String,Pattern> poolNameToRegexPatternBuilder = new HashMap<>();
-    for (Entry<String,String> table : t.tableIdMap().entrySet()) {
-      TableId tableId = TableId.of(table.getValue());
-      tableIdToTableNameBuilder.put(tableId, table.getKey());
-      conf.getTableConfiguration(tableId).addObserver(this);
-      Map<String,String> customProps = conf.getTableConfiguration(tableId)
-          .getAllPropertiesWithPrefix(Property.TABLE_ARBITRARY_PROP_PREFIX);
-      if (customProps != null && customProps.size() > 0) {
-        for (Entry<String,String> customProp : customProps.entrySet()) {
-          if (customProp.getKey().startsWith(HOST_BALANCER_PREFIX)) {
-            if (customProp.getKey().equals(HOST_BALANCER_OOB_CHECK_KEY)
-                || customProp.getKey().equals(HOST_BALANCER_REGEX_USING_IPS_KEY)
-                || customProp.getKey().equals(HOST_BALANCER_REGEX_MAX_MIGRATIONS_KEY)
-                || customProp.getKey().equals(HOST_BALANCER_OUTSTANDING_MIGRATIONS_KEY)) {
-              continue;
-            }
-            String tableName = customProp.getKey().substring(HOST_BALANCER_PREFIX.length());
-            String regex = customProp.getValue();
-            poolNameToRegexPatternBuilder.put(tableName, Pattern.compile(regex));
-          }
-        }
-      }
-    }
-
-    tableIdToTableName = ImmutableMap.copyOf(tableIdToTableNameBuilder);
-    poolNameToRegexPattern = ImmutableMap.copyOf(poolNameToRegexPatternBuilder);
-
-    String oobProperty = conf.getSystemConfiguration().get(HOST_BALANCER_OOB_CHECK_KEY);
-    if (oobProperty != null) {
-      oobCheckMillis = ConfigurationTypeHelper.getTimeInMillis(oobProperty);
-    }
-    String ipBased = conf.getSystemConfiguration().get(HOST_BALANCER_REGEX_USING_IPS_KEY);
-    if (ipBased != null) {
-      isIpBasedRegex = Boolean.parseBoolean(ipBased);
-    }
-    String migrations = conf.getSystemConfiguration().get(HOST_BALANCER_REGEX_MAX_MIGRATIONS_KEY);
-    if (migrations != null) {
-      maxTServerMigrations = Integer.parseInt(migrations);
-    }
-    String outstanding =
-        conf.getSystemConfiguration().get(HOST_BALANCER_OUTSTANDING_MIGRATIONS_KEY);
-    if (outstanding != null) {
-      this.maxOutstandingMigrations = Integer.parseInt(outstanding);
-    }
-    LOG.info("{}", this);
+    return hrtlbConf.derive().poolNameToRegexPattern.containsKey(tableName) ? tableName
+        : DEFAULT_POOL;
   }
 
   @Override
   public String toString() {
+    HrtlbConf myConf = hrtlbConf.derive();
     ToStringBuilder buf = new ToStringBuilder(this, ToStringStyle.SHORT_PREFIX_STYLE);
-    buf.append("\nTablet Out Of Bounds Check Interval", this.oobCheckMillis);
-    buf.append("\nMax Tablet Server Migrations", this.maxTServerMigrations);
-    buf.append("\nRegular Expressions use IPs", this.isIpBasedRegex);
-    buf.append("\nPools", this.poolNameToRegexPattern);
+    buf.append("\nTablet Out Of Bounds Check Interval", myConf.oobCheckMillis);
+    buf.append("\nMax Tablet Server Migrations", myConf.maxTServerMigrations);
+    buf.append("\nRegular Expressions use IPs", myConf.isIpBasedRegex);
+    buf.append("\nPools", myConf.poolNameToRegexPattern);
     return buf.toString();
   }
 
-  public Map<TableId,String> getTableIdToTableName() {
-    return tableIdToTableName;
-  }
-
   public Map<String,Pattern> getPoolNameToRegexPattern() {
-    return poolNameToRegexPattern;
+    return hrtlbConf.derive().poolNameToRegexPattern;
   }
 
   public int getMaxMigrations() {
-    return maxTServerMigrations;
+    return hrtlbConf.derive().maxTServerMigrations;
   }
 
   public int getMaxOutstandingMigrations() {
-    return maxOutstandingMigrations;
+    return hrtlbConf.derive().maxOutstandingMigrations;
   }
 
   public long getOobCheckMillis() {
-    return oobCheckMillis;
+    return hrtlbConf.derive().oobCheckMillis;
   }
 
   public boolean isIpBasedRegex() {
-    return isIpBasedRegex;
+    return hrtlbConf.derive().isIpBasedRegex;
   }
 
   @Override
   public void init(ServerContext context) {
     super.init(context);
-    parseConfiguration(context.getServerConfFactory());
+
+    this.hrtlbConf =
+        context.getServerConfFactory().getSystemConfiguration().newDeriver(HrtlbConf::new);
+
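+    // cache one Deriver per table so a table's regex properties are re-read only
+    // when that table's configuration actually changes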
+    tablesRegExCache = CacheBuilder.newBuilder().expireAfterAccess(1, TimeUnit.HOURS)
+        .build(new CacheLoader<TableId,Deriver<Map<String,String>>>() {
+          @Override
+          public Deriver<Map<String,String>> load(TableId key) throws Exception {
+            return context.getServerConfFactory().getTableConfiguration(key)
+                .newDeriver(conf -> getRegexes(conf));
+          }
+        });
+
+    LOG.info("{}", this);
   }
 
   @Override
@@ -324,6 +358,9 @@
       }
       tableUnassigned.put(e.getKey(), e.getValue());
     }
+
+    Map<TableId,String> tableIdToTableName = createTableNameMap(getTableOperations().tableIdMap());
+
     // Send a view of the current servers to the tables tablet balancer
     for (Entry<TableId,Map<KeyExtent,TServerInstance>> e : groupedUnassigned.entrySet()) {
       Map<KeyExtent,TServerInstance> newAssignments = new HashMap<>();
@@ -353,18 +390,24 @@
     long minBalanceTime = 20 * 1000;
     // Iterate over the tables and balance each of them
     TableOperations t = getTableOperations();
-    if (t == null)
+    if (t == null) {
       return minBalanceTime;
+    }
 
     Map<String,String> tableIdMap = t.tableIdMap();
+    Map<TableId,String> tableIdToTableName = createTableNameMap(tableIdMap);
+    tableIdToTableName.keySet().forEach(tid -> checkTableConfig(tid));
+
     long now = System.currentTimeMillis();
 
+    HrtlbConf myConf = hrtlbConf.derive();
+
     Map<String,SortedMap<TServerInstance,TabletServerStatus>> currentGrouped =
         splitCurrentByRegex(current);
-    if ((now - this.lastOOBCheck) > this.oobCheckMillis) {
+    if ((now - this.lastOOBCheck) > myConf.oobCheckMillis) {
       try {
         // Check to see if a tablet is assigned outside the bounds of the pool. If so, migrate it.
-        for (String table : t.list()) {
+        for (String table : tableIdMap.keySet()) {
           LOG.debug("Checking for out of bounds tablets for table {}", table);
           String tablePoolName = getPoolNameForTable(table);
           for (Entry<TServerInstance,TabletServerStatus> e : current.entrySet()) {
@@ -406,7 +449,7 @@
                   LOG.info("Tablet {} is currently outside the bounds of the"
                       + " regex, migrating from {} to {}", ke, e.getKey(), nextTS);
                   migrationsOut.add(new TabletMigration(ke, e.getKey(), nextTS));
-                  if (migrationsOut.size() >= this.maxTServerMigrations) {
+                  if (migrationsOut.size() >= myConf.maxTServerMigrations) {
                     break;
                   }
                 } else {
@@ -433,7 +476,7 @@
     }
 
     if (migrations != null && migrations.size() > 0) {
-      if (migrations.size() >= maxOutstandingMigrations) {
+      if (migrations.size() >= myConf.maxOutstandingMigrations) {
         LOG.warn("Not balancing tables due to {} outstanding migrations", migrations.size());
         if (LOG.isTraceEnabled()) {
           LOG.trace("Sample up to 10 outstanding migrations: {}", Iterables.limit(migrations, 10));
@@ -490,7 +533,7 @@
       }
 
       migrationsOut.addAll(newMigrations);
-      if (migrationsOut.size() >= this.maxTServerMigrations) {
+      if (migrationsOut.size() >= myConf.maxTServerMigrations) {
         break;
       }
     }
@@ -530,18 +573,4 @@
     }
     return newInfo;
   }
-
-  @Override
-  public void propertyChanged(String key) {
-    parseConfiguration(context.getServerConfFactory());
-  }
-
-  @Override
-  public void propertiesChanged() {
-    parseConfiguration(context.getServerConfFactory());
-  }
-
-  @Override
-  public void sessionExpired() {}
-
 }
diff --git a/server/base/src/main/java/org/apache/accumulo/server/master/state/Assignment.java b/server/base/src/main/java/org/apache/accumulo/server/master/state/Assignment.java
index 1495068..f5ed6db 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/master/state/Assignment.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/master/state/Assignment.java
@@ -17,8 +17,10 @@
 package org.apache.accumulo.server.master.state;
 
 import org.apache.accumulo.core.dataImpl.KeyExtent;
+import org.apache.accumulo.core.metadata.schema.Ample;
+import org.apache.accumulo.core.util.HostAndPort;
 
-public class Assignment {
+public class Assignment implements Ample.TServer {
   public KeyExtent tablet;
   public TServerInstance server;
 
@@ -26,4 +28,14 @@
     this.tablet = tablet;
     this.server = server;
   }
+
+  @Override
+  public HostAndPort getLocation() {
+    return server.getLocation();
+  }
+
+  @Override
+  public String getSession() {
+    return server.getSession();
+  }
 }
diff --git a/server/base/src/main/java/org/apache/accumulo/server/master/state/DistributedStore.java b/server/base/src/main/java/org/apache/accumulo/server/master/state/DistributedStore.java
deleted file mode 100644
index 275c9d2..0000000
--- a/server/base/src/main/java/org/apache/accumulo/server/master/state/DistributedStore.java
+++ /dev/null
@@ -1,34 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.server.master.state;
-
-import java.util.List;
-
-/**
- * An abstract version of ZooKeeper that we can write tests against.
- */
-public interface DistributedStore {
-
-  List<String> getChildren(String path) throws DistributedStoreException;
-
-  byte[] get(String path) throws DistributedStoreException;
-
-  void put(String path, byte[] bs) throws DistributedStoreException;
-
-  void remove(String path) throws DistributedStoreException;
-
-}
diff --git a/server/base/src/main/java/org/apache/accumulo/server/master/state/MetaDataTableScanner.java b/server/base/src/main/java/org/apache/accumulo/server/master/state/MetaDataTableScanner.java
index f718790..f1c4bdb 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/master/state/MetaDataTableScanner.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/master/state/MetaDataTableScanner.java
@@ -17,6 +17,7 @@
 package org.apache.accumulo.server.master.state;
 
 import java.io.IOException;
+import java.lang.ref.Cleaner.Cleanable;
 import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.Collection;
@@ -24,22 +25,25 @@
 import java.util.Iterator;
 import java.util.List;
 import java.util.Map.Entry;
+import java.util.NoSuchElementException;
 import java.util.SortedMap;
+import java.util.concurrent.atomic.AtomicBoolean;
 
 import org.apache.accumulo.core.client.BatchScanner;
 import org.apache.accumulo.core.client.IteratorSetting;
 import org.apache.accumulo.core.client.ScannerBase;
+import org.apache.accumulo.core.client.TableNotFoundException;
 import org.apache.accumulo.core.clientImpl.ClientContext;
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.Range;
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.dataImpl.KeyExtent;
 import org.apache.accumulo.core.iterators.user.WholeRowIterator;
-import org.apache.accumulo.core.metadata.MetadataTable;
 import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection;
 import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.ChoppedColumnFamily;
 import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.LogColumnFamily;
 import org.apache.accumulo.core.security.Authorizations;
+import org.apache.accumulo.core.util.cleaner.CleanerUtil;
 import org.apache.accumulo.server.master.state.TabletLocationState.BadLocationStateException;
 import org.apache.hadoop.io.Text;
 import org.slf4j.Logger;
@@ -48,28 +52,23 @@
 public class MetaDataTableScanner implements ClosableIterator<TabletLocationState> {
   private static final Logger log = LoggerFactory.getLogger(MetaDataTableScanner.class);
 
-  BatchScanner mdScanner = null;
-  Iterator<Entry<Key,Value>> iter = null;
-
-  public MetaDataTableScanner(ClientContext context, Range range, CurrentState state) {
-    this(context, range, state, MetadataTable.NAME);
-  }
+  private final Cleanable cleanable;
+  private final BatchScanner mdScanner;
+  private final Iterator<Entry<Key,Value>> iter;
+  private final AtomicBoolean closed = new AtomicBoolean(false);
 
   MetaDataTableScanner(ClientContext context, Range range, CurrentState state, String tableName) {
     // scan over metadata table, looking for tablets in the wrong state based on the live servers
     // and online tables
     try {
       mdScanner = context.createBatchScanner(tableName, Authorizations.EMPTY, 8);
-      configureScanner(mdScanner, state);
-      mdScanner.setRanges(Collections.singletonList(range));
-      iter = mdScanner.iterator();
-    } catch (Exception ex) {
-      if (mdScanner != null)
-        mdScanner.close();
-      iter = null;
-      mdScanner = null;
-      throw new RuntimeException(ex);
+    } catch (TableNotFoundException e) {
+      throw new IllegalStateException("Metadata table " + tableName + " should exist", e);
     }
+    cleanable = CleanerUtil.unclosed(this, MetaDataTableScanner.class, closed, log, mdScanner);
+    configureScanner(mdScanner, state);
+    mdScanner.setRanges(Collections.singletonList(range));
+    iter = mdScanner.iterator();
   }
 
   public static void configureScanner(ScannerBase scanner, CurrentState state) {
@@ -95,31 +94,25 @@
     scanner.addScanIterator(tabletChange);
   }
 
-  public MetaDataTableScanner(ClientContext context, Range range) {
-    this(context, range, MetadataTable.NAME);
-  }
-
   public MetaDataTableScanner(ClientContext context, Range range, String tableName) {
     this(context, range, null, tableName);
   }
 
   @Override
   public void close() {
-    if (iter != null) {
+    if (closed.compareAndSet(false, true)) {
+      // deregister the cleanable; its action is invoked now, but it is a
+      // no-op because it checks the value of closed first, which is now true
+      cleanable.clean();
       mdScanner.close();
-      iter = null;
     }
   }
 
   @Override
-  protected void finalize() {
-    close();
-  }
-
-  @Override
   public boolean hasNext() {
-    if (iter == null)
+    if (closed.get()) {
       return false;
+    }
     boolean result = iter.hasNext();
     if (!result) {
       close();
@@ -129,7 +122,15 @@
 
   @Override
   public TabletLocationState next() {
-    return fetch();
+    if (closed.get()) {
+      throw new NoSuchElementException(this.getClass().getSimpleName() + " is closed");
+    }
+    try {
+      Entry<Key,Value> e = iter.next();
+      return createTabletLocationState(e.getKey(), e.getValue());
+    } catch (IOException | BadLocationStateException ex) {
+      throw new RuntimeException(ex);
+    }
   }
 
   public static TabletLocationState createTabletLocationState(Key k, Value v)
@@ -187,17 +188,4 @@
     return new TabletLocationState(extent, future, current, last, suspend, walogs, chopped);
   }
 
-  private TabletLocationState fetch() {
-    try {
-      Entry<Key,Value> e = iter.next();
-      return createTabletLocationState(e.getKey(), e.getValue());
-    } catch (IOException | BadLocationStateException ex) {
-      throw new RuntimeException(ex);
-    }
-  }
-
-  @Override
-  public void remove() {
-    throw new RuntimeException("Unimplemented");
-  }
 }
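
The change above replaces a `finalize()` override with the JDK 9+ `java.lang.ref.Cleaner` mechanism (via Accumulo's `CleanerUtil` helper). A minimal self-contained sketch of the same pattern, under the assumption that the cleaning action only needs the `closed` flag and the underlying resource:

```java
import java.lang.ref.Cleaner;
import java.util.concurrent.atomic.AtomicBoolean;

public class CleanerExample implements AutoCloseable {
  private static final Cleaner CLEANER = Cleaner.create();

  private final AtomicBoolean closed = new AtomicBoolean(false);
  private final Cleaner.Cleanable cleanable;

  public CleanerExample() {
    // Capture the flag in a local so the action does not capture "this";
    // otherwise the object could never become phantom reachable.
    AtomicBoolean closedRef = closed;
    cleanable = CLEANER.register(this, () -> {
      if (!closedRef.get()) {
        System.err.println("resource leaked without close(); releasing it now");
        // release the underlying resource here
      }
    });
  }

  @Override
  public void close() {
    if (closed.compareAndSet(false, true)) {
      // Runs the action (now a no-op, since closed is true) and deregisters it.
      cleanable.clean();
      // release the underlying resource here
    }
  }
}
```
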
diff --git a/server/base/src/main/java/org/apache/accumulo/server/master/state/TServerInstance.java b/server/base/src/main/java/org/apache/accumulo/server/master/state/TServerInstance.java
index 45f13d3..c36748a 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/master/state/TServerInstance.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/master/state/TServerInstance.java
@@ -25,6 +25,7 @@
 
 import org.apache.accumulo.core.data.Mutation;
 import org.apache.accumulo.core.data.Value;
+import org.apache.accumulo.core.metadata.schema.Ample;
 import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection;
 import org.apache.accumulo.core.metadata.schema.TabletMetadata.Location;
 import org.apache.accumulo.core.util.AddressUtil;
@@ -37,7 +38,7 @@
  * Therefore tablet assignments can be considered out-of-date if the tablet server instance
  * information has been changed.
  */
-public class TServerInstance implements Comparable<TServerInstance>, Serializable {
+public class TServerInstance implements Ample.TServer, Comparable<TServerInstance>, Serializable {
 
   private static final long serialVersionUID = 1L;
 
@@ -143,10 +144,12 @@
     return new Value(getLocation().toString().getBytes(UTF_8));
   }
 
+  @Override
   public HostAndPort getLocation() {
     return location;
   }
 
+  @Override
   public String getSession() {
     return session;
   }
diff --git a/server/base/src/main/java/org/apache/accumulo/server/master/state/TabletStateStore.java b/server/base/src/main/java/org/apache/accumulo/server/master/state/TabletStateStore.java
index 4b67163..840b5e3 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/master/state/TabletStateStore.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/master/state/TabletStateStore.java
@@ -102,7 +102,7 @@
 
   protected static TabletStateStore getStoreForTablet(KeyExtent extent, ServerContext context) {
     if (extent.isRootTablet()) {
-      return new ZooTabletStateStore(context);
+      return new ZooTabletStateStore(context.getAmple());
     } else if (extent.isMeta()) {
       return new RootTabletStateStore(context);
     } else {
diff --git a/server/base/src/main/java/org/apache/accumulo/server/master/state/ZooStore.java b/server/base/src/main/java/org/apache/accumulo/server/master/state/ZooStore.java
deleted file mode 100644
index 73bcdd6..0000000
--- a/server/base/src/main/java/org/apache/accumulo/server/master/state/ZooStore.java
+++ /dev/null
@@ -1,95 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.server.master.state;
-
-import static java.nio.charset.StandardCharsets.UTF_8;
-
-import java.util.List;
-
-import org.apache.accumulo.fate.zookeeper.IZooReaderWriter;
-import org.apache.accumulo.fate.zookeeper.ZooCache;
-import org.apache.accumulo.fate.zookeeper.ZooUtil.NodeExistsPolicy;
-import org.apache.accumulo.fate.zookeeper.ZooUtil.NodeMissingPolicy;
-import org.apache.accumulo.server.ServerContext;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-public class ZooStore implements DistributedStore {
-
-  private static final Logger log = LoggerFactory.getLogger(ZooStore.class);
-
-  private ServerContext context;
-  private String basePath;
-  private ZooCache cache;
-
-  public ZooStore(ServerContext context) {
-    this.context = context;
-    cache = new ZooCache(context.getZooReaderWriter(), null);
-    String zkRoot = context.getZooKeeperRoot();
-    if (zkRoot.endsWith("/"))
-      zkRoot = zkRoot.substring(0, zkRoot.length() - 1);
-    this.basePath = zkRoot;
-  }
-
-  @Override
-  public byte[] get(String path) throws DistributedStoreException {
-    try {
-      return cache.get(relative(path));
-    } catch (Exception ex) {
-      throw new DistributedStoreException(ex);
-    }
-  }
-
-  private String relative(String path) {
-    return basePath + path;
-  }
-
-  @Override
-  public List<String> getChildren(String path) throws DistributedStoreException {
-    try {
-      return cache.getChildren(relative(path));
-    } catch (Exception ex) {
-      throw new DistributedStoreException(ex);
-    }
-  }
-
-  @Override
-  public void put(String path, byte[] bs) throws DistributedStoreException {
-    try {
-      path = relative(path);
-      context.getZooReaderWriter().putPersistentData(path, bs, NodeExistsPolicy.OVERWRITE);
-      cache.clear();
-      log.debug("Wrote {} to {}", new String(bs, UTF_8), path);
-    } catch (Exception ex) {
-      throw new DistributedStoreException(ex);
-    }
-  }
-
-  @Override
-  public void remove(String path) throws DistributedStoreException {
-    try {
-      log.debug("Removing {}", path);
-      path = relative(path);
-      IZooReaderWriter zoo = context.getZooReaderWriter();
-      if (zoo.exists(path))
-        zoo.recursiveDelete(path, NodeMissingPolicy.SKIP);
-      cache.clear();
-    } catch (Exception ex) {
-      throw new DistributedStoreException(ex);
-    }
-  }
-}
diff --git a/server/base/src/main/java/org/apache/accumulo/server/master/state/ZooTabletStateStore.java b/server/base/src/main/java/org/apache/accumulo/server/master/state/ZooTabletStateStore.java
index a8b84ab..6b9b188 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/master/state/ZooTabletStateStore.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/master/state/ZooTabletStateStore.java
@@ -16,20 +16,19 @@
  */
 package org.apache.accumulo.server.master.state;
 
-import static java.nio.charset.StandardCharsets.UTF_8;
-
-import java.io.IOException;
 import java.util.ArrayList;
 import java.util.Collection;
 import java.util.Collections;
-import java.util.Iterator;
 import java.util.List;
 import java.util.Map;
 
 import org.apache.accumulo.core.metadata.RootTable;
+import org.apache.accumulo.core.metadata.schema.Ample;
+import org.apache.accumulo.core.metadata.schema.Ample.TabletMutator;
+import org.apache.accumulo.core.metadata.schema.TabletMetadata;
+import org.apache.accumulo.core.metadata.schema.TabletMetadata.Location;
+import org.apache.accumulo.core.metadata.schema.TabletMetadata.LocationType;
 import org.apache.accumulo.core.tabletserver.log.LogEntry;
-import org.apache.accumulo.core.util.HostAndPort;
-import org.apache.accumulo.server.ServerContext;
 import org.apache.hadoop.fs.Path;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -37,18 +36,15 @@
 public class ZooTabletStateStore extends TabletStateStore {
 
   private static final Logger log = LoggerFactory.getLogger(ZooTabletStateStore.class);
-  private final DistributedStore store;
+  private final Ample ample;
 
-  public ZooTabletStateStore(DistributedStore store) {
-    this.store = store;
-  }
-
-  public ZooTabletStateStore(ServerContext context) {
-    store = new ZooStore(context);
+  public ZooTabletStateStore(Ample ample) {
+    this.ample = ample;
   }
 
   @Override
   public ClosableIterator<TabletLocationState> iterator() {
+
     return new ClosableIterator<TabletLocationState>() {
       boolean finished = false;
 
@@ -61,33 +57,31 @@
       public TabletLocationState next() {
         finished = true;
         try {
-          byte[] future = store.get(RootTable.ZROOT_TABLET_FUTURE_LOCATION);
-          byte[] current = store.get(RootTable.ZROOT_TABLET_LOCATION);
-          byte[] last = store.get(RootTable.ZROOT_TABLET_LAST_LOCATION);
+
+          TabletMetadata rootMeta = ample.readTablet(RootTable.EXTENT);
 
           TServerInstance currentSession = null;
           TServerInstance futureSession = null;
           TServerInstance lastSession = null;
 
-          if (future != null)
-            futureSession = parse(future);
+          Location loc = rootMeta.getLocation();
 
-          if (last != null)
-            lastSession = parse(last);
+          if (loc != null && loc.getType() == LocationType.FUTURE)
+            futureSession = new TServerInstance(loc);
 
-          if (current != null) {
-            currentSession = parse(current);
-            futureSession = null;
+          if (rootMeta.getLast() != null)
+            lastSession = new TServerInstance(rootMeta.getLast());
+
+          if (loc != null && loc.getType() == LocationType.CURRENT) {
+            currentSession = new TServerInstance(loc);
           }
+
           List<Collection<String>> logs = new ArrayList<>();
-          for (String entry : store.getChildren(RootTable.ZROOT_TABLET_WALOGS)) {
-            byte[] logInfo = store.get(RootTable.ZROOT_TABLET_WALOGS + "/" + entry);
-            if (logInfo != null) {
-              LogEntry logEntry = LogEntry.fromBytes(logInfo);
-              logs.add(Collections.singleton(logEntry.filename));
-              log.debug("root tablet log {}", logEntry.filename);
-            }
-          }
+          rootMeta.getLogs().forEach(logEntry -> {
+            logs.add(Collections.singleton(logEntry.filename));
+            log.debug("root tablet log {}", logEntry.filename);
+          });
+
           TabletLocationState result = new TabletLocationState(RootTable.EXTENT, futureSession,
               currentSession, lastSession, null, logs, false);
           log.debug("Returning root tablet state: {}", result);
@@ -107,18 +101,6 @@
     };
   }
 
-  protected TServerInstance parse(byte[] current) {
-    String str = new String(current, UTF_8);
-    String[] parts = str.split("[|]", 2);
-    HostAndPort address = HostAndPort.fromString(parts[0]);
-    if (parts.length > 1 && parts[1] != null && parts[1].length() > 0) {
-      return new TServerInstance(address, parts[1]);
-    } else {
-      // a 1.2 location specification: DO NOT WANT
-      return null;
-    }
-  }
-
   @Override
   public void setFutureLocations(Collection<Assignment> assignments)
       throws DistributedStoreException {
@@ -127,14 +109,10 @@
     Assignment assignment = assignments.iterator().next();
     if (assignment.tablet.compareTo(RootTable.EXTENT) != 0)
       throw new IllegalArgumentException("You can only store the root tablet location");
-    String value = assignment.server.getLocation() + "|" + assignment.server.getSession();
-    Iterator<TabletLocationState> currentIter = iterator();
-    TabletLocationState current = currentIter.next();
-    if (current.current != null) {
-      throw new DistributedStoreException(
-          "Trying to set the root tablet location: it is already set to " + current.current);
-    }
-    store.put(RootTable.ZROOT_TABLET_FUTURE_LOCATION, value.getBytes(UTF_8));
+
+    TabletMutator tabletMutator = ample.mutateTablet(assignment.tablet);
+    tabletMutator.putLocation(assignment, LocationType.FUTURE);
+    tabletMutator.mutate();
   }
 
   @Override
@@ -144,21 +122,12 @@
     Assignment assignment = assignments.iterator().next();
     if (assignment.tablet.compareTo(RootTable.EXTENT) != 0)
       throw new IllegalArgumentException("You can only store the root tablet location");
-    String value = assignment.server.getLocation() + "|" + assignment.server.getSession();
-    Iterator<TabletLocationState> currentIter = iterator();
-    TabletLocationState current = currentIter.next();
-    if (current.current != null) {
-      throw new DistributedStoreException(
-          "Trying to set the root tablet location: it is already set to " + current.current);
-    }
-    if (!current.future.equals(assignment.server)) {
-      throw new DistributedStoreException("Root tablet is already assigned to " + current.future);
-    }
-    store.put(RootTable.ZROOT_TABLET_LOCATION, value.getBytes(UTF_8));
-    store.put(RootTable.ZROOT_TABLET_LAST_LOCATION, value.getBytes(UTF_8));
-    // Make the following unnecessary by making the entire update atomic
-    store.remove(RootTable.ZROOT_TABLET_FUTURE_LOCATION);
-    log.debug("Put down root tablet location");
+
+    TabletMutator tabletMutator = ample.mutateTablet(assignment.tablet);
+    tabletMutator.putLocation(assignment, LocationType.CURRENT);
+    tabletMutator.deleteLocation(assignment, LocationType.FUTURE);
+
+    tabletMutator.mutate();
   }
 
   @Override
@@ -169,24 +138,24 @@
     TabletLocationState tls = tablets.iterator().next();
     if (tls.extent.compareTo(RootTable.EXTENT) != 0)
       throw new IllegalArgumentException("You can only store the root tablet location");
+
+    TabletMutator tabletMutator = ample.mutateTablet(tls.extent);
+
+    tabletMutator.deleteLocation(tls.futureOrCurrent(), LocationType.FUTURE);
+    tabletMutator.deleteLocation(tls.futureOrCurrent(), LocationType.CURRENT);
     if (logsForDeadServers != null) {
       List<Path> logs = logsForDeadServers.get(tls.futureOrCurrent());
       if (logs != null) {
         for (Path entry : logs) {
           LogEntry logEntry = new LogEntry(RootTable.EXTENT, System.currentTimeMillis(),
               tls.futureOrCurrent().getLocation().toString(), entry.toString());
-          byte[] value;
-          try {
-            value = logEntry.toBytes();
-          } catch (IOException ex) {
-            throw new DistributedStoreException(ex);
-          }
-          store.put(RootTable.ZROOT_TABLET_WALOGS + "/" + logEntry.getUniqueID(), value);
+          tabletMutator.putWal(logEntry);
         }
       }
     }
-    store.remove(RootTable.ZROOT_TABLET_LOCATION);
-    store.remove(RootTable.ZROOT_TABLET_FUTURE_LOCATION);
+
+    tabletMutator.mutate();
+
     log.debug("unassign root tablet location");
   }
 
diff --git a/server/base/src/main/java/org/apache/accumulo/server/metadata/RootGcCandidates.java b/server/base/src/main/java/org/apache/accumulo/server/metadata/RootGcCandidates.java
new file mode 100644
index 0000000..decde0c
--- /dev/null
+++ b/server/base/src/main/java/org/apache/accumulo/server/metadata/RootGcCandidates.java
@@ -0,0 +1,117 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.accumulo.server.metadata;
+
+import static java.nio.charset.StandardCharsets.UTF_8;
+
+import java.util.Collection;
+import java.util.SortedMap;
+import java.util.SortedSet;
+import java.util.TreeMap;
+import java.util.TreeSet;
+import java.util.stream.Stream;
+
+import org.apache.accumulo.core.metadata.schema.Ample.FileMeta;
+import org.apache.hadoop.fs.Path;
+
+import com.google.common.base.Preconditions;
+import com.google.gson.Gson;
+import com.google.gson.GsonBuilder;
+
+public class RootGcCandidates {
+  private static final Gson GSON = new GsonBuilder().setPrettyPrinting().create();
+
+  // This class is used to serialize and deserialize root tablet GC candidates using Gson. Any
+  // changes to this class must consider persisted data.
+  private static class GSonData {
+    int version = 1;
+
+    // SortedMap<dir path, SortedSet<file name>>
+    SortedMap<String,SortedSet<String>> candidates;
+  }
+
+  /*
+   * The root tablet will only have a single dir on each volume. Therefore root file paths will have
+   * a small set of unique prefixes. The following map is structured to avoid storing the same dir
+   * prefix over and over in JSON and Java.
+   *
+   * SortedMap<dir path, SortedSet<file name>>
+   */
+  private SortedMap<String,SortedSet<String>> candidates;
+
+  public RootGcCandidates() {
+    this.candidates = new TreeMap<>();
+  }
+
+  private RootGcCandidates(SortedMap<String,SortedSet<String>> candidates) {
+    this.candidates = candidates;
+  }
+
+  public void add(Collection<? extends FileMeta> refs) {
+    refs.forEach(ref -> {
+      Path path = ref.path();
+
+      String parent = path.getParent().toString();
+      String name = path.getName();
+
+      candidates.computeIfAbsent(parent, k -> new TreeSet<>()).add(name);
+    });
+  }
+
+  public void remove(Collection<String> refs) {
+    refs.forEach(ref -> {
+      Path path = new Path(ref);
+      String parent = path.getParent().toString();
+      String name = path.getName();
+
+      SortedSet<String> names = candidates.get(parent);
+      if (names != null) {
+        names.remove(name);
+        if (names.isEmpty()) {
+          candidates.remove(parent);
+        }
+      }
+    });
+  }
+
+  public Stream<String> stream() {
+    return candidates.entrySet().stream().flatMap(entry -> {
+      String parent = entry.getKey();
+      SortedSet<String> names = entry.getValue();
+      return names.stream().map(name -> new Path(parent, name).toString());
+    });
+  }
+
+  public String toJson() {
+    GSonData gd = new GSonData();
+    gd.candidates = candidates;
+    return GSON.toJson(gd);
+  }
+
+  public static RootGcCandidates fromJson(String json) {
+    GSonData gd = GSON.fromJson(json, GSonData.class);
+
+    Preconditions.checkArgument(gd.version == 1);
+
+    return new RootGcCandidates(gd.candidates);
+  }
+
+  public static RootGcCandidates fromJson(byte[] json) {
+    return fromJson(new String(json, UTF_8));
+  }
+}
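
To make the persisted shape concrete, here is a hedged round-trip sketch against the class above; the HDFS paths are purely illustrative:

```java
import java.util.List;

import org.apache.accumulo.server.metadata.RootGcCandidates;

public class RootGcCandidatesDemo {
  public static void main(String[] args) {
    // Mirrors GSonData: a version field plus a dir -> file-name map.
    String json = ("{'version': 1, 'candidates': {"
        + "'hdfs://nn/accumulo/tables/+r/root_tablet': ['F0000.rf', 'F0001.rf']}}")
            .replace('\'', '"');

    RootGcCandidates rgcc = RootGcCandidates.fromJson(json);

    // stream() re-joins each dir with its file names into full paths.
    rgcc.stream().forEach(System.out::println);

    // remove() takes full paths and prunes dirs whose file sets become empty.
    rgcc.remove(List.of("hdfs://nn/accumulo/tables/+r/root_tablet/F0000.rf"));
    System.out.println(rgcc.toJson());
  }
}
```
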
diff --git a/server/base/src/main/java/org/apache/accumulo/server/metadata/RootTabletMutatorImpl.java b/server/base/src/main/java/org/apache/accumulo/server/metadata/RootTabletMutatorImpl.java
new file mode 100644
index 0000000..4ba20f4
--- /dev/null
+++ b/server/base/src/main/java/org/apache/accumulo/server/metadata/RootTabletMutatorImpl.java
@@ -0,0 +1,121 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.accumulo.server.metadata;
+
+import static java.nio.charset.StandardCharsets.UTF_8;
+
+import java.util.List;
+
+import org.apache.accumulo.core.data.Mutation;
+import org.apache.accumulo.core.dataImpl.KeyExtent;
+import org.apache.accumulo.core.metadata.RootTable;
+import org.apache.accumulo.core.metadata.schema.Ample;
+import org.apache.accumulo.core.metadata.schema.RootTabletMetadata;
+import org.apache.accumulo.core.security.AuthorizationContainer;
+import org.apache.accumulo.fate.zookeeper.ZooUtil;
+import org.apache.accumulo.server.ServerContext;
+import org.apache.accumulo.server.constraints.MetadataConstraints;
+import org.apache.accumulo.server.constraints.SystemEnvironment;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+public class RootTabletMutatorImpl extends TabletMutatorBase implements Ample.TabletMutator {
+  private ServerContext context;
+
+  private static final Logger log = LoggerFactory.getLogger(RootTabletMutatorImpl.class);
+
+  private static class RootEnv implements SystemEnvironment {
+
+    private ServerContext ctx;
+
+    RootEnv(ServerContext ctx) {
+      this.ctx = ctx;
+    }
+
+    @Override
+    public KeyExtent getExtent() {
+      return RootTable.EXTENT;
+    }
+
+    @Override
+    public String getUser() {
+      throw new UnsupportedOperationException();
+    }
+
+    @Override
+    public AuthorizationContainer getAuthorizationsContainer() {
+      throw new UnsupportedOperationException();
+    }
+
+    @Override
+    public ServerContext getServerContext() {
+      return ctx;
+    }
+  }
+
+  RootTabletMutatorImpl(ServerContext context) {
+    super(context, RootTable.EXTENT);
+    this.context = context;
+  }
+
+  @Override
+  public void mutate() {
+
+    Mutation mutation = getMutation();
+
+    MetadataConstraints metaConstraint = new MetadataConstraints();
+    List<Short> violations = metaConstraint.check(new RootEnv(context), mutation);
+
+    if (violations != null && !violations.isEmpty()) {
+      throw new IllegalStateException(
+          "Mutation for root tablet metadata violated constraints : " + violations);
+    }
+
+    try {
+      String zpath = context.getZooKeeperRoot() + RootTable.ZROOT_TABLET;
+
+      context.getZooCache().clear(zpath);
+
+      // TODO examine implementation of getZooReaderWriter().mutate()
+      context.getZooReaderWriter().mutate(zpath, new byte[0], ZooUtil.PUBLIC, currVal -> {
+
+        String currJson = new String(currVal, UTF_8);
+
+        log.debug("Before mutating : {}, ", currJson);
+
+        RootTabletMetadata rtm = RootTabletMetadata.fromJson(currJson);
+
+        rtm.update(mutation);
+
+        String newJson = rtm.toJson();
+
+        log.debug("After mutating : {} ", newJson);
+
+        return newJson.getBytes(UTF_8);
+      });
+
+      // TODO this is racy...
+      context.getZooCache().clear(zpath);
+
+      if (closeAfterMutate != null)
+        closeAfterMutate.close();
+    } catch (Exception e) {
+      throw new RuntimeException(e);
+    }
+  }
+}
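
The `getZooReaderWriter().mutate(...)` call above is a read-modify-write against a single ZooKeeper node. A minimal sketch of that idea with the raw ZooKeeper client, assuming a versioned `setData` retried on version conflicts (the path and transform are supplied by the caller):

```java
import java.util.function.UnaryOperator;

import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

public class ZkReadModifyWrite {
  public static void mutate(ZooKeeper zk, String path, UnaryOperator<byte[]> transform)
      throws KeeperException, InterruptedException {
    while (true) {
      Stat stat = new Stat();
      byte[] current = zk.getData(path, false, stat);
      byte[] updated = transform.apply(current);
      try {
        // setData with the observed version fails if another writer raced us.
        zk.setData(path, updated, stat.getVersion());
        return;
      } catch (KeeperException.BadVersionException e) {
        // Lost the race; loop to re-read and re-apply the transform.
      }
    }
  }
}
```
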
diff --git a/server/base/src/main/java/org/apache/accumulo/server/metadata/ServerAmpleImpl.java b/server/base/src/main/java/org/apache/accumulo/server/metadata/ServerAmpleImpl.java
new file mode 100644
index 0000000..dc8af4f
--- /dev/null
+++ b/server/base/src/main/java/org/apache/accumulo/server/metadata/ServerAmpleImpl.java
@@ -0,0 +1,202 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.accumulo.server.metadata;
+
+import static java.nio.charset.StandardCharsets.UTF_8;
+import static org.apache.accumulo.core.metadata.RootTable.ZROOT_TABLET_GC_CANDIDATES;
+import static org.apache.accumulo.server.util.MetadataTableUtil.EMPTY_TEXT;
+
+import java.util.Collection;
+import java.util.Iterator;
+import java.util.function.Consumer;
+import java.util.stream.Stream;
+import java.util.stream.StreamSupport;
+
+import org.apache.accumulo.core.client.BatchWriter;
+import org.apache.accumulo.core.client.MutationsRejectedException;
+import org.apache.accumulo.core.client.Scanner;
+import org.apache.accumulo.core.client.TableNotFoundException;
+import org.apache.accumulo.core.data.Key;
+import org.apache.accumulo.core.data.Mutation;
+import org.apache.accumulo.core.data.PartialKey;
+import org.apache.accumulo.core.data.Range;
+import org.apache.accumulo.core.data.TableId;
+import org.apache.accumulo.core.dataImpl.KeyExtent;
+import org.apache.accumulo.core.metadata.MetadataTable;
+import org.apache.accumulo.core.metadata.RootTable;
+import org.apache.accumulo.core.metadata.schema.Ample;
+import org.apache.accumulo.core.metadata.schema.AmpleImpl;
+import org.apache.accumulo.core.metadata.schema.MetadataSchema.DeletesSection;
+import org.apache.accumulo.core.security.Authorizations;
+import org.apache.accumulo.fate.zookeeper.ZooUtil;
+import org.apache.accumulo.server.ServerContext;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.io.Text;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import com.google.common.base.Preconditions;
+
+public class ServerAmpleImpl extends AmpleImpl implements Ample {
+
+  private static Logger log = LoggerFactory.getLogger(ServerAmpleImpl.class);
+
+  private ServerContext context;
+
+  public ServerAmpleImpl(ServerContext ctx) {
+    super(ctx);
+    this.context = ctx;
+  }
+
+  @Override
+  public Ample.TabletMutator mutateTablet(KeyExtent extent) {
+    TabletsMutator tmi = mutateTablets();
+    Ample.TabletMutator tabletMutator = tmi.mutateTablet(extent);
+    ((TabletMutatorBase) tabletMutator).setCloseAfterMutate(tmi);
+    return tabletMutator;
+  }
+
+  @Override
+  public TabletsMutator mutateTablets() {
+    return new TabletsMutatorImpl(context);
+  }
+
+  private void mutateRootGcCandidates(Consumer<RootGcCandidates> mutator) {
+    String zpath = context.getZooKeeperRoot() + ZROOT_TABLET_GC_CANDIDATES;
+    try {
+      context.getZooReaderWriter().mutate(zpath, new byte[0], ZooUtil.PUBLIC, currVal -> {
+        String currJson = new String(currVal, UTF_8);
+
+        RootGcCandidates rgcc = RootGcCandidates.fromJson(currJson);
+
+        log.debug("Root GC candidates before change : {}", currJson);
+
+        mutator.accept(rgcc);
+
+        String newJson = rgcc.toJson();
+
+        log.debug("Root GC candidates after change  : {}", newJson);
+
+        if (newJson.length() > 262_144) {
+          log.warn(
+              "Root tablet deletion candidates stored in ZK at {} are getting large ({} bytes), is"
+                  + " Accumulo GC process running?  Large nodes may cause problems for Zookeeper!",
+              zpath, newJson.length());
+        }
+
+        return newJson.getBytes(UTF_8);
+      });
+    } catch (Exception e) {
+      throw new RuntimeException(e);
+    }
+  }
+
+  @Override
+  public void putGcCandidates(TableId tableId, Collection<? extends Ample.FileMeta> candidates) {
+
+    if (RootTable.ID.equals(tableId)) {
+      mutateRootGcCandidates(rgcc -> rgcc.add(candidates));
+      return;
+    }
+
+    try (BatchWriter writer = createWriter(tableId)) {
+      for (Ample.FileMeta file : candidates) {
+        writer.addMutation(createDeleteMutation(context, tableId, file.path().toString()));
+      }
+    } catch (MutationsRejectedException e) {
+      throw new RuntimeException(e);
+    }
+  }
+
+  @Override
+  public void deleteGcCandidates(DataLevel level, Collection<String> paths) {
+
+    if (level == DataLevel.ROOT) {
+      mutateRootGcCandidates(rgcc -> rgcc.remove(paths));
+      return;
+    }
+
+    try (BatchWriter writer = context.createBatchWriter(level.metaTable())) {
+      for (String path : paths) {
+        Mutation m = new Mutation(DeletesSection.encodeRow(path));
+        m.putDelete(EMPTY_TEXT, EMPTY_TEXT);
+        writer.addMutation(m);
+      }
+    } catch (MutationsRejectedException | TableNotFoundException e) {
+      throw new RuntimeException(e);
+    }
+  }
+
+  public Iterator<String> getGcCandidates(DataLevel level, String continuePoint) {
+    if (level == DataLevel.ROOT) {
+      byte[] json = context.getZooCache()
+          .get(context.getZooKeeperRoot() + RootTable.ZROOT_TABLET_GC_CANDIDATES);
+      Stream<String> candidates = RootGcCandidates.fromJson(json).stream().sorted();
+
+      if (continuePoint != null && !continuePoint.isEmpty()) {
+        candidates = candidates.dropWhile(candidate -> candidate.compareTo(continuePoint) <= 0);
+      }
+
+      return candidates.iterator();
+    } else if (level == DataLevel.METADATA || level == DataLevel.USER) {
+      Range range = DeletesSection.getRange();
+      if (continuePoint != null && !continuePoint.isEmpty()) {
+        String continueRow = DeletesSection.encodeRow(continuePoint);
+        range = new Range(new Key(continueRow).followingKey(PartialKey.ROW), true,
+            range.getEndKey(), range.isEndKeyInclusive());
+      }
+
+      Scanner scanner;
+      try {
+        scanner = context.createScanner(level.metaTable(), Authorizations.EMPTY);
+      } catch (TableNotFoundException e) {
+        throw new RuntimeException(e);
+      }
+      scanner.setRange(range);
+      return StreamSupport.stream(scanner.spliterator(), false)
+          .filter(entry -> entry.getValue().equals(DeletesSection.SkewedKeyValue.NAME))
+          .map(entry -> DeletesSection.decodeRow(entry.getKey().getRow().toString())).iterator();
+    } else {
+      throw new IllegalArgumentException();
+    }
+  }
+
+  private BatchWriter createWriter(TableId tableId) {
+
+    Preconditions.checkArgument(!RootTable.ID.equals(tableId));
+
+    try {
+      if (MetadataTable.ID.equals(tableId)) {
+        return context.createBatchWriter(RootTable.NAME);
+      } else {
+        return context.createBatchWriter(MetadataTable.NAME);
+      }
+    } catch (TableNotFoundException e) {
+      throw new RuntimeException(e);
+    }
+  }
+
+  public static Mutation createDeleteMutation(ServerContext context, TableId tableId,
+      String pathToRemove) {
+    Path path = context.getVolumeManager().getFullPath(tableId, pathToRemove);
+    Mutation delFlag = new Mutation(new Text(DeletesSection.encodeRow(path.toString())));
+    delFlag.put(EMPTY_TEXT, EMPTY_TEXT, DeletesSection.SkewedKeyValue.NAME);
+    return delFlag;
+  }
+
+}
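
A hedged sketch of how a caller might page through candidates with the `continuePoint` parameter above; `ample` is an assumed `ServerAmpleImpl` instance and the loop body is illustrative:

```java
// First pass starts from the beginning (null/empty continue point).
Iterator<String> it = ample.getGcCandidates(Ample.DataLevel.ROOT, null);
String lastProcessed = null;
while (it.hasNext()) {
  lastProcessed = it.next(); // delete the file, then record progress
}
// A later pass resumes strictly after lastProcessed, since candidates
// with compareTo(continuePoint) <= 0 are dropped.
Iterator<String> resumed = ample.getGcCandidates(Ample.DataLevel.ROOT, lastProcessed);
```
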
diff --git a/server/base/src/main/java/org/apache/accumulo/server/metadata/TabletMutatorBase.java b/server/base/src/main/java/org/apache/accumulo/server/metadata/TabletMutatorBase.java
new file mode 100644
index 0000000..1eda116
--- /dev/null
+++ b/server/base/src/main/java/org/apache/accumulo/server/metadata/TabletMutatorBase.java
@@ -0,0 +1,204 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.accumulo.server.metadata;
+
+import org.apache.accumulo.core.data.Mutation;
+import org.apache.accumulo.core.data.Value;
+import org.apache.accumulo.core.dataImpl.KeyExtent;
+import org.apache.accumulo.core.metadata.schema.Ample;
+import org.apache.accumulo.core.metadata.schema.DataFileValue;
+import org.apache.accumulo.core.metadata.schema.MetadataSchema;
+import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection;
+import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.DataFileColumnFamily;
+import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.ScanFileColumnFamily;
+import org.apache.accumulo.core.metadata.schema.MetadataTime;
+import org.apache.accumulo.core.metadata.schema.TabletMetadata.LocationType;
+import org.apache.accumulo.core.tabletserver.log.LogEntry;
+import org.apache.accumulo.fate.FateTxId;
+import org.apache.accumulo.fate.zookeeper.ZooLock;
+import org.apache.accumulo.server.ServerContext;
+import org.apache.hadoop.io.Text;
+
+import com.google.common.base.Preconditions;
+
+public abstract class TabletMutatorBase implements Ample.TabletMutator {
+
+  private final ServerContext context;
+  private final KeyExtent extent;
+  private final Mutation mutation;
+  protected AutoCloseable closeAfterMutate;
+  private boolean updatesEnabled = true;
+
+  protected TabletMutatorBase(ServerContext ctx, KeyExtent extent) {
+    this.extent = extent;
+    this.context = ctx;
+    mutation = new Mutation(extent.getMetadataEntry());
+  }
+
+  @Override
+  public Ample.TabletMutator putPrevEndRow(Text per) {
+    Preconditions.checkState(updatesEnabled, "Cannot make updates after calling mutate.");
+    TabletsSection.TabletColumnFamily.PREV_ROW_COLUMN.put(mutation,
+        KeyExtent.encodePrevEndRow(per));
+    return this;
+  }
+
+  @Override
+  public Ample.TabletMutator putDir(String dir) {
+    Preconditions.checkState(updatesEnabled, "Cannot make updates after calling mutate.");
+    TabletsSection.ServerColumnFamily.DIRECTORY_COLUMN.put(mutation, new Value(dir));
+    return this;
+  }
+
+  @Override
+  public Ample.TabletMutator putFile(Ample.FileMeta path, DataFileValue dfv) {
+    Preconditions.checkState(updatesEnabled, "Cannot make updates after calling mutate.");
+    mutation.put(DataFileColumnFamily.NAME, path.meta(), new Value(dfv.encode()));
+    return this;
+  }
+
+  @Override
+  public Ample.TabletMutator deleteFile(Ample.FileMeta path) {
+    Preconditions.checkState(updatesEnabled, "Cannot make updates after calling mutate.");
+    mutation.putDelete(DataFileColumnFamily.NAME, path.meta());
+    return this;
+  }
+
+  @Override
+  public Ample.TabletMutator putScan(Ample.FileMeta path) {
+    Preconditions.checkState(updatesEnabled, "Cannot make updates after calling mutate.");
+    mutation.put(ScanFileColumnFamily.NAME, path.meta(), new Value(new byte[0]));
+    return this;
+  }
+
+  @Override
+  public Ample.TabletMutator deleteScan(Ample.FileMeta path) {
+    Preconditions.checkState(updatesEnabled, "Cannot make updates after calling mutate.");
+    mutation.putDelete(ScanFileColumnFamily.NAME, path.meta());
+    return this;
+  }
+
+  @Override
+  public Ample.TabletMutator putCompactionId(long compactionId) {
+    Preconditions.checkState(updatesEnabled, "Cannot make updates after calling mutate.");
+    TabletsSection.ServerColumnFamily.COMPACT_COLUMN.put(mutation,
+        new Value(Long.toString(compactionId)));
+    return this;
+  }
+
+  @Override
+  public Ample.TabletMutator putFlushId(long flushId) {
+    Preconditions.checkState(updatesEnabled, "Cannot make updates after calling mutate.");
+    TabletsSection.ServerColumnFamily.FLUSH_COLUMN.put(mutation, new Value(Long.toString(flushId)));
+    return this;
+  }
+
+  @Override
+  public Ample.TabletMutator putTime(MetadataTime time) {
+    Preconditions.checkState(updatesEnabled, "Cannot make updates after calling mutate.");
+    TabletsSection.ServerColumnFamily.TIME_COLUMN.put(mutation, new Value(time.encode()));
+    return this;
+  }
+
+  private String getLocationFamily(LocationType type) {
+    switch (type) {
+      case CURRENT:
+        return TabletsSection.CurrentLocationColumnFamily.STR_NAME;
+      case FUTURE:
+        return TabletsSection.FutureLocationColumnFamily.STR_NAME;
+      case LAST:
+        return TabletsSection.LastLocationColumnFamily.STR_NAME;
+      default:
+        throw new IllegalArgumentException();
+    }
+  }
+
+  @Override
+  public Ample.TabletMutator putLocation(Ample.TServer tsi, LocationType type) {
+    Preconditions.checkState(updatesEnabled, "Cannot make updates after calling mutate.");
+    mutation.put(getLocationFamily(type), tsi.getSession(), tsi.getLocation().toString());
+    return this;
+  }
+
+  @Override
+  public Ample.TabletMutator deleteLocation(Ample.TServer tsi, LocationType type) {
+    Preconditions.checkState(updatesEnabled, "Cannot make updates after calling mutate.");
+    mutation.putDelete(getLocationFamily(type), tsi.getSession());
+    return this;
+  }
+
+  @Override
+  public Ample.TabletMutator putZooLock(ZooLock zooLock) {
+    Preconditions.checkState(updatesEnabled, "Cannot make updates after calling mutate.");
+    TabletsSection.ServerColumnFamily.LOCK_COLUMN.put(mutation,
+        new Value(zooLock.getLockID().serialize(context.getZooKeeperRoot() + "/")));
+    return this;
+  }
+
+  @Override
+  public Ample.TabletMutator putWal(LogEntry logEntry) {
+    Preconditions.checkState(updatesEnabled, "Cannot make updates after calling mutate.");
+    mutation.put(logEntry.getColumnFamily(), logEntry.getColumnQualifier(), logEntry.getValue());
+    return this;
+  }
+
+  @Override
+  public Ample.TabletMutator deleteWal(LogEntry logEntry) {
+    Preconditions.checkState(updatesEnabled, "Cannot make updates after calling mutate.");
+    mutation.putDelete(logEntry.getColumnFamily(), logEntry.getColumnQualifier());
+    return this;
+  }
+
+  @Override
+  public Ample.TabletMutator deleteWal(String wal) {
+    Preconditions.checkState(updatesEnabled, "Cannot make updates after calling mutate.");
+    mutation.putDelete(MetadataSchema.TabletsSection.LogColumnFamily.STR_NAME, wal);
+    return this;
+  }
+
+  @Override
+  public Ample.TabletMutator putBulkFile(Ample.FileMeta bulkref, long tid) {
+    Preconditions.checkState(updatesEnabled, "Cannot make updates after calling mutate.");
+    mutation.put(TabletsSection.BulkFileColumnFamily.NAME, bulkref.meta(),
+        new Value(FateTxId.formatTid(tid)));
+    return this;
+  }
+
+  @Override
+  public Ample.TabletMutator deleteBulkFile(Ample.FileMeta bulkref) {
+    Preconditions.checkState(updatesEnabled, "Cannot make updates after calling mutate.");
+    mutation.putDelete(TabletsSection.BulkFileColumnFamily.NAME, bulkref.meta());
+    return this;
+  }
+
+  @Override
+  public Ample.TabletMutator putChopped() {
+    Preconditions.checkState(updatesEnabled, "Cannot make updates after calling mutate.");
+    TabletsSection.ChoppedColumnFamily.CHOPPED_COLUMN.put(mutation, new Value("chopped"));
+    return this;
+  }
+
+  protected Mutation getMutation() {
+    updatesEnabled = false;
+    return mutation;
+  }
+
+  public void setCloseAfterMutate(AutoCloseable closeable) {
+    this.closeAfterMutate = closeable;
+  }
+}
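
A short usage sketch of the fluent mutator above; `ample` and `extent` are assumed to come from the server context, and the directory name is illustrative:

```java
Ample.TabletMutator mutator = ample.mutateTablet(extent);
mutator.putDir("t-0000001").putFlushId(7);
mutator.mutate();
// Any further call, e.g. mutator.putFlushId(8), now fails the
// Preconditions.checkState, because getMutation() disabled updates.
```
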
diff --git a/server/base/src/main/java/org/apache/accumulo/server/metadata/TabletMutatorImpl.java b/server/base/src/main/java/org/apache/accumulo/server/metadata/TabletMutatorImpl.java
new file mode 100644
index 0000000..4a94984
--- /dev/null
+++ b/server/base/src/main/java/org/apache/accumulo/server/metadata/TabletMutatorImpl.java
@@ -0,0 +1,46 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.accumulo.server.metadata;
+
+import org.apache.accumulo.core.client.BatchWriter;
+import org.apache.accumulo.core.dataImpl.KeyExtent;
+import org.apache.accumulo.core.metadata.schema.Ample;
+import org.apache.accumulo.server.ServerContext;
+
+class TabletMutatorImpl extends TabletMutatorBase implements Ample.TabletMutator {
+
+  private BatchWriter writer;
+
+  TabletMutatorImpl(ServerContext context, KeyExtent extent, BatchWriter batchWriter) {
+    super(context, extent);
+    this.writer = batchWriter;
+  }
+
+  @Override
+  public void mutate() {
+    try {
+      writer.addMutation(getMutation());
+
+      if (closeAfterMutate != null)
+        closeAfterMutate.close();
+    } catch (Exception e) {
+      throw new RuntimeException(e);
+    }
+  }
+
+}
diff --git a/server/base/src/main/java/org/apache/accumulo/server/metadata/TabletsMutatorImpl.java b/server/base/src/main/java/org/apache/accumulo/server/metadata/TabletsMutatorImpl.java
new file mode 100644
index 0000000..4ac5ba2
--- /dev/null
+++ b/server/base/src/main/java/org/apache/accumulo/server/metadata/TabletsMutatorImpl.java
@@ -0,0 +1,91 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.accumulo.server.metadata;
+
+import org.apache.accumulo.core.client.BatchWriter;
+import org.apache.accumulo.core.client.MutationsRejectedException;
+import org.apache.accumulo.core.client.TableNotFoundException;
+import org.apache.accumulo.core.data.TableId;
+import org.apache.accumulo.core.dataImpl.KeyExtent;
+import org.apache.accumulo.core.metadata.MetadataTable;
+import org.apache.accumulo.core.metadata.RootTable;
+import org.apache.accumulo.core.metadata.schema.Ample;
+import org.apache.accumulo.core.metadata.schema.Ample.TabletsMutator;
+import org.apache.accumulo.server.ServerContext;
+
+import com.google.common.base.Preconditions;
+
+public class TabletsMutatorImpl implements TabletsMutator {
+
+  private ServerContext context;
+
+  private BatchWriter rootWriter;
+  private BatchWriter metaWriter;
+
+  public TabletsMutatorImpl(ServerContext context) {
+    this.context = context;
+  }
+
+  private BatchWriter getWriter(TableId tableId) {
+
+    Preconditions.checkArgument(!RootTable.ID.equals(tableId));
+
+    try {
+      if (MetadataTable.ID.equals(tableId)) {
+        if (rootWriter == null) {
+          rootWriter = context.createBatchWriter(RootTable.NAME);
+        }
+
+        return rootWriter;
+      } else {
+        if (metaWriter == null) {
+          metaWriter = context.createBatchWriter(MetadataTable.NAME);
+        }
+
+        return metaWriter;
+      }
+    } catch (TableNotFoundException e) {
+      throw new RuntimeException(e);
+    }
+  }
+
+  @Override
+  public Ample.TabletMutator mutateTablet(KeyExtent extent) {
+    if (extent.isRootTablet()) {
+      return new RootTabletMutatorImpl(context);
+    } else {
+      return new TabletMutatorImpl(context, extent, getWriter(extent.getTableId()));
+    }
+  }
+
+  @Override
+  public void close() {
+    try {
+      if (rootWriter != null) {
+        rootWriter.close();
+      }
+
+      if (metaWriter != null) {
+        metaWriter.close();
+      }
+    } catch (MutationsRejectedException e) {
+      throw new RuntimeException(e);
+    }
+
+  }
+}
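
When many tablets are updated together, a single `TabletsMutator` shares the lazily created batch writers; a hedged usage sketch (`ample`, `extents`, and `flushId` are assumed inputs):

```java
try (Ample.TabletsMutator tms = ample.mutateTablets()) {
  for (KeyExtent extent : extents) {
    tms.mutateTablet(extent).putFlushId(flushId).mutate();
  }
} // close() flushes and closes whichever writers were actually created
```
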
diff --git a/server/base/src/main/java/org/apache/accumulo/server/metrics/Metrics.java b/server/base/src/main/java/org/apache/accumulo/server/metrics/Metrics.java
index 72c94ae..c1ed9ed 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/metrics/Metrics.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/metrics/Metrics.java
@@ -16,7 +16,6 @@
  */
 package org.apache.accumulo.server.metrics;
 
-import org.apache.commons.lang3.StringUtils;
 import org.apache.hadoop.metrics2.MetricsCollector;
 import org.apache.hadoop.metrics2.MetricsRecordBuilder;
 import org.apache.hadoop.metrics2.MetricsSource;
@@ -30,6 +29,20 @@
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
+/**
+ * Accumulo will search for a file named hadoop-metrics2-accumulo.properties on the Accumulo
+ * classpath to configure the Hadoop Metrics2 system. The Hadoop metrics system publishes to JMX
+ * and can be configured, via a configuration file, to publish to other metric collection systems
+ * (files, etc.).
+ * <p>
+ * A note on naming: JMX and the Hadoop metrics system name things slightly differently. Hadoop
+ * metrics records will start with CONTEXT.RECORD, for example, accgc.AccGcCycleMetrics. The context
+ * parameter value is also used by the configuration file for sink configuration.
+ * <p>
+ * In JMX, the hierarchy is: Hadoop..Accumulo..[jmxName]..[processName]..attributes..[name]
+ * <p>
+ * For JVM metrics, the hierarchy is: Hadoop..Accumulo..JvmMetrics..attributes..[name]
+ */
 public abstract class Metrics implements MetricsSource {
 
   private static String processName = "Unknown";
@@ -37,7 +50,7 @@
   public static MetricsSystem initSystem(String serviceName) {
     processName = serviceName;
     String serviceInstance = System.getProperty("accumulo.metrics.service.instance", "");
-    if (StringUtils.isNotBlank(serviceInstance)) {
+    if (!serviceInstance.isBlank()) {
       processName += serviceInstance;
     }
 
diff --git a/server/base/src/main/java/org/apache/accumulo/server/problems/ProblemReports.java b/server/base/src/main/java/org/apache/accumulo/server/problems/ProblemReports.java
index b903cf2..3d8d483 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/problems/ProblemReports.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/problems/ProblemReports.java
@@ -85,28 +85,23 @@
       problemReports.put(pr, System.currentTimeMillis());
     }
 
-    Runnable r = new Runnable() {
+    Runnable r = () -> {
 
-      @Override
-      public void run() {
+      log.debug("Filing problem report {} {} {}", pr.getTableId(), pr.getProblemType(),
+          pr.getResource());
 
-        log.debug("Filing problem report {} {} {}", pr.getTableId(), pr.getProblemType(),
-            pr.getResource());
-
-        try {
-          if (isMeta(pr.getTableId())) {
-            // file report in zookeeper
-            pr.saveToZooKeeper(context);
-          } else {
-            // file report in metadata table
-            pr.saveToMetadataTable(context);
-          }
-        } catch (Exception e) {
-          log.error("Failed to file problem report " + pr.getTableId() + " " + pr.getProblemType()
-              + " " + pr.getResource(), e);
+      try {
+        if (isMeta(pr.getTableId())) {
+          // file report in zookeeper
+          pr.saveToZooKeeper(context);
+        } else {
+          // file report in metadata table
+          pr.saveToMetadataTable(context);
         }
+      } catch (Exception e) {
+        log.error("Failed to file problem report " + pr.getTableId() + " " + pr.getProblemType()
+            + " " + pr.getResource(), e);
       }
-
     };
 
     try {
@@ -128,22 +123,18 @@
   public void deleteProblemReport(TableId table, ProblemType pType, String resource) {
     final ProblemReport pr = new ProblemReport(table, pType, resource, null);
 
-    Runnable r = new Runnable() {
-
-      @Override
-      public void run() {
-        try {
-          if (isMeta(pr.getTableId())) {
-            // file report in zookeeper
-            pr.removeFromZooKeeper(context);
-          } else {
-            // file report in metadata table
-            pr.removeFromMetadataTable(context);
-          }
-        } catch (Exception e) {
-          log.error("Failed to delete problem report {} {} {}", pr.getTableId(),
-              pr.getProblemType(), pr.getResource(), e);
+    Runnable r = () -> {
+      try {
+        if (isMeta(pr.getTableId())) {
+          // file report in zookeeper
+          pr.removeFromZooKeeper(context);
+        } else {
+          // file report in metadata table
+          pr.removeFromMetadataTable(context);
         }
+      } catch (Exception e) {
+        log.error("Failed to delete problem report {} {} {}", pr.getTableId(), pr.getProblemType(),
+            pr.getResource(), e);
       }
     };
 
@@ -180,8 +171,9 @@
       delMut.putDelete(entry.getKey().getColumnFamily(), entry.getKey().getColumnQualifier());
     }
 
-    if (hasProblems)
+    if (hasProblems) {
       MetadataTableUtil.getMetadataTable(context).update(delMut);
+    }
   }
 
   private static boolean isMeta(TableId tableId) {
@@ -191,7 +183,7 @@
   public Iterator<ProblemReport> iterator(final TableId table) {
     try {
 
-      return new Iterator<ProblemReport>() {
+      return new Iterator<>() {
 
         IZooReaderWriter zoo = context.getZooReaderWriter();
         private int iter1Count = 0;
@@ -296,7 +288,7 @@
   }
 
   public static void main(String[] args) {
-    ServerContext context = new ServerContext(new SiteConfiguration());
+    var context = new ServerContext(SiteConfiguration.auto());
     getInstance(context).printProblems();
   }
 
diff --git a/server/base/src/main/java/org/apache/accumulo/server/replication/ReplicaSystemFactory.java b/server/base/src/main/java/org/apache/accumulo/server/replication/ReplicaSystemFactory.java
index 493e8ee..7fe0732 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/replication/ReplicaSystemFactory.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/replication/ReplicaSystemFactory.java
@@ -41,7 +41,7 @@
       Class<?> clz = Class.forName(entry.getKey());
 
       if (ReplicaSystem.class.isAssignableFrom(clz)) {
-        Object o = clz.newInstance();
+        Object o = clz.getDeclaredConstructor().newInstance();
         ReplicaSystem rs = (ReplicaSystem) o;
         rs.configure(context, entry.getValue());
         return rs;
@@ -49,7 +49,7 @@
 
       throw new IllegalArgumentException(
           "Class is not assignable to ReplicaSystem: " + entry.getKey());
-    } catch (ClassNotFoundException | InstantiationException | IllegalAccessException e) {
+    } catch (ReflectiveOperationException e) {
       log.error("Error creating ReplicaSystem object", e);
       throw new IllegalArgumentException(e);
     }
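
The ReplicaSystemFactory hunk swaps deprecated reflection for the supported form and narrows the catch. A hedged, self-contained sketch of the same pattern (not the Accumulo class itself):

```java
// Class.newInstance() has been deprecated since Java 9 because it rethrows
// any exception from the constructor unchecked; getDeclaredConstructor()
// .newInstance() wraps constructor failures instead. All the reflective
// failure modes share the ReflectiveOperationException supertype, which is
// why the three-way multi-catch collapses to a single clause.
public class ReflectiveCreateSketch {
  public static Object create(String className) {
    try {
      Class<?> clz = Class.forName(className);
      return clz.getDeclaredConstructor().newInstance();
    } catch (ReflectiveOperationException e) {
      // covers ClassNotFoundException, NoSuchMethodException,
      // InstantiationException, IllegalAccessException and
      // InvocationTargetException
      throw new IllegalArgumentException(e);
    }
  }

  public static void main(String[] args) {
    System.out.println(create("java.util.ArrayList")); // prints []
  }
}
```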
diff --git a/server/base/src/main/java/org/apache/accumulo/server/security/UserImpersonation.java b/server/base/src/main/java/org/apache/accumulo/server/security/UserImpersonation.java
index 0ecdc0d..afca9c7 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/security/UserImpersonation.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/security/UserImpersonation.java
@@ -27,7 +27,6 @@
 
 import org.apache.accumulo.core.conf.AccumuloConfiguration;
 import org.apache.accumulo.core.conf.Property;
-import org.apache.commons.lang3.StringUtils;
 
 /**
  * When SASL is enabled, this parses properties from the site configuration to build up a set of all
@@ -181,10 +180,10 @@
 
     final String hostConfigString = conf.get(Property.INSTANCE_RPC_SASL_ALLOWED_HOST_IMPERSONATION);
     // Pull out the config values, defaulting to at least one value
-    final String[] userConfigs = userConfigString.trim().isEmpty() ? new String[] {""}
-        : StringUtils.split(userConfigString, ';');
-    final String[] hostConfigs = hostConfigString.trim().isEmpty() ? new String[] {""}
-        : StringUtils.split(hostConfigString, ';');
+    final String[] userConfigs =
+        userConfigString.trim().isEmpty() ? new String[] {""} : userConfigString.split(";");
+    final String[] hostConfigs =
+        hostConfigString.trim().isEmpty() ? new String[] {""} : hostConfigString.split(";");
 
     if (userConfigs.length != hostConfigs.length) {
       String msg = String.format("Should have equal number of user and host"
@@ -197,7 +196,7 @@
       final String userConfig = userConfigs[i];
       final String hostConfig = hostConfigs[i];
 
-      final String[] splitUserConfig = StringUtils.split(userConfig, ':');
+      final String[] splitUserConfig = userConfig.split(":");
       if (splitUserConfig.length != 2) {
         throw new IllegalArgumentException(
             "Expect a single colon-separated pair, but found '" + userConfig + "'");
@@ -212,7 +211,7 @@
       if (ALL.equals(allowedImpersonationsForRemoteUser)) {
         usersWithHosts.setAcceptAllUsers(true);
       } else {
-        String[] allowedUsers = StringUtils.split(allowedImpersonationsForRemoteUser, ",");
+        String[] allowedUsers = allowedImpersonationsForRemoteUser.split(",");
         Set<String> usersSet = new HashSet<>();
         usersSet.addAll(Arrays.asList(allowedUsers));
         usersWithHosts.setUsers(usersSet);
@@ -221,7 +220,7 @@
       if (ALL.equals(hostConfig)) {
         usersWithHosts.setAcceptAllHosts(true);
       } else {
-        String[] allowedHosts = StringUtils.split(hostConfig, ",");
+        String[] allowedHosts = hostConfig.split(",");
         Set<String> hostsSet = new HashSet<>();
         hostsSet.addAll(Arrays.asList(allowedHosts));
         usersWithHosts.setHosts(hostsSet);
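
One caveat worth noting on the UserImpersonation change: `StringUtils.split` and `String.split` are not drop-in equivalents. The commons-lang version collapses adjacent separators and never yields empty tokens, while `String.split` keeps interior empty tokens (only trailing empties are trimmed) and treats its argument as a regex. That is presumably harmless for the well-formed config strings parsed here, since `;`, `:`, and `,` are not regex metacharacters, but a demo (assuming commons-lang3 on the classpath) makes the difference visible:

```java
import java.util.Arrays;

import org.apache.commons.lang3.StringUtils;

// Shows the token-level difference between the two split() flavors.
public class SplitSemanticsSketch {
  public static void main(String[] args) {
    System.out.println(Arrays.toString(StringUtils.split("a;;b;", ';')));
    // -> [a, b]          (empty tokens dropped)
    System.out.println(Arrays.toString("a;;b;".split(";")));
    // -> [a, , b]        (interior empty token kept, trailing one trimmed)
  }
}
```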
diff --git a/server/base/src/main/java/org/apache/accumulo/server/tables/TableManager.java b/server/base/src/main/java/org/apache/accumulo/server/tables/TableManager.java
index 3796aa3..1f96f9c 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/tables/TableManager.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/tables/TableManager.java
@@ -41,7 +41,6 @@
 import org.apache.accumulo.fate.zookeeper.ZooUtil.NodeMissingPolicy;
 import org.apache.accumulo.server.ServerContext;
 import org.apache.accumulo.server.util.TablePropUtil;
-import org.apache.commons.lang3.StringUtils;
 import org.apache.zookeeper.KeeperException;
 import org.apache.zookeeper.WatchedEvent;
 import org.apache.zookeeper.Watcher;
@@ -128,7 +127,7 @@
       this.oldState = oldState;
       this.newState = newState;
 
-      if (StringUtils.isNotEmpty(message))
+      if (message != null && !message.isEmpty())
         this.message = message;
       else {
         this.message = "Error transitioning from " + oldState + " state to " + newState + " state";
diff --git a/server/base/src/main/java/org/apache/accumulo/server/tablets/TabletTime.java b/server/base/src/main/java/org/apache/accumulo/server/tablets/TabletTime.java
index dc5268b..23cbce7 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/tablets/TabletTime.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/tablets/TabletTime.java
@@ -21,29 +21,17 @@
 
 import org.apache.accumulo.core.client.admin.TimeType;
 import org.apache.accumulo.core.data.Mutation;
+import org.apache.accumulo.core.metadata.schema.MetadataTime;
 import org.apache.accumulo.server.data.ServerMutation;
 import org.apache.accumulo.server.util.time.RelativeTime;
 
 public abstract class TabletTime {
-  public static final char LOGICAL_TIME_ID = 'L';
-  public static final char MILLIS_TIME_ID = 'M';
-
-  public static char getTimeID(TimeType timeType) {
-    switch (timeType) {
-      case LOGICAL:
-        return LOGICAL_TIME_ID;
-      case MILLIS:
-        return MILLIS_TIME_ID;
-    }
-
-    throw new IllegalArgumentException("Unknown time type " + timeType);
-  }
 
   public abstract void useMaxTimeFromWALog(long time);
 
-  public abstract String getMetadataValue(long time);
+  public abstract MetadataTime getMetadataTime();
 
-  public abstract String getMetadataValue();
+  public abstract MetadataTime getMetadataTime(long time);
 
   public abstract long setUpdateTimes(List<Mutation> mutations);
 
@@ -56,49 +44,22 @@
     m.setSystemTimestamp(lastCommitTime);
   }
 
-  public static TabletTime getInstance(String metadataValue) {
-    if (metadataValue.charAt(0) == LOGICAL_TIME_ID) {
-      return new LogicalTime(Long.parseLong(metadataValue.substring(1)));
-    } else if (metadataValue.charAt(0) == MILLIS_TIME_ID) {
-      return new MillisTime(Long.parseLong(metadataValue.substring(1)));
-    }
+  public static TabletTime getInstance(MetadataTime metadataTime) throws IllegalArgumentException {
 
-    throw new IllegalArgumentException("Time type unknown : " + metadataValue);
-
+    if (metadataTime.getType().equals(TimeType.LOGICAL)) {
+      return new LogicalTime(metadataTime.getTime());
+    } else if (metadataTime.getType().equals(TimeType.MILLIS)) {
+      return new MillisTime(metadataTime.getTime());
+    } else { // unreachable: TimeType defines only LOGICAL and MILLIS
+      throw new IllegalArgumentException("Time type unknown : " + metadataTime);
+    }
   }
 
-  public static String maxMetadataTime(String mv1, String mv2) {
-    if (mv1 == null && mv2 == null) {
-      return null;
-    }
+  public static MetadataTime maxMetadataTime(MetadataTime mv1, MetadataTime mv2) {
+    // a null value sorts lower, so return the non-null argument (or null if both are)
+    if (mv1 == null || mv2 == null)
+      return mv1 == null ? mv2 : mv1;
 
-    if (mv1 == null) {
-      checkType(mv2);
-      return mv2;
-    }
-
-    if (mv2 == null) {
-      checkType(mv1);
-      return mv1;
-    }
-
-    if (mv1.charAt(0) != mv2.charAt(0))
-      throw new IllegalArgumentException("Time types differ " + mv1 + " " + mv2);
-    checkType(mv1);
-
-    long t1 = Long.parseLong(mv1.substring(1));
-    long t2 = Long.parseLong(mv2.substring(1));
-
-    if (t1 < t2)
-      return mv2;
-    else
-      return mv1;
-
-  }
-
-  private static void checkType(String mv1) {
-    if (mv1.charAt(0) != LOGICAL_TIME_ID && mv1.charAt(0) != MILLIS_TIME_ID)
-      throw new IllegalArgumentException("Invalid time type " + mv1);
+    return mv1.compareTo(mv2) < 0 ? mv2 : mv1;
   }
 
   static class MillisTime extends TabletTime {
@@ -111,13 +72,13 @@
     }
 
     @Override
-    public String getMetadataValue(long time) {
-      return MILLIS_TIME_ID + "" + time;
+    public MetadataTime getMetadataTime() {
+      return getMetadataTime(lastTime);
     }
 
     @Override
-    public String getMetadataValue() {
-      return getMetadataValue(lastTime);
+    public MetadataTime getMetadataTime(long time) {
+      return new MetadataTime(time, TimeType.MILLIS);
     }
 
     @Override
@@ -196,13 +157,13 @@
     }
 
     @Override
-    public String getMetadataValue() {
-      return getMetadataValue(getTime());
+    public MetadataTime getMetadataTime() {
+      return getMetadataTime(getTime());
     }
 
     @Override
-    public String getMetadataValue(long time) {
-      return LOGICAL_TIME_ID + "" + time;
+    public MetadataTime getMetadataTime(long time) {
+      return new MetadataTime(time, TimeType.LOGICAL);
     }
 
     @Override
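
The TabletTime rewrite above replaces the legacy string encoding, where a numeric time was prefixed with `'L'` or `'M'`, with the typed `MetadataTime`. Because `MetadataTime` carries both the value and the `TimeType` and is comparable, `maxMetadataTime` collapses to a `compareTo` call. A sketch using only the `MetadataTime` members visible in this diff (comparing mixed time types is assumed to be an error, so both values here share a type):

```java
import org.apache.accumulo.core.client.admin.TimeType;
import org.apache.accumulo.core.metadata.schema.MetadataTime;

// The legacy metadata value "L42" becomes a typed MetadataTime; comparisons
// that previously parsed the string by hand can rely on compareTo.
public class MetadataTimeSketch {
  public static void main(String[] args) {
    MetadataTime a = new MetadataTime(42, TimeType.LOGICAL); // was "L42"
    MetadataTime b = new MetadataTime(99, TimeType.LOGICAL); // was "L99"

    MetadataTime max = a.compareTo(b) < 0 ? b : a;
    System.out.println(max.getTime() + " " + max.getType()); // 99 LOGICAL
  }
}
```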
diff --git a/server/base/src/main/java/org/apache/accumulo/server/util/ChangeSecret.java b/server/base/src/main/java/org/apache/accumulo/server/util/ChangeSecret.java
index 7980462..a8a7d3b 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/util/ChangeSecret.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/util/ChangeSecret.java
@@ -61,7 +61,7 @@
   }
 
   public static void main(String[] args) throws Exception {
-    SiteConfiguration siteConfig = new SiteConfiguration();
+    var siteConfig = SiteConfiguration.auto();
     VolumeManager fs = VolumeManagerImpl.get(siteConfig, new Configuration());
     verifyHdfsWritePermission(fs);
 
@@ -108,12 +108,10 @@
         context.getZooKeepersSessionTimeOut(), oldPassword);
     String root = context.getZooKeeperRoot();
     final List<String> ephemerals = new ArrayList<>();
-    recurse(zooReader, root, new Visitor() {
-      @Override
-      public void visit(ZooReader zoo, String path) throws Exception {
-        Stat stat = zoo.getStatus(path);
-        if (stat.getEphemeralOwner() != 0)
-          ephemerals.add(path);
+    recurse(zooReader, root, (zoo, path) -> {
+      Stat stat = zoo.getStatus(path);
+      if (stat.getEphemeralOwner() != 0) {
+        ephemerals.add(path);
       }
     });
     if (ephemerals.size() > 0) {
@@ -133,28 +131,25 @@
         context.getZooKeepersSessionTimeOut(), newPass);
 
     String root = context.getZooKeeperRoot();
-    recurse(orig, root, new Visitor() {
-      @Override
-      public void visit(ZooReader zoo, String path) throws Exception {
-        String newPath = path.replace(context.getInstanceID(), newInstanceId);
-        byte[] data = zoo.getData(path, null);
-        List<ACL> acls = orig.getZooKeeper().getACL(path, new Stat());
-        if (acls.containsAll(Ids.READ_ACL_UNSAFE)) {
-          new_.putPersistentData(newPath, data, NodeExistsPolicy.FAIL);
-        } else {
-          // upgrade
-          if (acls.containsAll(Ids.OPEN_ACL_UNSAFE)) {
-            // make user nodes private, they contain the user's password
-            String[] parts = path.split("/");
-            if (parts[parts.length - 2].equals("users")) {
-              new_.putPrivatePersistentData(newPath, data, NodeExistsPolicy.FAIL);
-            } else {
-              // everything else can have the readable acl
-              new_.putPersistentData(newPath, data, NodeExistsPolicy.FAIL);
-            }
-          } else {
+    recurse(orig, root, (zoo, path) -> {
+      String newPath = path.replace(context.getInstanceID(), newInstanceId);
+      byte[] data = zoo.getData(path, null);
+      List<ACL> acls = orig.getZooKeeper().getACL(path, new Stat());
+      if (acls.containsAll(Ids.READ_ACL_UNSAFE)) {
+        new_.putPersistentData(newPath, data, NodeExistsPolicy.FAIL);
+      } else {
+        // upgrade
+        if (acls.containsAll(Ids.OPEN_ACL_UNSAFE)) {
+          // make user nodes private, they contain the user's password
+          String[] parts = path.split("/");
+          if (parts[parts.length - 2].equals("users")) {
             new_.putPrivatePersistentData(newPath, data, NodeExistsPolicy.FAIL);
+          } else {
+            // everything else can have the readable acl
+            new_.putPersistentData(newPath, data, NodeExistsPolicy.FAIL);
           }
+        } else {
+          new_.putPrivatePersistentData(newPath, data, NodeExistsPolicy.FAIL);
         }
       }
     });
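
The ChangeSecret hunks convert the private `Visitor` callbacks passed to `recurse` into lambdas, which requires only that the interface have a single abstract method. A stand-in sketch with the same shape (not the Accumulo original):

```java
// Any single-method callback interface can receive a lambda; the throws
// clause on the interface method lets the lambda body throw checked
// exceptions, matching the ZooKeeper visitors above.
public class VisitorLambdaSketch {
  @FunctionalInterface
  interface Visitor {
    void visit(String path) throws Exception;
  }

  static void recurse(String root, Visitor v) throws Exception {
    v.visit(root); // a real implementation would walk children recursively
  }

  public static void main(String[] args) throws Exception {
    recurse("/accumulo", path -> System.out.println("visited " + path));
  }
}
```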
diff --git a/server/base/src/main/java/org/apache/accumulo/server/util/CleanZookeeper.java b/server/base/src/main/java/org/apache/accumulo/server/util/CleanZookeeper.java
index b2ce11b..aee6ae0 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/util/CleanZookeeper.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/util/CleanZookeeper.java
@@ -50,7 +50,7 @@
     Opts opts = new Opts();
     opts.parseArgs(CleanZookeeper.class.getName(), args);
 
-    try (ServerContext context = new ServerContext(new SiteConfiguration())) {
+    try (var context = new ServerContext(SiteConfiguration.auto())) {
 
       String root = Constants.ZROOT;
       IZooReaderWriter zk = context.getZooReaderWriter();
diff --git a/server/base/src/main/java/org/apache/accumulo/server/util/DefaultMap.java b/server/base/src/main/java/org/apache/accumulo/server/util/DefaultMap.java
index 027cc92..5cca4b3 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/util/DefaultMap.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/util/DefaultMap.java
@@ -50,7 +50,7 @@
   @SuppressWarnings("unchecked")
   private V construct() {
     try {
-      return (V) dfault.getClass().newInstance();
+      return (V) dfault.getClass().getDeclaredConstructor().newInstance();
     } catch (Exception ex) {
       return dfault;
     }
diff --git a/server/base/src/main/java/org/apache/accumulo/server/util/DeleteZooInstance.java b/server/base/src/main/java/org/apache/accumulo/server/util/DeleteZooInstance.java
index de06b32..7e37415 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/util/DeleteZooInstance.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/util/DeleteZooInstance.java
@@ -59,7 +59,7 @@
     Opts opts = new Opts();
     opts.parseArgs(DeleteZooInstance.class.getName(), args);
 
-    ZooReaderWriter zk = new ZooReaderWriter(new SiteConfiguration());
+    var zk = new ZooReaderWriter(SiteConfiguration.auto());
     // try instance name:
     Set<String> instances = new HashSet<>(zk.getChildren(Constants.ZROOT + Constants.ZINSTANCES));
     Set<String> uuids = new HashSet<>(zk.getChildren(Constants.ZROOT));
@@ -74,8 +74,9 @@
       for (String instance : instances) {
         String path = Constants.ZROOT + Constants.ZINSTANCES + "/" + instance;
         byte[] data = zk.getData(path, null);
-        if (opts.instance.equals(new String(data, UTF_8)))
+        if (opts.instance.equals(new String(data, UTF_8))) {
           deleteRetry(zk, path);
+        }
       }
       deleteRetry(zk, Constants.ZROOT + "/" + opts.instance);
     }
diff --git a/server/base/src/main/java/org/apache/accumulo/server/util/FindOfflineTablets.java b/server/base/src/main/java/org/apache/accumulo/server/util/FindOfflineTablets.java
index b0d8115..3ff7822 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/util/FindOfflineTablets.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/util/FindOfflineTablets.java
@@ -70,7 +70,8 @@
     tservers.startListeningForTabletServerChanges();
     scanning.set(true);
 
-    Iterator<TabletLocationState> zooScanner = new ZooTabletStateStore(context).iterator();
+    Iterator<TabletLocationState> zooScanner =
+        new ZooTabletStateStore(context.getAmple()).iterator();
 
     int offline = 0;
 
diff --git a/server/base/src/main/java/org/apache/accumulo/server/util/Info.java b/server/base/src/main/java/org/apache/accumulo/server/util/Info.java
index 5a768c7..7b16159 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/util/Info.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/util/Info.java
@@ -44,7 +44,7 @@
 
   @Override
   public void execute(final String[] args) throws KeeperException, InterruptedException {
-    ServerContext context = new ServerContext(new SiteConfiguration());
+    var context = new ServerContext(SiteConfiguration.auto());
     System.out.println("monitor: " + MonitorUtil.getLocation(context));
     System.out.println("masters: " + context.getMasterLocations());
     System.out.println("zookeepers: " + context.getZooKeepers());
diff --git a/server/base/src/main/java/org/apache/accumulo/server/util/ListInstances.java b/server/base/src/main/java/org/apache/accumulo/server/util/ListInstances.java
index 17c8798..9e9333d 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/util/ListInstances.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/util/ListInstances.java
@@ -65,7 +65,7 @@
     opts.parseArgs(ListInstances.class.getName(), args);
 
     if (opts.keepers == null) {
-      SiteConfiguration siteConfig = new SiteConfiguration();
+      var siteConfig = SiteConfiguration.auto();
       opts.keepers = siteConfig.get(Property.INSTANCE_ZK_HOST);
     }
 
@@ -126,8 +126,9 @@
     public void formatTo(Formatter formatter, int flags, int width, int precision) {
 
       StringBuilder sb = new StringBuilder();
-      for (int i = 0; i < width; i++)
+      for (int i = 0; i < width; i++) {
         sb.append(c);
+      }
       formatter.format(sb.toString());
     }
 
@@ -212,8 +213,9 @@
       List<String> children = zk.getChildren(Constants.ZROOT);
 
       for (String iid : children) {
-        if (iid.equals("instances"))
+        if (iid.equals("instances")) {
           continue;
+        }
         try {
           ts.add(UUID.fromString(iid));
         } catch (Exception e) {
diff --git a/server/base/src/main/java/org/apache/accumulo/server/util/ListVolumesUsed.java b/server/base/src/main/java/org/apache/accumulo/server/util/ListVolumesUsed.java
index b8dec24..1417dd5 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/util/ListVolumesUsed.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/util/ListVolumesUsed.java
@@ -16,7 +16,7 @@
  */
 package org.apache.accumulo.server.util;
 
-import java.util.ArrayList;
+import java.util.Iterator;
 import java.util.Map.Entry;
 import java.util.TreeSet;
 
@@ -24,9 +24,10 @@
 import org.apache.accumulo.core.conf.SiteConfiguration;
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.Value;
-import org.apache.accumulo.core.metadata.MetadataTable;
 import org.apache.accumulo.core.metadata.RootTable;
+import org.apache.accumulo.core.metadata.schema.Ample;
 import org.apache.accumulo.core.metadata.schema.MetadataSchema;
+import org.apache.accumulo.core.metadata.schema.TabletMetadata;
 import org.apache.accumulo.core.security.Authorizations;
 import org.apache.accumulo.core.tabletserver.log.LogEntry;
 import org.apache.accumulo.server.ServerContext;
@@ -37,20 +38,22 @@
 public class ListVolumesUsed {
 
   public static void main(String[] args) throws Exception {
-    listVolumes(new ServerContext(new SiteConfiguration()));
+    listVolumes(new ServerContext(SiteConfiguration.auto()));
   }
 
   private static String getTableURI(String rootTabletDir) {
     Path ret = FileType.TABLE.getVolume(new Path(rootTabletDir));
-    if (ret == null)
+    if (ret == null) {
       return "RELATIVE";
+    }
     return ret.toString();
   }
 
   private static String getLogURI(String logEntry) {
     Path ret = FileType.WAL.getVolume(new Path(logEntry));
-    if (ret == null)
+    if (ret == null) {
       return "RELATIVE";
+    }
     return ret.toString();
   }
 
@@ -62,23 +65,25 @@
     System.out.println("Listing volumes referenced in zookeeper");
     TreeSet<String> volumes = new TreeSet<>();
 
-    volumes.add(getTableURI(MetadataTableUtil.getRootTabletDir(context)));
-    ArrayList<LogEntry> result = new ArrayList<>();
-    MetadataTableUtil.getRootLogEntries(context, result);
-    for (LogEntry logEntry : result) {
+    TabletMetadata rootMeta = context.getAmple().readTablet(RootTable.EXTENT);
+
+    volumes.add(getTableURI(rootMeta.getDir()));
+
+    for (LogEntry logEntry : rootMeta.getLogs()) {
       getLogURIs(volumes, logEntry);
     }
 
-    for (String volume : volumes)
+    for (String volume : volumes) {
       System.out.println("\tVolume : " + volume);
+    }
 
   }
 
-  private static void listTable(String name, ServerContext context) throws Exception {
+  private static void listTable(Ample.DataLevel level, ServerContext context) throws Exception {
 
-    System.out.println("Listing volumes referenced in " + name + " tablets section");
+    System.out.println("Listing volumes referenced in " + level + " tablets section");
 
-    Scanner scanner = context.createScanner(name, Authorizations.EMPTY);
+    Scanner scanner = context.createScanner(level.metaTable(), Authorizations.EMPTY);
 
     scanner.setRange(MetadataSchema.TabletsSection.getRange());
     scanner.fetchColumnFamily(MetadataSchema.TabletsSection.DataFileColumnFamily.NAME);
@@ -101,45 +106,40 @@
       }
     }
 
-    for (String volume : volumes)
+    for (String volume : volumes) {
       System.out.println("\tVolume : " + volume);
-
-    volumes.clear();
-
-    scanner.clearColumns();
-    scanner.setRange(MetadataSchema.DeletesSection.getRange());
-
-    for (Entry<Key,Value> entry : scanner) {
-      String delPath = entry.getKey().getRow().toString()
-          .substring(MetadataSchema.DeletesSection.getRowPrefix().length());
-      volumes.add(getTableURI(delPath));
     }
 
-    System.out.println("Listing volumes referenced in " + name
+    System.out.println("Listing volumes referenced in " + level
         + " deletes section (volume replacement occurrs at deletion time)");
+    volumes.clear();
 
-    for (String volume : volumes)
+    Iterator<String> delPaths = context.getAmple().getGcCandidates(level, "");
+    while (delPaths.hasNext()) {
+      volumes.add(getTableURI(delPaths.next()));
+    }
+    for (String volume : volumes) {
       System.out.println("\tVolume : " + volume);
+    }
 
+    System.out.println("Listing volumes referenced in " + level + " current logs");
     volumes.clear();
 
     WalStateManager wals = new WalStateManager(context);
     for (Path path : wals.getAllState().keySet()) {
       volumes.add(getLogURI(path.toString()));
     }
-
-    System.out.println("Listing volumes referenced in " + name + " current logs");
-
-    for (String volume : volumes)
+    for (String volume : volumes) {
       System.out.println("\tVolume : " + volume);
+    }
   }
 
   public static void listVolumes(ServerContext context) throws Exception {
     listZookeeper(context);
     System.out.println();
-    listTable(RootTable.NAME, context);
+    listTable(Ample.DataLevel.METADATA, context);
     System.out.println();
-    listTable(MetadataTable.NAME, context);
+    listTable(Ample.DataLevel.USER, context);
   }
 
 }
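
ListVolumesUsed now reads through Ample rather than scanning metadata by hand: `DataLevel` abstracts over which store holds a tablet's entries, `metaTable()` names the table to scan, and `getGcCandidates` replaces the hand-rolled scan of the deletes section. A compile-oriented sketch restricted to the calls that appear in this diff (it assumes a live `ServerContext`, so there is no standalone `main`):

```java
import java.util.Iterator;

import org.apache.accumulo.core.metadata.schema.Ample;
import org.apache.accumulo.server.ServerContext;

// Lists GC candidate paths for one metadata level, as the rewritten
// listTable() does above. The empty string starts iteration from the
// beginning of the candidate range.
public class GcCandidateSketch {
  static void printCandidates(ServerContext context, Ample.DataLevel level) {
    Iterator<String> delPaths = context.getAmple().getGcCandidates(level, "");
    while (delPaths.hasNext()) {
      System.out.println(level + " candidate: " + delPaths.next());
    }
  }
}
```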
diff --git a/server/base/src/main/java/org/apache/accumulo/server/util/LoginProperties.java b/server/base/src/main/java/org/apache/accumulo/server/util/LoginProperties.java
index 0544afe..f83ba80 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/util/LoginProperties.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/util/LoginProperties.java
@@ -43,11 +43,11 @@
 
   @Override
   public void execute(String[] args) throws Exception {
-    try (ServerContext context = new ServerContext(new SiteConfiguration())) {
+    try (var context = new ServerContext(SiteConfiguration.auto())) {
       AccumuloConfiguration config = context.getServerConfFactory().getSystemConfiguration();
       Authenticator authenticator = AccumuloVFSClassLoader.getClassLoader()
           .loadClass(config.get(Property.INSTANCE_SECURITY_AUTHENTICATOR))
-          .asSubclass(Authenticator.class).newInstance();
+          .asSubclass(Authenticator.class).getDeclaredConstructor().newInstance();
 
       System.out
           .println("Supported token types for " + authenticator.getClass().getName() + " are : ");
@@ -56,7 +56,8 @@
         System.out
             .println("\t" + tokenType.getName() + ", which accepts the following properties : ");
 
-        for (TokenProperty tokenProperty : tokenType.newInstance().getProperties()) {
+        for (TokenProperty tokenProperty : tokenType.getDeclaredConstructor().newInstance()
+            .getProperties()) {
           System.out.println("\t\t" + tokenProperty);
         }
 
diff --git a/server/base/src/main/java/org/apache/accumulo/server/util/MasterMetadataUtil.java b/server/base/src/main/java/org/apache/accumulo/server/util/MasterMetadataUtil.java
index 1b053da..55cc43b 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/util/MasterMetadataUtil.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/util/MasterMetadataUtil.java
@@ -16,7 +16,6 @@
  */
 package org.apache.accumulo.server.util;
 
-import static java.nio.charset.StandardCharsets.UTF_8;
 import static org.apache.accumulo.fate.util.UtilWaitThread.sleepUninterruptibly;
 
 import java.util.ArrayList;
@@ -34,24 +33,21 @@
 import org.apache.accumulo.core.client.Scanner;
 import org.apache.accumulo.core.clientImpl.ScannerImpl;
 import org.apache.accumulo.core.data.Key;
-import org.apache.accumulo.core.data.Mutation;
 import org.apache.accumulo.core.data.PartialKey;
 import org.apache.accumulo.core.data.Range;
 import org.apache.accumulo.core.data.TableId;
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.dataImpl.KeyExtent;
 import org.apache.accumulo.core.metadata.MetadataTable;
+import org.apache.accumulo.core.metadata.schema.Ample.TabletMutator;
 import org.apache.accumulo.core.metadata.schema.DataFileValue;
 import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection;
 import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.DataFileColumnFamily;
-import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.LogColumnFamily;
-import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.ScanFileColumnFamily;
+import org.apache.accumulo.core.metadata.schema.MetadataTime;
+import org.apache.accumulo.core.metadata.schema.TabletMetadata;
+import org.apache.accumulo.core.metadata.schema.TabletMetadata.LocationType;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.core.util.ColumnFQ;
-import org.apache.accumulo.fate.FateTxId;
-import org.apache.accumulo.fate.zookeeper.IZooReaderWriter;
 import org.apache.accumulo.fate.zookeeper.ZooLock;
-import org.apache.accumulo.fate.zookeeper.ZooUtil.NodeMissingPolicy;
 import org.apache.accumulo.server.ServerContext;
 import org.apache.accumulo.server.fs.FileRef;
 import org.apache.accumulo.server.master.state.TServerInstance;
@@ -66,75 +62,57 @@
 
   public static void addNewTablet(ServerContext context, KeyExtent extent, String path,
       TServerInstance location, Map<FileRef,DataFileValue> datafileSizes,
-      Map<Long,? extends Collection<FileRef>> bulkLoadedFiles, String time, long lastFlushID,
+      Map<Long,? extends Collection<FileRef>> bulkLoadedFiles, MetadataTime time, long lastFlushID,
       long lastCompactID, ZooLock zooLock) {
-    Mutation m = extent.getPrevRowUpdateMutation();
 
-    TabletsSection.ServerColumnFamily.DIRECTORY_COLUMN.put(m, new Value(path.getBytes(UTF_8)));
-    TabletsSection.ServerColumnFamily.TIME_COLUMN.put(m, new Value(time.getBytes(UTF_8)));
+    TabletMutator tablet = context.getAmple().mutateTablet(extent);
+    tablet.putPrevEndRow(extent.getPrevEndRow());
+    tablet.putZooLock(zooLock);
+    tablet.putDir(path);
+    tablet.putTime(time);
+
     if (lastFlushID > 0)
-      TabletsSection.ServerColumnFamily.FLUSH_COLUMN.put(m,
-          new Value(("" + lastFlushID).getBytes()));
+      tablet.putFlushId(lastFlushID);
+
     if (lastCompactID > 0)
-      TabletsSection.ServerColumnFamily.COMPACT_COLUMN.put(m,
-          new Value(("" + lastCompactID).getBytes()));
+      tablet.putCompactionId(lastCompactID);
 
     if (location != null) {
-      location.putLocation(m);
-      location.clearFutureLocation(m);
+      tablet.putLocation(location, LocationType.CURRENT);
+      tablet.deleteLocation(location, LocationType.FUTURE);
     }
 
-    for (Entry<FileRef,DataFileValue> entry : datafileSizes.entrySet()) {
-      m.put(DataFileColumnFamily.NAME, entry.getKey().meta(), new Value(entry.getValue().encode()));
-    }
+    datafileSizes.forEach(tablet::putFile);
 
     for (Entry<Long,? extends Collection<FileRef>> entry : bulkLoadedFiles.entrySet()) {
-      Value tidVal = new Value(FateTxId.formatTid(entry.getKey()));
       for (FileRef ref : entry.getValue()) {
-        m.put(TabletsSection.BulkFileColumnFamily.NAME, ref.meta(), tidVal);
+        tablet.putBulkFile(ref, entry.getKey().longValue());
       }
     }
 
-    MetadataTableUtil.update(context, zooLock, m, extent);
+    tablet.mutate();
   }
 
-  public static KeyExtent fixSplit(ServerContext context, Text metadataEntry,
-      SortedMap<ColumnFQ,Value> columns, ZooLock lock) throws AccumuloException {
-    log.info("Incomplete split {} attempting to fix", metadataEntry);
+  public static KeyExtent fixSplit(ServerContext context, TabletMetadata meta, ZooLock lock)
+      throws AccumuloException {
+    log.info("Incomplete split {} attempting to fix", meta.getExtent());
 
-    Value oper = columns.get(TabletsSection.TabletColumnFamily.OLD_PREV_ROW_COLUMN);
-
-    if (columns.get(TabletsSection.TabletColumnFamily.SPLIT_RATIO_COLUMN) == null) {
+    if (meta.getSplitRatio() == null) {
       throw new IllegalArgumentException(
-          "Metadata entry does not have split ratio (" + metadataEntry + ")");
+          "Metadata entry does not have split ratio (" + meta.getExtent() + ")");
     }
 
-    double splitRatio = Double.parseDouble(
-        new String(columns.get(TabletsSection.TabletColumnFamily.SPLIT_RATIO_COLUMN).get(), UTF_8));
-
-    Value prevEndRowIBW = columns.get(TabletsSection.TabletColumnFamily.PREV_ROW_COLUMN);
-
-    if (prevEndRowIBW == null) {
+    if (meta.getTime() == null) {
       throw new IllegalArgumentException(
-          "Metadata entry does not have prev row (" + metadataEntry + ")");
+          "Metadata entry does not have time (" + meta.getExtent() + ")");
     }
 
-    Value time = columns.get(TabletsSection.ServerColumnFamily.TIME_COLUMN);
-
-    if (time == null) {
-      throw new IllegalArgumentException(
-          "Metadata entry does not have time (" + metadataEntry + ")");
-    }
-
-    Text metadataPrevEndRow = KeyExtent.decodePrevEndRow(prevEndRowIBW);
-
-    TableId tableId = (new KeyExtent(metadataEntry, (Text) null)).getTableId();
-
-    return fixSplit(context, tableId, metadataEntry, metadataPrevEndRow, oper, splitRatio, lock);
+    return fixSplit(context, meta.getTableId(), meta.getExtent().getMetadataEntry(),
+        meta.getPrevEndRow(), meta.getOldPrevEndRow(), meta.getSplitRatio(), lock);
   }
 
   private static KeyExtent fixSplit(ServerContext context, TableId tableId, Text metadataEntry,
-      Text metadataPrevEndRow, Value oper, double splitRatio, ZooLock lock)
+      Text metadataPrevEndRow, Text oper, double splitRatio, ZooLock lock)
       throws AccumuloException {
     if (metadataPrevEndRow == null)
       // something is wrong, this should not happen... if a tablet is split, it will always have a
@@ -150,9 +128,8 @@
 
       if (!scanner2.iterator().hasNext()) {
         log.info("Rolling back incomplete split {} {}", metadataEntry, metadataPrevEndRow);
-        MetadataTableUtil.rollBackSplit(metadataEntry, KeyExtent.decodePrevEndRow(oper), context,
-            lock);
-        return new KeyExtent(metadataEntry, KeyExtent.decodePrevEndRow(oper));
+        MetadataTableUtil.rollBackSplit(metadataEntry, oper, context, lock);
+        return new KeyExtent(metadataEntry, oper);
       } else {
         log.info("Finishing incomplete split {} {}", metadataEntry, metadataPrevEndRow);
 
@@ -210,35 +187,29 @@
       DataFileValue size, String address, TServerInstance lastLocation, ZooLock zooLock,
       boolean insertDeleteFlags) {
 
-    if (insertDeleteFlags) {
-      // add delete flags for those paths before the data file reference is removed
-      MetadataTableUtil.addDeleteEntries(extent, datafilesToDelete, context);
-    }
+    context.getAmple().putGcCandidates(extent.getTableId(), datafilesToDelete);
 
-    // replace data file references to old mapfiles with the new mapfiles
-    Mutation m = new Mutation(extent.getMetadataEntry());
+    TabletMutator tablet = context.getAmple().mutateTablet(extent);
 
-    for (FileRef pathToRemove : datafilesToDelete)
-      m.putDelete(DataFileColumnFamily.NAME, pathToRemove.meta());
-
-    for (FileRef scanFile : scanFiles)
-      m.put(ScanFileColumnFamily.NAME, scanFile.meta(), new Value(new byte[0]));
+    datafilesToDelete.forEach(tablet::deleteFile);
+    scanFiles.forEach(tablet::putScan);
 
     if (size.getNumEntries() > 0)
-      m.put(DataFileColumnFamily.NAME, path.meta(), new Value(size.encode()));
+      tablet.putFile(path, size);
 
     if (compactionId != null)
-      TabletsSection.ServerColumnFamily.COMPACT_COLUMN.put(m,
-          new Value(("" + compactionId).getBytes()));
+      tablet.putCompactionId(compactionId);
 
     TServerInstance self = getTServerInstance(address, zooLock);
-    self.putLastLocation(m);
+    tablet.putLocation(self, LocationType.LAST);
 
     // remove the old location
     if (lastLocation != null && !lastLocation.equals(self))
-      lastLocation.clearLastLocation(m);
+      tablet.deleteLocation(lastLocation, LocationType.LAST);
 
-    MetadataTableUtil.update(context, zooLock, m, extent);
+    tablet.putZooLock(zooLock);
+
+    tablet.mutate();
   }
 
   /**
@@ -249,80 +220,35 @@
    *
    */
   public static void updateTabletDataFile(ServerContext context, KeyExtent extent, FileRef path,
-      FileRef mergeFile, DataFileValue dfv, String time, Set<FileRef> filesInUseByScans,
+      FileRef mergeFile, DataFileValue dfv, MetadataTime time, Set<FileRef> filesInUseByScans,
       String address, ZooLock zooLock, Set<String> unusedWalLogs, TServerInstance lastLocation,
       long flushId) {
-    if (extent.isRootTablet()) {
-      if (unusedWalLogs != null) {
-        updateRootTabletDataFile(context, unusedWalLogs);
-      }
-      return;
-    }
-    Mutation m = getUpdateForTabletDataFile(extent, path, mergeFile, dfv, time, filesInUseByScans,
-        address, zooLock, unusedWalLogs, lastLocation, flushId);
-    MetadataTableUtil.update(context, zooLock, m, extent);
-  }
 
-  /**
-   * Update the data file for the root tablet
-   */
-  private static void updateRootTabletDataFile(ServerContext context, Set<String> unusedWalLogs) {
-    IZooReaderWriter zk = context.getZooReaderWriter();
-    String root = MetadataTableUtil.getZookeeperLogLocation(context);
-    for (String entry : unusedWalLogs) {
-      String[] parts = entry.split("/");
-      String zpath = root + "/" + parts[parts.length - 1];
-      while (true) {
-        try {
-          if (zk.exists(zpath)) {
-            log.debug("Removing WAL reference for root table {}", zpath);
-            zk.recursiveDelete(zpath, NodeMissingPolicy.SKIP);
-          }
-          break;
-        } catch (KeeperException | InterruptedException e) {
-          log.error("{}", e.getMessage(), e);
-        }
-        sleepUninterruptibly(1, TimeUnit.SECONDS);
-      }
-    }
-  }
-
-  /**
-   * Create an update that updates a tablet
-   *
-   * @return A Mutation to update a tablet from the given information
-   */
-  private static Mutation getUpdateForTabletDataFile(KeyExtent extent, FileRef path,
-      FileRef mergeFile, DataFileValue dfv, String time, Set<FileRef> filesInUseByScans,
-      String address, ZooLock zooLock, Set<String> unusedWalLogs, TServerInstance lastLocation,
-      long flushId) {
-    Mutation m = new Mutation(extent.getMetadataEntry());
+    TabletMutator tablet = context.getAmple().mutateTablet(extent);
 
     if (dfv.getNumEntries() > 0) {
-      m.put(DataFileColumnFamily.NAME, path.meta(), new Value(dfv.encode()));
-      TabletsSection.ServerColumnFamily.TIME_COLUMN.put(m, new Value(time.getBytes(UTF_8)));
-      // stuff in this location
+      tablet.putFile(path, dfv);
+      tablet.putTime(time);
+
       TServerInstance self = getTServerInstance(address, zooLock);
-      self.putLastLocation(m);
-      // erase the old location
-      if (lastLocation != null && !lastLocation.equals(self))
-        lastLocation.clearLastLocation(m);
-    }
-    if (unusedWalLogs != null) {
-      for (String entry : unusedWalLogs) {
-        m.putDelete(LogColumnFamily.NAME, new Text(entry));
+      tablet.putLocation(self, LocationType.LAST);
+
+      // remove the old location
+      if (lastLocation != null && !lastLocation.equals(self)) {
+        tablet.deleteLocation(lastLocation, LocationType.LAST);
       }
     }
+    tablet.putFlushId(flushId);
 
-    for (FileRef scanFile : filesInUseByScans)
-      m.put(ScanFileColumnFamily.NAME, scanFile.meta(), new Value(new byte[0]));
+    if (mergeFile != null) {
+      tablet.deleteFile(mergeFile);
+    }
 
-    if (mergeFile != null)
-      m.putDelete(DataFileColumnFamily.NAME, mergeFile.meta());
+    unusedWalLogs.forEach(tablet::deleteWal);
+    filesInUseByScans.forEach(tablet::putScan);
 
-    TabletsSection.ServerColumnFamily.FLUSH_COLUMN.put(m,
-        new Value(Long.toString(flushId).getBytes(UTF_8)));
+    tablet.putZooLock(zooLock);
 
-    return m;
+    tablet.mutate();
   }
 }
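
Every metadata update in MasterMetadataUtil now funnels through one builder-style flow: obtain a `TabletMutator` for the extent, stage `put*`/`delete*` changes, attach the ZooKeeper lock, and commit with `mutate()`. A sketch of that flow using only calls that appear in this diff (the surrounding arguments are assumed):

```java
import org.apache.accumulo.core.dataImpl.KeyExtent;
import org.apache.accumulo.core.metadata.schema.Ample.TabletMutator;
import org.apache.accumulo.fate.zookeeper.ZooLock;
import org.apache.accumulo.server.ServerContext;

// Mirrors the shape of the rewritten updateTabletDir(): stage changes on a
// TabletMutator, then commit them in a single mutate() call.
public class TabletMutatorSketch {
  static void updateDir(ServerContext context, KeyExtent extent, String newDir,
      ZooLock zooLock) {
    TabletMutator tablet = context.getAmple().mutateTablet(extent);
    tablet.putDir(newDir);      // stage the directory column
    tablet.putZooLock(zooLock); // record that the caller holds the lock
    tablet.mutate();            // single commit of all staged changes
  }
}
```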
diff --git a/server/base/src/main/java/org/apache/accumulo/server/util/MetadataTableUtil.java b/server/base/src/main/java/org/apache/accumulo/server/util/MetadataTableUtil.java
index 8576bf7..937e686 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/util/MetadataTableUtil.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/util/MetadataTableUtil.java
@@ -17,6 +17,14 @@
 package org.apache.accumulo.server.util;
 
 import static java.nio.charset.StandardCharsets.UTF_8;
+import static org.apache.accumulo.core.metadata.schema.TabletMetadata.ColumnType.CLONED;
+import static org.apache.accumulo.core.metadata.schema.TabletMetadata.ColumnType.DIR;
+import static org.apache.accumulo.core.metadata.schema.TabletMetadata.ColumnType.FILES;
+import static org.apache.accumulo.core.metadata.schema.TabletMetadata.ColumnType.LAST;
+import static org.apache.accumulo.core.metadata.schema.TabletMetadata.ColumnType.LOCATION;
+import static org.apache.accumulo.core.metadata.schema.TabletMetadata.ColumnType.LOGS;
+import static org.apache.accumulo.core.metadata.schema.TabletMetadata.ColumnType.PREV_ROW;
+import static org.apache.accumulo.core.metadata.schema.TabletMetadata.ColumnType.TIME;
 import static org.apache.accumulo.fate.util.UtilWaitThread.sleepUninterruptibly;
 
 import java.io.IOException;
@@ -43,28 +51,28 @@
 import org.apache.accumulo.core.client.MutationsRejectedException;
 import org.apache.accumulo.core.client.Scanner;
 import org.apache.accumulo.core.client.TableNotFoundException;
+import org.apache.accumulo.core.client.admin.TimeType;
 import org.apache.accumulo.core.clientImpl.BatchWriterImpl;
 import org.apache.accumulo.core.clientImpl.Credentials;
 import org.apache.accumulo.core.clientImpl.ScannerImpl;
 import org.apache.accumulo.core.clientImpl.Writer;
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.Mutation;
-import org.apache.accumulo.core.data.PartialKey;
 import org.apache.accumulo.core.data.Range;
 import org.apache.accumulo.core.data.TableId;
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.dataImpl.KeyExtent;
 import org.apache.accumulo.core.metadata.MetadataTable;
 import org.apache.accumulo.core.metadata.RootTable;
+import org.apache.accumulo.core.metadata.schema.Ample.TabletMutator;
 import org.apache.accumulo.core.metadata.schema.DataFileValue;
 import org.apache.accumulo.core.metadata.schema.MetadataSchema;
 import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection;
+import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.BulkFileColumnFamily;
 import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.ChoppedColumnFamily;
 import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.ClonedColumnFamily;
 import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.DataFileColumnFamily;
-import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.LogColumnFamily;
-import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.ScanFileColumnFamily;
-import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.ServerColumnFamily;
+import org.apache.accumulo.core.metadata.schema.MetadataTime;
 import org.apache.accumulo.core.metadata.schema.TabletDeletedException;
 import org.apache.accumulo.core.metadata.schema.TabletMetadata;
 import org.apache.accumulo.core.metadata.schema.TabletsMetadata;
@@ -75,32 +83,27 @@
 import org.apache.accumulo.core.util.FastFormat;
 import org.apache.accumulo.core.util.Pair;
 import org.apache.accumulo.fate.FateTxId;
-import org.apache.accumulo.fate.zookeeper.IZooReaderWriter;
 import org.apache.accumulo.fate.zookeeper.ZooLock;
-import org.apache.accumulo.fate.zookeeper.ZooUtil.NodeExistsPolicy;
-import org.apache.accumulo.fate.zookeeper.ZooUtil.NodeMissingPolicy;
 import org.apache.accumulo.server.ServerConstants;
 import org.apache.accumulo.server.ServerContext;
 import org.apache.accumulo.server.fs.FileRef;
 import org.apache.accumulo.server.fs.VolumeChooserEnvironment;
 import org.apache.accumulo.server.fs.VolumeChooserEnvironmentImpl;
 import org.apache.accumulo.server.fs.VolumeManager;
-import org.apache.hadoop.fs.FileStatus;
+import org.apache.accumulo.server.metadata.ServerAmpleImpl;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.io.Text;
-import org.apache.zookeeper.KeeperException;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 import com.google.common.annotations.VisibleForTesting;
-import com.google.common.collect.Iterables;
 
 /**
  * provides a reference to the metadata table for updates by tablet servers
  */
 public class MetadataTableUtil {
 
-  private static final Text EMPTY_TEXT = new Text();
+  public static final Text EMPTY_TEXT = new Text();
   private static Map<Credentials,Writer> root_tables = new HashMap<>();
   private static Map<Credentials,Writer> metadata_tables = new HashMap<>();
   private static final Logger log = LoggerFactory.getLogger(MetadataTableUtil.class);
@@ -161,54 +164,51 @@
 
   public static void updateTabletFlushID(KeyExtent extent, long flushID, ServerContext context,
       ZooLock zooLock) {
-    if (!extent.isRootTablet()) {
-      Mutation m = new Mutation(extent.getMetadataEntry());
-      TabletsSection.ServerColumnFamily.FLUSH_COLUMN.put(m,
-          new Value((flushID + "").getBytes(UTF_8)));
-      update(context, zooLock, m, extent);
-    }
+    TabletMutator tablet = context.getAmple().mutateTablet(extent);
+    tablet.putFlushId(flushID);
+    tablet.putZooLock(zooLock);
+    tablet.mutate();
   }
 
   public static void updateTabletCompactID(KeyExtent extent, long compactID, ServerContext context,
       ZooLock zooLock) {
-    if (!extent.isRootTablet()) {
-      Mutation m = new Mutation(extent.getMetadataEntry());
-      TabletsSection.ServerColumnFamily.COMPACT_COLUMN.put(m,
-          new Value((compactID + "").getBytes(UTF_8)));
-      update(context, zooLock, m, extent);
-    }
+    TabletMutator tablet = context.getAmple().mutateTablet(extent);
+    tablet.putCompactionId(compactID);
+    tablet.putZooLock(zooLock);
+    tablet.mutate();
   }
 
   public static void updateTabletDataFile(long tid, KeyExtent extent,
-      Map<FileRef,DataFileValue> estSizes, String time, ServerContext context, ZooLock zooLock) {
-    Mutation m = new Mutation(extent.getMetadataEntry());
-    Value tidValue = new Value(FateTxId.formatTid(tid));
+      Map<FileRef,DataFileValue> estSizes, MetadataTime time, ServerContext context,
+      ZooLock zooLock) {
+    TabletMutator tablet = context.getAmple().mutateTablet(extent);
+    tablet.putTime(time);
+    estSizes.forEach(tablet::putFile);
 
-    for (Entry<FileRef,DataFileValue> entry : estSizes.entrySet()) {
-      Text file = entry.getKey().meta();
-      m.put(DataFileColumnFamily.NAME, file, new Value(entry.getValue().encode()));
-      m.put(TabletsSection.BulkFileColumnFamily.NAME, file, tidValue);
+    for (FileRef file : estSizes.keySet()) {
+      tablet.putBulkFile(file, tid);
     }
-    TabletsSection.ServerColumnFamily.TIME_COLUMN.put(m, new Value(time.getBytes(UTF_8)));
-    update(context, zooLock, m, extent);
+    tablet.putZooLock(zooLock);
+    tablet.mutate();
   }
 
   public static void updateTabletDir(KeyExtent extent, String newDir, ServerContext context,
-      ZooLock lock) {
-    Mutation m = new Mutation(extent.getMetadataEntry());
-    TabletsSection.ServerColumnFamily.DIRECTORY_COLUMN.put(m, new Value(newDir.getBytes(UTF_8)));
-    update(context, lock, m, extent);
+      ZooLock zooLock) {
+    TabletMutator tablet = context.getAmple().mutateTablet(extent);
+    tablet.putDir(newDir);
+    tablet.putZooLock(zooLock);
+    tablet.mutate();
   }
 
-  public static void addTablet(KeyExtent extent, String path, ServerContext context, char timeType,
-      ZooLock lock) {
-    Mutation m = extent.getPrevRowUpdateMutation();
+  public static void addTablet(KeyExtent extent, String path, ServerContext context,
+      TimeType timeType, ZooLock zooLock) {
+    TabletMutator tablet = context.getAmple().mutateTablet(extent);
+    tablet.putPrevEndRow(extent.getPrevEndRow());
+    tablet.putDir(path);
+    tablet.putTime(new MetadataTime(0, timeType));
+    tablet.putZooLock(zooLock);
+    tablet.mutate();
 
-    TabletsSection.ServerColumnFamily.DIRECTORY_COLUMN.put(m, new Value(path.getBytes(UTF_8)));
-    TabletsSection.ServerColumnFamily.TIME_COLUMN.put(m,
-        new Value((timeType + "0").getBytes(UTF_8)));
-
-    update(context, lock, m, extent);
   }
 
   public static void updateTabletVolumes(KeyExtent extent, List<LogEntry> logsToRemove,
@@ -216,96 +216,19 @@
       SortedMap<FileRef,DataFileValue> filesToAdd, String newDir, ZooLock zooLock,
       ServerContext context) {
 
-    if (extent.isRootTablet()) {
-      if (newDir != null)
-        throw new IllegalArgumentException("newDir not expected for " + extent);
+    TabletMutator tabletMutator = context.getAmple().mutateTablet(extent);
+    logsToRemove.forEach(tabletMutator::deleteWal);
+    logsToAdd.forEach(tabletMutator::putWal);
 
-      if (filesToRemove.size() != 0 || filesToAdd.size() != 0)
-        throw new IllegalArgumentException("files not expected for " + extent);
+    filesToRemove.forEach(tabletMutator::deleteFile);
+    filesToAdd.forEach(tabletMutator::putFile);
 
-      // add before removing in case of process death
-      for (LogEntry logEntry : logsToAdd)
-        addRootLogEntry(context, zooLock, logEntry);
+    if (newDir != null)
+      tabletMutator.putDir(newDir);
 
-      removeUnusedWALEntries(context, extent, logsToRemove, zooLock);
-    } else {
-      Mutation m = new Mutation(extent.getMetadataEntry());
+    tabletMutator.putZooLock(zooLock);
 
-      for (LogEntry logEntry : logsToRemove)
-        m.putDelete(logEntry.getColumnFamily(), logEntry.getColumnQualifier());
-
-      for (LogEntry logEntry : logsToAdd)
-        m.put(logEntry.getColumnFamily(), logEntry.getColumnQualifier(), logEntry.getValue());
-
-      for (FileRef fileRef : filesToRemove)
-        m.putDelete(DataFileColumnFamily.NAME, fileRef.meta());
-
-      for (Entry<FileRef,DataFileValue> entry : filesToAdd.entrySet())
-        m.put(DataFileColumnFamily.NAME, entry.getKey().meta(),
-            new Value(entry.getValue().encode()));
-
-      if (newDir != null)
-        ServerColumnFamily.DIRECTORY_COLUMN.put(m, new Value(newDir.getBytes(UTF_8)));
-
-      update(context, m, extent);
-    }
-  }
-
-  private interface ZooOperation {
-    void run(IZooReaderWriter rw) throws KeeperException, InterruptedException, IOException;
-  }
-
-  private static void retryZooKeeperUpdate(ServerContext context, ZooLock zooLock,
-      ZooOperation op) {
-    while (true) {
-      try {
-        IZooReaderWriter zoo = context.getZooReaderWriter();
-        if (zoo.isLockHeld(zooLock.getLockID())) {
-          op.run(zoo);
-        }
-        break;
-      } catch (Exception e) {
-        log.error("Unexpected exception {}", e.getMessage(), e);
-      }
-      sleepUninterruptibly(1, TimeUnit.SECONDS);
-    }
-  }
-
-  private static void addRootLogEntry(ServerContext context, ZooLock zooLock,
-      final LogEntry entry) {
-    retryZooKeeperUpdate(context, zooLock, new ZooOperation() {
-      @Override
-      public void run(IZooReaderWriter rw)
-          throws KeeperException, InterruptedException, IOException {
-        String root = getZookeeperLogLocation(context);
-        rw.putPersistentData(root + "/" + entry.getUniqueID(), entry.toBytes(),
-            NodeExistsPolicy.OVERWRITE);
-      }
-    });
-  }
-
-  public static SortedMap<FileRef,DataFileValue> getDataFileSizes(KeyExtent extent,
-      ServerContext context) {
-    TreeMap<FileRef,DataFileValue> sizes = new TreeMap<>();
-
-    try (Scanner mdScanner = new ScannerImpl(context, MetadataTable.ID, Authorizations.EMPTY)) {
-      mdScanner.fetchColumnFamily(DataFileColumnFamily.NAME);
-      Text row = extent.getMetadataEntry();
-
-      Key endKey = new Key(row, DataFileColumnFamily.NAME, new Text(""));
-      endKey = endKey.followingKey(PartialKey.ROW_COLFAM);
-
-      mdScanner.setRange(new Range(new Key(row), endKey));
-      for (Entry<Key,Value> entry : mdScanner) {
-
-        if (!entry.getKey().getRow().equals(row))
-          break;
-        DataFileValue dfv = new DataFileValue(entry.getValue().get());
-        sizes.put(new FileRef(context.getVolumeManager(), entry.getKey()), dfv);
-      }
-
-      return sizes;
-    }
+    tabletMutator.mutate();
   }
 
   public static void rollBackSplit(Text metadataEntry, Text oldPrevEndRow, ServerContext context,
@@ -361,32 +284,23 @@
     // TODO could use batch writer,would need to handle failure and retry like update does -
     // ACCUMULO-1294
     for (FileRef pathToRemove : datafilesToDelete) {
-      update(context, createDeleteMutation(context, tableId, pathToRemove.path().toString()),
+      update(context,
+          ServerAmpleImpl.createDeleteMutation(context, tableId, pathToRemove.path().toString()),
           extent);
     }
   }
 
   public static void addDeleteEntry(ServerContext context, TableId tableId, String path) {
-    update(context, createDeleteMutation(context, tableId, path),
+    update(context, ServerAmpleImpl.createDeleteMutation(context, tableId, path),
         new KeyExtent(tableId, null, null));
   }
 
-  public static Mutation createDeleteMutation(ServerContext context, TableId tableId,
-      String pathToRemove) {
-    Path path = context.getVolumeManager().getFullPath(tableId, pathToRemove);
-    Mutation delFlag = new Mutation(new Text(MetadataSchema.DeletesSection.getRowPrefix() + path));
-    delFlag.put(EMPTY_TEXT, EMPTY_TEXT, new Value(new byte[] {}));
-    return delFlag;
-  }
-
   public static void removeScanFiles(KeyExtent extent, Set<FileRef> scanFiles,
       ServerContext context, ZooLock zooLock) {
-    Mutation m = new Mutation(extent.getMetadataEntry());
-
-    for (FileRef pathToRemove : scanFiles)
-      m.putDelete(ScanFileColumnFamily.NAME, pathToRemove.meta());
-
-    update(context, zooLock, m, extent);
+    TabletMutator tablet = context.getAmple().mutateTablet(extent);
+    scanFiles.forEach(tablet::deleteScan);
+    tablet.putZooLock(zooLock);
+    tablet.mutate();
   }
 
   public static void splitDatafiles(Text midRow, double splitRatio,
@@ -460,11 +374,13 @@
 
           if (key.getColumnFamily().equals(DataFileColumnFamily.NAME)) {
             FileRef ref = new FileRef(context.getVolumeManager(), key);
-            bw.addMutation(createDeleteMutation(context, tableId, ref.meta().toString()));
+            bw.addMutation(
+                ServerAmpleImpl.createDeleteMutation(context, tableId, ref.meta().toString()));
           }
 
           if (TabletsSection.ServerColumnFamily.DIRECTORY_COLUMN.hasColumns(key)) {
-            bw.addMutation(createDeleteMutation(context, tableId, cell.getValue().toString()));
+            bw.addMutation(
+                ServerAmpleImpl.createDeleteMutation(context, tableId, cell.getValue().toString()));
           }
         }
 
@@ -496,206 +412,33 @@
     }
   }
 
-  static String getZookeeperLogLocation(ServerContext context) {
-    return context.getZooKeeperRoot() + RootTable.ZROOT_TABLET_WALOGS;
-  }
-
-  public static void setRootTabletDir(ServerContext context, String dir) throws IOException {
-    IZooReaderWriter zoo = context.getZooReaderWriter();
-    String zpath = context.getZooKeeperRoot() + RootTable.ZROOT_TABLET_PATH;
-    try {
-      zoo.putPersistentData(zpath, dir.getBytes(UTF_8), -1, NodeExistsPolicy.OVERWRITE);
-    } catch (KeeperException e) {
-      throw new IOException(e);
-    } catch (InterruptedException e) {
-      Thread.currentThread().interrupt();
-      throw new IOException(e);
-    }
-  }
-
-  public static String getRootTabletDir(ServerContext context) throws IOException {
-    IZooReaderWriter zoo = context.getZooReaderWriter();
-    String zpath = context.getZooKeeperRoot() + RootTable.ZROOT_TABLET_PATH;
-    try {
-      return new String(zoo.getData(zpath, null), UTF_8);
-    } catch (KeeperException e) {
-      throw new IOException(e);
-    } catch (InterruptedException e) {
-      Thread.currentThread().interrupt();
-      throw new IOException(e);
-    }
-  }
-
   public static Pair<List<LogEntry>,SortedMap<FileRef,DataFileValue>>
-      getFileAndLogEntries(ServerContext context, KeyExtent extent)
-          throws KeeperException, InterruptedException, IOException {
+      getFileAndLogEntries(ServerContext context, KeyExtent extent) throws IOException {
     ArrayList<LogEntry> result = new ArrayList<>();
     TreeMap<FileRef,DataFileValue> sizes = new TreeMap<>();
 
     VolumeManager fs = context.getVolumeManager();
-    if (extent.isRootTablet()) {
-      getRootLogEntries(context, result);
-      Path rootDir = new Path(getRootTabletDir(context));
-      FileStatus[] files = fs.listStatus(rootDir);
-      for (FileStatus fileStatus : files) {
-        if (fileStatus.getPath().toString().endsWith("_tmp")) {
-          continue;
-        }
-        DataFileValue dfv = new DataFileValue(0, 0);
-        sizes.put(new FileRef(fileStatus.getPath().toString(), fileStatus.getPath()), dfv);
-      }
 
-    } else {
-      try (TabletsMetadata tablets = TabletsMetadata.builder().forTablet(extent).fetchFiles()
-          .fetchLogs().fetchPrev().build(context)) {
+    TabletMetadata tablet = context.getAmple().readTablet(extent, FILES, LOGS, PREV_ROW, DIR);
 
-        TabletMetadata tablet = Iterables.getOnlyElement(tablets);
+    if (!tablet.getExtent().equals(extent))
+      throw new RuntimeException("Unexpected extent " + tablet.getExtent() + " expected " + extent);
 
-        if (!tablet.getExtent().equals(extent))
-          throw new RuntimeException(
-              "Unexpected extent " + tablet.getExtent() + " expected " + extent);
+    result.addAll(tablet.getLogs());
 
-        result.addAll(tablet.getLogs());
-        tablet.getFilesMap().forEach((k, v) -> {
-          sizes.put(new FileRef(k, fs.getFullPath(tablet.getTableId(), k)), v);
-        });
-      }
-    }
+    tablet.getFilesMap().forEach((k, v) -> {
+      sizes.put(new FileRef(k, fs.getFullPath(tablet.getTableId(), k)), v);
+    });
 
     return new Pair<>(result, sizes);
   }
 
-  public static List<LogEntry> getLogEntries(ServerContext context, KeyExtent extent)
-      throws IOException, KeeperException, InterruptedException {
-    log.info("Scanning logging entries for {}", extent);
-    ArrayList<LogEntry> result = new ArrayList<>();
-    if (extent.equals(RootTable.EXTENT)) {
-      log.info("Getting logs for root tablet from zookeeper");
-      getRootLogEntries(context, result);
-    } else {
-      log.info("Scanning metadata for logs used for tablet {}", extent);
-      Scanner scanner = getTabletLogScanner(context, extent);
-      Text pattern = extent.getMetadataEntry();
-      for (Entry<Key,Value> entry : scanner) {
-        Text row = entry.getKey().getRow();
-        if (entry.getKey().getColumnFamily().equals(LogColumnFamily.NAME)) {
-          if (row.equals(pattern)) {
-            result.add(LogEntry.fromKeyValue(entry.getKey(), entry.getValue()));
-          }
-        }
-      }
-    }
-
-    log.info("Returning logs {} for extent {}", result, extent);
-    return result;
-  }
-
-  static void getRootLogEntries(ServerContext context, final ArrayList<LogEntry> result)
-      throws KeeperException, InterruptedException, IOException {
-    IZooReaderWriter zoo = context.getZooReaderWriter();
-    String root = getZookeeperLogLocation(context);
-    // there's a little race between getting the children and fetching
-    // the data. The log can be removed in between.
-    while (true) {
-      result.clear();
-      for (String child : zoo.getChildren(root)) {
-        try {
-          LogEntry e = LogEntry.fromBytes(zoo.getData(root + "/" + child, null));
-          // upgrade from !0;!0<< -> +r<<
-          e = new LogEntry(RootTable.EXTENT, 0, e.server, e.filename);
-          result.add(e);
-        } catch (KeeperException.NoNodeException ex) {
-          continue;
-        }
-      }
-      break;
-    }
-  }
-
-  private static Scanner getTabletLogScanner(ServerContext context, KeyExtent extent) {
-    TableId tableId = MetadataTable.ID;
-    if (extent.isMeta())
-      tableId = RootTable.ID;
-    Scanner scanner = new ScannerImpl(context, tableId, Authorizations.EMPTY);
-    scanner.fetchColumnFamily(LogColumnFamily.NAME);
-    Text start = extent.getMetadataEntry();
-    Key endKey = new Key(start, LogColumnFamily.NAME);
-    endKey = endKey.followingKey(PartialKey.ROW_COLFAM);
-    scanner.setRange(new Range(new Key(start), endKey));
-    return scanner;
-  }
-
-  private static class LogEntryIterator implements Iterator<LogEntry> {
-
-    Iterator<LogEntry> zookeeperEntries = null;
-    Iterator<LogEntry> rootTableEntries = null;
-    Iterator<Entry<Key,Value>> metadataEntries = null;
-
-    LogEntryIterator(ServerContext context)
-        throws IOException, KeeperException, InterruptedException {
-      zookeeperEntries = getLogEntries(context, RootTable.EXTENT).iterator();
-      rootTableEntries =
-          getLogEntries(context, new KeyExtent(MetadataTable.ID, null, null)).iterator();
-      try {
-        Scanner scanner = context.createScanner(MetadataTable.NAME, Authorizations.EMPTY);
-        log.info("Setting range to {}", MetadataSchema.TabletsSection.getRange());
-        scanner.setRange(MetadataSchema.TabletsSection.getRange());
-        scanner.fetchColumnFamily(LogColumnFamily.NAME);
-        metadataEntries = scanner.iterator();
-      } catch (Exception ex) {
-        throw new IOException(ex);
-      }
-    }
-
-    @Override
-    public boolean hasNext() {
-      return zookeeperEntries.hasNext() || rootTableEntries.hasNext() || metadataEntries.hasNext();
-    }
-
-    @Override
-    public LogEntry next() {
-      if (zookeeperEntries.hasNext()) {
-        return zookeeperEntries.next();
-      }
-      if (rootTableEntries.hasNext()) {
-        return rootTableEntries.next();
-      }
-      Entry<Key,Value> entry = metadataEntries.next();
-      return LogEntry.fromKeyValue(entry.getKey(), entry.getValue());
-    }
-
-    @Override
-    public void remove() {
-      throw new UnsupportedOperationException();
-    }
-  }
-
-  public static Iterator<LogEntry> getLogEntries(ServerContext context)
-      throws IOException, KeeperException, InterruptedException {
-    return new LogEntryIterator(context);
-  }
-
   public static void removeUnusedWALEntries(ServerContext context, KeyExtent extent,
       final List<LogEntry> entries, ZooLock zooLock) {
-    if (extent.isRootTablet()) {
-      retryZooKeeperUpdate(context, zooLock, new ZooOperation() {
-        @Override
-        public void run(IZooReaderWriter rw) throws KeeperException, InterruptedException {
-          String root = getZookeeperLogLocation(context);
-          for (LogEntry entry : entries) {
-            String path = root + "/" + entry.getUniqueID();
-            log.debug("Removing " + path + " from zookeeper");
-            rw.recursiveDelete(path, NodeMissingPolicy.SKIP);
-          }
-        }
-      });
-    } else {
-      Mutation m = new Mutation(extent.getMetadataEntry());
-      for (LogEntry entry : entries) {
-        m.putDelete(entry.getColumnFamily(), entry.getColumnQualifier());
-      }
-      update(context, zooLock, m, extent);
-    }
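+    // Ample's TabletMutator covers root and non-root tablets alike, replacing the explicit
+    // ZooKeeper vs. metadata-table branch removed above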
+    TabletMutator tablet = context.getAmple().mutateTablet(extent);
+    entries.forEach(tablet::deleteWal);
+    tablet.putZooLock(zooLock);
+    tablet.mutate();
   }
 
   private static void getFiles(Set<String> files, Collection<String> tabletFiles,
@@ -753,8 +496,7 @@
     }
 
     return TabletsMetadata.builder().scanTable(tableName).overRange(range).checkConsistency()
-        .saveKeyValues().fetchFiles().fetchLocation().fetchLast().fetchCloned().fetchPrev()
-        .fetchTime().build(client);
+        .saveKeyValues().fetch(FILES, LOCATION, LAST, CLONED, PREV_ROW, TIME).build(client);
   }
 
   @VisibleForTesting
@@ -922,21 +664,10 @@
   }
 
   public static void chopped(ServerContext context, KeyExtent extent, ZooLock zooLock) {
-    Mutation m = new Mutation(extent.getMetadataEntry());
-    ChoppedColumnFamily.CHOPPED_COLUMN.put(m, new Value("chopped".getBytes(UTF_8)));
-    update(context, zooLock, m, extent);
-  }
-
-  public static long getBulkLoadTid(Value v) {
-    String vs = v.toString();
-
-    if (FateTxId.isFormatedTid(vs)) {
-      return FateTxId.fromString(vs);
-    } else {
-      // a new serialization format was introduced in 2.0. This code supports deserializing the
-      // old format.
-      return Long.parseLong(vs);
-    }
+    TabletMutator tablet = context.getAmple().mutateTablet(extent);
+    tablet.putChopped();
+    tablet.putZooLock(zooLock);
+    tablet.mutate();
   }
 
   public static void removeBulkLoadEntries(AccumuloClient client, TableId tableId, long tid)
@@ -950,7 +681,7 @@
 
       for (Entry<Key,Value> entry : mscanner) {
         log.trace("Looking at entry {} with tid {}", entry, tid);
-        long entryTid = getBulkLoadTid(entry.getValue());
+        long entryTid = BulkFileColumnFamily.getBulkLoadTid(entry.getValue());
         if (tid == entryTid) {
           log.trace("deleting entry {}", entry);
           Key key = entry.getKey();
@@ -962,25 +693,6 @@
     }
   }
 
-  public static Map<Long,? extends Collection<FileRef>> getBulkFilesLoaded(ServerContext context,
-      KeyExtent extent) {
-    Text metadataRow = extent.getMetadataEntry();
-    Map<Long,List<FileRef>> result = new HashMap<>();
-
-    VolumeManager fs = context.getVolumeManager();
-    try (Scanner scanner = new ScannerImpl(context,
-        extent.isMeta() ? RootTable.ID : MetadataTable.ID, Authorizations.EMPTY)) {
-      scanner.setRange(new Range(metadataRow));
-      scanner.fetchColumnFamily(TabletsSection.BulkFileColumnFamily.NAME);
-      for (Entry<Key,Value> entry : scanner) {
-        Long tid = getBulkLoadTid(entry.getValue());
-        List<FileRef> lst = result.computeIfAbsent(tid, k -> new ArrayList<FileRef>());
-        lst.add(new FileRef(fs, entry.getKey()));
-      }
-    }
-    return result;
-  }
-
   public static void addBulkLoadInProgressFlag(ServerContext context, String path, long fateTxid) {
 
     Mutation m = new Mutation(MetadataSchema.BlipSection.getRowPrefix() + path);
@@ -1031,5 +743,4 @@
 
     return tabletEntries;
   }
-
 }
diff --git a/server/base/src/main/java/org/apache/accumulo/server/util/RestoreZookeeper.java b/server/base/src/main/java/org/apache/accumulo/server/util/RestoreZookeeper.java
index 4ef08bf..4084df9 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/util/RestoreZookeeper.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/util/RestoreZookeeper.java
@@ -119,7 +119,7 @@
     Opts opts = new Opts();
     opts.parseArgs(RestoreZookeeper.class.getName(), args);
 
-    ZooReaderWriter zoo = new ZooReaderWriter(new SiteConfiguration());
+    var zoo = new ZooReaderWriter(SiteConfiguration.auto());
 
     InputStream in = System.in;
     if (opts.file != null) {
diff --git a/server/base/src/main/java/org/apache/accumulo/server/util/TabletServerLocks.java b/server/base/src/main/java/org/apache/accumulo/server/util/TabletServerLocks.java
index 52e8d6c..08591ab 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/util/TabletServerLocks.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/util/TabletServerLocks.java
@@ -41,7 +41,7 @@
 
   public static void main(String[] args) throws Exception {
 
-    try (ServerContext context = new ServerContext(new SiteConfiguration())) {
+    try (var context = new ServerContext(SiteConfiguration.auto())) {
       String tserverPath = context.getZooKeeperRoot() + Constants.ZTSERVERS;
       Opts opts = new Opts();
       opts.parseArgs(TabletServerLocks.class.getName(), args);
diff --git a/server/base/src/main/java/org/apache/accumulo/server/util/ZooKeeperMain.java b/server/base/src/main/java/org/apache/accumulo/server/util/ZooKeeperMain.java
index 7760836..b3823bb 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/util/ZooKeeperMain.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/util/ZooKeeperMain.java
@@ -64,7 +64,7 @@
   public void execute(final String[] args) throws Exception {
     Opts opts = new Opts();
     opts.parseArgs(ZooKeeperMain.class.getName(), args);
-    try (ServerContext context = new ServerContext(new SiteConfiguration())) {
+    try (var context = new ServerContext(SiteConfiguration.auto())) {
       FileSystem fs = context.getVolumeManager().getDefaultVolume().getFileSystem();
       String baseDir = ServerConstants.getBaseUris(context)[0];
       System.out.println("Using " + fs.makeQualified(new Path(baseDir + "/instance_id"))
@@ -73,8 +73,9 @@
         opts.servers = context.getZooKeepers();
       }
       System.out.println("The accumulo instance id is " + context.getInstanceID());
-      if (!opts.servers.contains("/"))
+      if (!opts.servers.contains("/")) {
         opts.servers += "/accumulo/" + context.getInstanceID();
+      }
       org.apache.zookeeper.ZooKeeperMain
           .main(new String[] {"-server", opts.servers, "-timeout", "" + (opts.timeout * 1000)});
     }
diff --git a/server/base/src/main/java/org/apache/accumulo/server/util/ZooZap.java b/server/base/src/main/java/org/apache/accumulo/server/util/ZooZap.java
index 05ae191..eb98192 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/util/ZooZap.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/util/ZooZap.java
@@ -42,8 +42,9 @@
   private static final Logger log = LoggerFactory.getLogger(ZooZap.class);
 
   private static void message(String msg, Opts opts) {
-    if (opts.verbose)
+    if (opts.verbose) {
       System.out.println(msg);
+    }
   }
 
   static class Opts extends Help {
@@ -66,7 +67,7 @@
       return;
     }
 
-    SiteConfiguration siteConf = new SiteConfiguration();
+    var siteConf = SiteConfiguration.auto();
     Configuration hadoopConf = new Configuration();
     // Login as the server on secure HDFS
     if (siteConf.getBoolean(Property.INSTANCE_RPC_SASL_ENABLED)) {
@@ -95,9 +96,9 @@
         for (String child : children) {
           message("Deleting " + tserversPath + "/" + child + " from zookeeper", opts);
 
-          if (opts.zapMaster)
+          if (opts.zapMaster) {
             zoo.recursiveDelete(tserversPath + "/" + child, NodeMissingPolicy.SKIP);
-          else {
+          } else {
             String path = tserversPath + "/" + child;
             if (zoo.getChildren(path).size() > 0) {
               if (!ZooLock.deleteLock(zoo, path, "tserver")) {
diff --git a/server/base/src/test/java/org/apache/accumulo/server/conf/NamespaceConfigurationTest.java b/server/base/src/test/java/org/apache/accumulo/server/conf/NamespaceConfigurationTest.java
index 7f023e7..1b53027 100644
--- a/server/base/src/test/java/org/apache/accumulo/server/conf/NamespaceConfigurationTest.java
+++ b/server/base/src/test/java/org/apache/accumulo/server/conf/NamespaceConfigurationTest.java
@@ -17,7 +17,6 @@
 package org.apache.accumulo.server.conf;
 
 import static java.nio.charset.StandardCharsets.UTF_8;
-import static org.easymock.EasyMock.anyObject;
 import static org.easymock.EasyMock.createMock;
 import static org.easymock.EasyMock.eq;
 import static org.easymock.EasyMock.expect;
@@ -25,9 +24,7 @@
 import static org.easymock.EasyMock.verify;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertNull;
-import static org.junit.Assert.assertTrue;
 
-import java.util.Collection;
 import java.util.List;
 import java.util.Map;
 import java.util.Properties;
@@ -37,7 +34,6 @@
 import org.apache.accumulo.core.Constants;
 import org.apache.accumulo.core.clientImpl.Namespace;
 import org.apache.accumulo.core.conf.AccumuloConfiguration;
-import org.apache.accumulo.core.conf.ConfigurationObserver;
 import org.apache.accumulo.core.conf.Property;
 import org.apache.accumulo.core.data.NamespaceId;
 import org.apache.accumulo.fate.zookeeper.ZooCache;
@@ -78,8 +74,7 @@
     c.setZooCacheFactory(zcf);
 
     zc = createMock(ZooCache.class);
-    expect(zcf.getZooCache(eq(ZOOKEEPERS), eq(ZK_SESSION_TIMEOUT),
-        anyObject(NamespaceConfWatcher.class))).andReturn(zc);
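+    // no watcher argument is expected now that ConfigurationObserver support is removed
+    // (see the deleted testObserver below)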
+    expect(zcf.getZooCache(eq(ZOOKEEPERS), eq(ZK_SESSION_TIMEOUT))).andReturn(zc);
     replay(zcf);
   }
 
@@ -144,18 +139,6 @@
   }
 
   @Test
-  public void testObserver() {
-    ConfigurationObserver o = createMock(ConfigurationObserver.class);
-    c.addObserver(o);
-    Collection<ConfigurationObserver> os = c.getObservers();
-    assertEquals(1, os.size());
-    assertTrue(os.contains(o));
-    c.removeObserver(o);
-    os = c.getObservers();
-    assertEquals(0, os.size());
-  }
-
-  @Test
   public void testInvalidateCache() {
     // need to do a get so the accessor is created
     Property p = Property.INSTANCE_SECRET;
diff --git a/server/base/src/test/java/org/apache/accumulo/server/conf/ServerConfigurationFactoryTest.java b/server/base/src/test/java/org/apache/accumulo/server/conf/ServerConfigurationFactoryTest.java
index 8f29544..7a4f90a 100644
--- a/server/base/src/test/java/org/apache/accumulo/server/conf/ServerConfigurationFactoryTest.java
+++ b/server/base/src/test/java/org/apache/accumulo/server/conf/ServerConfigurationFactoryTest.java
@@ -37,6 +37,7 @@
 import org.apache.accumulo.fate.zookeeper.ZooCache;
 import org.apache.accumulo.fate.zookeeper.ZooCacheFactory;
 import org.apache.accumulo.server.ServerContext;
+import org.easymock.EasyMock;
 import org.junit.After;
 import org.junit.Before;
 import org.junit.BeforeClass;
@@ -50,14 +51,13 @@
   // use the same mock ZooCacheFactory and ZooCache for all tests
   private static ZooCacheFactory zcf;
   private static ZooCache zc;
-  private static SiteConfiguration siteConfig = new SiteConfiguration();
+  private static SiteConfiguration siteConfig = SiteConfiguration.auto();
 
   @BeforeClass
   public static void setUpClass() {
     zcf = createMock(ZooCacheFactory.class);
     zc = createMock(ZooCache.class);
-    expect(zcf.getZooCache(eq(ZK_HOST), eq(ZK_TIMEOUT), anyObject(NamespaceConfWatcher.class)))
-        .andReturn(zc);
+    expect(zcf.getZooCache(eq(ZK_HOST), eq(ZK_TIMEOUT), EasyMock.anyObject())).andReturn(zc);
     expectLastCall().anyTimes();
     expect(zcf.getZooCache(ZK_HOST, ZK_TIMEOUT)).andReturn(zc);
     expectLastCall().anyTimes();
@@ -107,7 +107,7 @@
   @Test
   public void testGetSiteConfiguration() {
     ready();
-    SiteConfiguration c = scf.getSiteConfiguration();
+    var c = scf.getSiteConfiguration();
     assertNotNull(c);
   }
 
diff --git a/server/base/src/test/java/org/apache/accumulo/server/conf/TableConfigurationTest.java b/server/base/src/test/java/org/apache/accumulo/server/conf/TableConfigurationTest.java
index af932aa..aaa0a95 100644
--- a/server/base/src/test/java/org/apache/accumulo/server/conf/TableConfigurationTest.java
+++ b/server/base/src/test/java/org/apache/accumulo/server/conf/TableConfigurationTest.java
@@ -17,16 +17,13 @@
 package org.apache.accumulo.server.conf;
 
 import static java.nio.charset.StandardCharsets.UTF_8;
-import static org.easymock.EasyMock.anyObject;
 import static org.easymock.EasyMock.createMock;
 import static org.easymock.EasyMock.eq;
 import static org.easymock.EasyMock.expect;
 import static org.easymock.EasyMock.replay;
 import static org.easymock.EasyMock.verify;
 import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertTrue;
 
-import java.util.Collection;
 import java.util.List;
 import java.util.Map;
 import java.util.Properties;
@@ -34,7 +31,6 @@
 import java.util.function.Predicate;
 
 import org.apache.accumulo.core.Constants;
-import org.apache.accumulo.core.conf.ConfigurationObserver;
 import org.apache.accumulo.core.conf.Property;
 import org.apache.accumulo.core.data.TableId;
 import org.apache.accumulo.fate.zookeeper.ZooCache;
@@ -75,9 +71,7 @@
     c.setZooCacheFactory(zcf);
 
     zc = createMock(ZooCache.class);
-    expect(
-        zcf.getZooCache(eq(ZOOKEEPERS), eq(ZK_SESSION_TIMEOUT), anyObject(TableConfWatcher.class)))
-            .andReturn(zc);
+    expect(zcf.getZooCache(eq(ZOOKEEPERS), eq(ZK_SESSION_TIMEOUT))).andReturn(zc);
     replay(zcf);
   }
 
@@ -132,18 +126,6 @@
   }
 
   @Test
-  public void testObserver() {
-    ConfigurationObserver o = createMock(ConfigurationObserver.class);
-    c.addObserver(o);
-    Collection<ConfigurationObserver> os = c.getObservers();
-    assertEquals(1, os.size());
-    assertTrue(os.contains(o));
-    c.removeObserver(o);
-    os = c.getObservers();
-    assertEquals(0, os.size());
-  }
-
-  @Test
   public void testInvalidateCache() {
     // need to do a get so the accessor is created
     Property p = Property.INSTANCE_SECRET;
diff --git a/server/base/src/test/java/org/apache/accumulo/server/fs/VolumeManagerImplTest.java b/server/base/src/test/java/org/apache/accumulo/server/fs/VolumeManagerImplTest.java
index ac98ace..ab8d22f 100644
--- a/server/base/src/test/java/org/apache/accumulo/server/fs/VolumeManagerImplTest.java
+++ b/server/base/src/test/java/org/apache/accumulo/server/fs/VolumeManagerImplTest.java
@@ -26,7 +26,6 @@
 import org.apache.accumulo.core.conf.Property;
 import org.apache.accumulo.core.data.TableId;
 import org.apache.accumulo.server.fs.VolumeManager.FileType;
-import org.apache.commons.lang3.StringUtils;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.Path;
 import org.junit.Before;
@@ -76,7 +75,7 @@
     List<String> volumes = Arrays.asList("file://one/", "file://two/", "file://three/");
     ConfigurationCopy conf = new ConfigurationCopy();
     conf.set(INSTANCE_DFS_URI, volumes.get(0));
-    conf.set(Property.INSTANCE_VOLUMES, StringUtils.join(volumes, ","));
+    conf.set(Property.INSTANCE_VOLUMES, String.join(",", volumes));
     conf.set(Property.GENERAL_VOLUME_CHOOSER,
         "org.apache.accumulo.server.fs.ChooserThatDoesntExist");
     thrown.expect(RuntimeException.class);
@@ -133,7 +132,7 @@
     List<String> volumes = Arrays.asList("file://one/", "file://two/", "file://three/");
     ConfigurationCopy conf = new ConfigurationCopy();
     conf.set(INSTANCE_DFS_URI, volumes.get(0));
-    conf.set(Property.INSTANCE_VOLUMES, StringUtils.join(volumes, ","));
+    conf.set(Property.INSTANCE_VOLUMES, String.join(",", volumes));
     conf.set(Property.GENERAL_VOLUME_CHOOSER, WrongVolumeChooser.class.getName());
     thrown.expect(RuntimeException.class);
     VolumeManager vm = VolumeManagerImpl.get(conf, hadoopConf);
diff --git a/server/base/src/test/java/org/apache/accumulo/server/master/balancer/BaseHostRegexTableLoadBalancerTest.java b/server/base/src/test/java/org/apache/accumulo/server/master/balancer/BaseHostRegexTableLoadBalancerTest.java
index f53cd62..69fab92 100644
--- a/server/base/src/test/java/org/apache/accumulo/server/master/balancer/BaseHostRegexTableLoadBalancerTest.java
+++ b/server/base/src/test/java/org/apache/accumulo/server/master/balancer/BaseHostRegexTableLoadBalancerTest.java
@@ -23,7 +23,6 @@
 import java.util.HashMap;
 import java.util.List;
 import java.util.Map;
-import java.util.Map.Entry;
 import java.util.Properties;
 import java.util.SortedMap;
 import java.util.SortedSet;
@@ -89,20 +88,22 @@
         TestDefaultBalancer.class.getName());
   }
 
-  private static SiteConfiguration siteConfg = new SiteConfiguration();
+  private static SiteConfiguration siteConfg = SiteConfiguration.auto();
 
   protected static class TestServerConfigurationFactory extends ServerConfigurationFactory {
 
     final ServerContext context;
+    private ConfigurationCopy config;
 
     public TestServerConfigurationFactory(ServerContext context) {
       super(context, siteConfg);
       this.context = context;
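+      // a single mutable copy of the defaults so tests can change properties on the live config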
+      this.config = new ConfigurationCopy(DEFAULT_TABLE_PROPERTIES);
     }
 
     @Override
     public synchronized AccumuloConfiguration getSystemConfiguration() {
-      return new ConfigurationCopy(DEFAULT_TABLE_PROPERTIES);
+      return config;
     }
 
     @Override
@@ -114,16 +115,12 @@
       return new TableConfiguration(context, tableId, dummyConf) {
         @Override
         public String get(Property property) {
-          return DEFAULT_TABLE_PROPERTIES.get(property.name());
+          return getSystemConfiguration().get(property.name());
         }
 
         @Override
         public void getProperties(Map<String,String> props, Predicate<String> filter) {
-          for (Entry<String,String> e : DEFAULT_TABLE_PROPERTIES.entrySet()) {
-            if (filter.test(e.getKey())) {
-              props.put(e.getKey(), e.getValue());
-            }
-          }
+          getSystemConfiguration().getProperties(props, filter);
         }
 
         @Override
@@ -256,10 +253,11 @@
         && (host.equals("192.168.0.6") || host.equals("192.168.0.7") || host.equals("192.168.0.8")
             || host.equals("192.168.0.9") || host.equals("192.168.0.10"))) {
       return true;
-    } else
+    } else {
       return tid.equals("3") && (host.equals("192.168.0.11") || host.equals("192.168.0.12")
           || host.equals("192.168.0.13") || host.equals("192.168.0.14")
           || host.equals("192.168.0.15"));
+    }
   }
 
   protected String idToTableName(TableId id) {
diff --git a/server/base/src/test/java/org/apache/accumulo/server/master/balancer/HostRegexTableLoadBalancerReconfigurationTest.java b/server/base/src/test/java/org/apache/accumulo/server/master/balancer/HostRegexTableLoadBalancerReconfigurationTest.java
index 40a28c1..105b6d5 100644
--- a/server/base/src/test/java/org/apache/accumulo/server/master/balancer/HostRegexTableLoadBalancerReconfigurationTest.java
+++ b/server/base/src/test/java/org/apache/accumulo/server/master/balancer/HostRegexTableLoadBalancerReconfigurationTest.java
@@ -31,6 +31,7 @@
 import java.util.Map.Entry;
 import java.util.Set;
 
+import org.apache.accumulo.core.conf.ConfigurationCopy;
 import org.apache.accumulo.core.data.TableId;
 import org.apache.accumulo.core.dataImpl.KeyExtent;
 import org.apache.accumulo.core.tabletserver.thrift.TabletStats;
@@ -89,9 +90,10 @@
     this.balance(Collections.unmodifiableSortedMap(allTabletServers), migrations, migrationsOut);
     assertEquals(0, migrationsOut.size());
     // Change property, simulate call by TableConfWatcher
-    DEFAULT_TABLE_PROPERTIES
-        .put(HostRegexTableLoadBalancer.HOST_BALANCER_PREFIX + BAR.getTableName(), "r01.*");
-    this.propertiesChanged();
+
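+    // set the property on the factory's shared ConfigurationCopy; the balancer reads from it
+    // directly, so no explicit propertiesChanged() notification is required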
+    ((ConfigurationCopy) factory.getSystemConfiguration())
+        .set(HostRegexTableLoadBalancer.HOST_BALANCER_PREFIX + BAR.getTableName(), "r01.*");
+
     // Wait to trigger the out of bounds check and the repool check
     UtilWaitThread.sleep(10000);
     this.balance(Collections.unmodifiableSortedMap(allTabletServers), migrations, migrationsOut);
diff --git a/server/base/src/test/java/org/apache/accumulo/server/master/balancer/HostRegexTableLoadBalancerTest.java b/server/base/src/test/java/org/apache/accumulo/server/master/balancer/HostRegexTableLoadBalancerTest.java
index ab9cad7..7d3dd41 100644
--- a/server/base/src/test/java/org/apache/accumulo/server/master/balancer/HostRegexTableLoadBalancerTest.java
+++ b/server/base/src/test/java/org/apache/accumulo/server/master/balancer/HostRegexTableLoadBalancerTest.java
@@ -33,14 +33,10 @@
 import java.util.Map.Entry;
 import java.util.Set;
 import java.util.SortedMap;
-import java.util.function.Predicate;
 import java.util.regex.Pattern;
 
-import org.apache.accumulo.core.clientImpl.Namespace;
 import org.apache.accumulo.core.conf.AccumuloConfiguration;
 import org.apache.accumulo.core.conf.ConfigurationCopy;
-import org.apache.accumulo.core.conf.DefaultConfiguration;
-import org.apache.accumulo.core.conf.Property;
 import org.apache.accumulo.core.data.TableId;
 import org.apache.accumulo.core.dataImpl.KeyExtent;
 import org.apache.accumulo.core.dataImpl.thrift.TKeyExtent;
@@ -48,9 +44,7 @@
 import org.apache.accumulo.core.tabletserver.thrift.TabletStats;
 import org.apache.accumulo.fate.util.UtilWaitThread;
 import org.apache.accumulo.server.ServerContext;
-import org.apache.accumulo.server.conf.NamespaceConfiguration;
 import org.apache.accumulo.server.conf.ServerConfigurationFactory;
-import org.apache.accumulo.server.conf.TableConfiguration;
 import org.apache.accumulo.server.master.state.TServerInstance;
 import org.apache.accumulo.server.master.state.TabletMigration;
 import org.junit.Test;
@@ -84,15 +78,6 @@
     assertEquals(Pattern.compile("r01.*").pattern(), patterns.get(FOO.getTableName()).pattern());
     assertTrue(patterns.containsKey(BAR.getTableName()));
     assertEquals(Pattern.compile("r02.*").pattern(), patterns.get(BAR.getTableName()).pattern());
-    Map<TableId,String> tids = this.getTableIdToTableName();
-    assertEquals(3, tids.size());
-    assertTrue(tids.containsKey(FOO.getId()));
-    assertEquals(FOO.getTableName(), tids.get(FOO.getId()));
-    assertTrue(tids.containsKey(BAR.getId()));
-    assertEquals(BAR.getTableName(), tids.get(BAR.getId()));
-    assertTrue(tids.containsKey(BAZ.getId()));
-    assertEquals(BAZ.getTableName(), tids.get(BAZ.getId()));
-    assertFalse(this.isIpBasedRegex());
   }
 
   @Test
@@ -193,40 +178,13 @@
     ServerContext context = createMockContext();
     replay(context);
     initFactory(new TestServerConfigurationFactory(context) {
-
       @Override
-      public TableConfiguration getTableConfiguration(TableId tableId) {
-        NamespaceConfiguration defaultConf = new NamespaceConfiguration(Namespace.DEFAULT.id(),
-            this.context, DefaultConfiguration.getInstance());
-        return new TableConfiguration(this.context, tableId, defaultConf) {
-          HashMap<String,String> tableProperties = new HashMap<>();
-          {
-            tableProperties
-                .put(HostRegexTableLoadBalancer.HOST_BALANCER_PREFIX + FOO.getTableName(), "r.*");
-            tableProperties.put(
-                HostRegexTableLoadBalancer.HOST_BALANCER_PREFIX + BAR.getTableName(),
-                "r01.*|r02.*");
-          }
-
-          @Override
-          public String get(Property property) {
-            return tableProperties.get(property.name());
-          }
-
-          @Override
-          public void getProperties(Map<String,String> props, Predicate<String> filter) {
-            for (Entry<String,String> e : tableProperties.entrySet()) {
-              if (filter.test(e.getKey())) {
-                props.put(e.getKey(), e.getValue());
-              }
-            }
-          }
-
-          @Override
-          public long getUpdateCount() {
-            return 0;
-          }
-        };
+      public synchronized AccumuloConfiguration getSystemConfiguration() {
+        HashMap<String,String> props = new HashMap<>(DEFAULT_TABLE_PROPERTIES);
+        props.put(HostRegexTableLoadBalancer.HOST_BALANCER_PREFIX + FOO.getTableName(), "r.*");
+        props.put(HostRegexTableLoadBalancer.HOST_BALANCER_PREFIX + BAR.getTableName(),
+            "r01.*|r02.*");
+        return new ConfigurationCopy(props);
       }
     });
     Map<String,SortedMap<TServerInstance,TabletServerStatus>> groups =
@@ -281,44 +239,12 @@
         HashMap<String,String> props = new HashMap<>();
         props.put(HostRegexTableLoadBalancer.HOST_BALANCER_OOB_CHECK_KEY, "30s");
         props.put(HostRegexTableLoadBalancer.HOST_BALANCER_REGEX_USING_IPS_KEY, "true");
+        props.put(HostRegexTableLoadBalancer.HOST_BALANCER_PREFIX + FOO.getTableName(),
+            "192\\.168\\.0\\.[1-5]");
+        props.put(HostRegexTableLoadBalancer.HOST_BALANCER_PREFIX + BAR.getTableName(),
+            "192\\.168\\.0\\.[6-9]|192\\.168\\.0\\.10");
         return new ConfigurationCopy(props);
       }
-
-      @Override
-      public TableConfiguration getTableConfiguration(TableId tableId) {
-        NamespaceConfiguration defaultConf = new NamespaceConfiguration(Namespace.DEFAULT.id(),
-            this.context, DefaultConfiguration.getInstance());
-        return new TableConfiguration(context, tableId, defaultConf) {
-          HashMap<String,String> tableProperties = new HashMap<>();
-          {
-            tableProperties.put(
-                HostRegexTableLoadBalancer.HOST_BALANCER_PREFIX + FOO.getTableName(),
-                "192\\.168\\.0\\.[1-5]");
-            tableProperties.put(
-                HostRegexTableLoadBalancer.HOST_BALANCER_PREFIX + BAR.getTableName(),
-                "192\\.168\\.0\\.[6-9]|192\\.168\\.0\\.10");
-          }
-
-          @Override
-          public String get(Property property) {
-            return tableProperties.get(property.name());
-          }
-
-          @Override
-          public void getProperties(Map<String,String> props, Predicate<String> filter) {
-            for (Entry<String,String> e : tableProperties.entrySet()) {
-              if (filter.test(e.getKey())) {
-                props.put(e.getKey(), e.getValue());
-              }
-            }
-          }
-
-          @Override
-          public long getUpdateCount() {
-            return 0;
-          }
-        };
-      }
     });
     assertTrue(isIpBasedRegex());
     Map<String,SortedMap<TServerInstance,TabletServerStatus>> groups =
diff --git a/server/base/src/test/java/org/apache/accumulo/server/master/balancer/TableLoadBalancerTest.java b/server/base/src/test/java/org/apache/accumulo/server/master/balancer/TableLoadBalancerTest.java
index cdff501..d5616ca 100644
--- a/server/base/src/test/java/org/apache/accumulo/server/master/balancer/TableLoadBalancerTest.java
+++ b/server/base/src/test/java/org/apache/accumulo/server/master/balancer/TableLoadBalancerTest.java
@@ -50,12 +50,9 @@
 import org.easymock.EasyMock;
 import org.junit.Test;
 
-import com.google.common.collect.ImmutableMap;
-
 public class TableLoadBalancerTest {
 
-  private static Map<String,String> TABLE_ID_MAP =
-      ImmutableMap.of("t1", "a1", "t2", "b12", "t3", "c4");
+  private static Map<String,String> TABLE_ID_MAP = Map.of("t1", "a1", "t2", "b12", "t3", "c4");
 
   private static TServerInstance mkts(String address, String session) {
     return new TServerInstance(HostAndPort.fromParts(address, 1234), session);
@@ -151,7 +148,7 @@
     final ServerContext context = createMockContext();
     replay(context);
     ServerConfigurationFactory confFactory =
-        new ServerConfigurationFactory(context, new SiteConfiguration()) {
+        new ServerConfigurationFactory(context, SiteConfiguration.auto()) {
           @Override
           public TableConfiguration getTableConfiguration(TableId tableId) {
             // create a dummy namespaceConfiguration to satisfy requireNonNull in TableConfiguration
@@ -194,8 +191,9 @@
     movedByTable.put(TableId.of(t2Id), 0);
     movedByTable.put(TableId.of(t3Id), 0);
     for (TabletMigration migration : migrationsOut) {
-      if (migration.oldServer.equals(svr))
+      if (migration.oldServer.equals(svr)) {
         count++;
+      }
       TableId key = migration.tablet.getTableId();
       movedByTable.put(key, movedByTable.get(key) + 1);
     }
diff --git a/server/base/src/test/java/org/apache/accumulo/server/security/SystemCredentialsTest.java b/server/base/src/test/java/org/apache/accumulo/server/security/SystemCredentialsTest.java
index a419840..5a2d7a4 100644
--- a/server/base/src/test/java/org/apache/accumulo/server/security/SystemCredentialsTest.java
+++ b/server/base/src/test/java/org/apache/accumulo/server/security/SystemCredentialsTest.java
@@ -42,7 +42,7 @@
   @Rule
   public TestName test = new TestName();
 
-  private static SiteConfiguration siteConfig = new SiteConfiguration();
+  private static SiteConfiguration siteConfig = SiteConfiguration.auto();
   private String instanceId =
       UUID.nameUUIDFromBytes(new byte[] {1, 2, 3, 4, 5, 6, 7, 8, 9, 0}).toString();
 
diff --git a/server/base/src/test/java/org/apache/accumulo/server/security/UserImpersonationTest.java b/server/base/src/test/java/org/apache/accumulo/server/security/UserImpersonationTest.java
index 5af844c..bd2e0c6 100644
--- a/server/base/src/test/java/org/apache/accumulo/server/security/UserImpersonationTest.java
+++ b/server/base/src/test/java/org/apache/accumulo/server/security/UserImpersonationTest.java
@@ -25,7 +25,6 @@
 
 import java.util.HashMap;
 import java.util.Map;
-import java.util.Map.Entry;
 import java.util.function.Predicate;
 
 import org.apache.accumulo.core.conf.AccumuloConfiguration;
@@ -38,7 +37,6 @@
 import org.junit.Test;
 
 import com.google.common.base.Joiner;
-import com.google.common.collect.ImmutableMap;
 
 public class UserImpersonationTest {
 
@@ -67,18 +65,21 @@
     };
   }
 
-  void setValidHosts(String... hosts) {
+  private void setValidHosts(String... hosts) {
     cc.set(Property.INSTANCE_RPC_SASL_ALLOWED_HOST_IMPERSONATION.getKey(),
         Joiner.on(';').join(hosts));
   }
 
-  void setValidUsers(Map<String,String> remoteToAllowedUsers) {
+  // preserve order
+  private void setValidUsers(String... remoteToAllowedUsers) {
+    // make sure args come in pairs (even), mapping remote servers to corresponding users
+    assertEquals(0, remoteToAllowedUsers.length % 2);
     StringBuilder sb = new StringBuilder();
-    for (Entry<String,String> entry : remoteToAllowedUsers.entrySet()) {
+    for (int v = 1; v < remoteToAllowedUsers.length; v += 2) {
       if (sb.length() > 0) {
         sb.append(";");
       }
-      sb.append(entry.getKey()).append(":").append(entry.getValue());
+      sb.append(remoteToAllowedUsers[v - 1]).append(":").append(remoteToAllowedUsers[v]);
     }
     cc.set(Property.INSTANCE_RPC_SASL_ALLOWED_USER_IMPERSONATION, sb.toString());
   }
@@ -87,7 +88,7 @@
   public void testAnyUserAndHosts() {
     String server = "server";
     setValidHosts("*");
-    setValidUsers(ImmutableMap.of(server, "*"));
+    setValidUsers(server, "*");
     UserImpersonation impersonation = new UserImpersonation(conf);
 
     UsersWithHosts uwh = impersonation.get(server);
@@ -103,7 +104,7 @@
   @Test
   public void testNoHostByDefault() {
     String server = "server";
-    setValidUsers(ImmutableMap.of(server, "*"));
+    setValidUsers(server, "*");
     UserImpersonation impersonation = new UserImpersonation(conf);
 
     UsersWithHosts uwh = impersonation.get(server);
@@ -130,7 +131,7 @@
   public void testSingleUserAndHost() {
     String server = "server", host = "single_host.domain.com", client = "single_client";
     setValidHosts(host);
-    setValidUsers(ImmutableMap.of(server, client));
+    setValidUsers(server, client);
     UserImpersonation impersonation = new UserImpersonation(conf);
 
     UsersWithHosts uwh = impersonation.get(server);
@@ -153,7 +154,7 @@
   public void testMultipleExplicitUsers() {
     String server = "server", client1 = "client1", client2 = "client2", client3 = "client3";
     setValidHosts("*");
-    setValidUsers(ImmutableMap.of(server, Joiner.on(',').join(client1, client2, client3)));
+    setValidUsers(server, Joiner.on(',').join(client1, client2, client3));
     UserImpersonation impersonation = new UserImpersonation(conf);
 
     UsersWithHosts uwh = impersonation.get(server);
@@ -175,7 +176,7 @@
   public void testMultipleExplicitHosts() {
     String server = "server", host1 = "host1", host2 = "host2", host3 = "host3";
     setValidHosts(Joiner.on(',').join(host1, host2, host3));
-    setValidUsers(ImmutableMap.of(server, "*"));
+    setValidUsers(server, "*");
     UserImpersonation impersonation = new UserImpersonation(conf);
 
     UsersWithHosts uwh = impersonation.get(server);
@@ -198,7 +199,7 @@
     String server = "server", host1 = "host1", host2 = "host2", host3 = "host3",
         client1 = "client1", client2 = "client2", client3 = "client3";
     setValidHosts(Joiner.on(',').join(host1, host2, host3));
-    setValidUsers(ImmutableMap.of(server, Joiner.on(',').join(client1, client2, client3)));
+    setValidUsers(server, Joiner.on(',').join(client1, client2, client3));
     UserImpersonation impersonation = new UserImpersonation(conf);
 
     UsersWithHosts uwh = impersonation.get(server);
@@ -228,8 +229,7 @@
     // server1 can impersonate client1 and client2 from host1 or host2
     // server2 can impersonate only client3 from host3
     setValidHosts(Joiner.on(',').join(host1, host2), host3);
-    setValidUsers(
-        ImmutableMap.of(server1, Joiner.on(',').join(client1, client2), server2, client3));
+    setValidUsers(server1, Joiner.on(',').join(client1, client2), server2, client3);
     UserImpersonation impersonation = new UserImpersonation(conf);
 
     UsersWithHosts uwh = impersonation.get(server1);
diff --git a/server/base/src/test/java/org/apache/accumulo/server/tablets/LogicalTimeTest.java b/server/base/src/test/java/org/apache/accumulo/server/tablets/LogicalTimeTest.java
index defb7da..ae560e3 100644
--- a/server/base/src/test/java/org/apache/accumulo/server/tablets/LogicalTimeTest.java
+++ b/server/base/src/test/java/org/apache/accumulo/server/tablets/LogicalTimeTest.java
@@ -24,6 +24,7 @@
 import java.util.List;
 
 import org.apache.accumulo.core.data.Mutation;
+import org.apache.accumulo.core.metadata.schema.MetadataTime;
 import org.apache.accumulo.server.data.ServerMutation;
 import org.apache.accumulo.server.tablets.TabletTime.LogicalTime;
 import org.junit.Before;
@@ -35,24 +36,25 @@
 
   @Before
   public void setUp() {
-    ltime = (LogicalTime) TabletTime.getInstance("L1234");
+    MetadataTime mTime = MetadataTime.parse("L1234");
+    ltime = (LogicalTime) TabletTime.getInstance(mTime);
   }
 
   @Test
   public void testGetMetadataValue() {
-    assertEquals("L1234", ltime.getMetadataValue());
+    assertEquals("L1234", ltime.getMetadataTime().encode());
   }
 
   @Test
   public void testUseMaxTimeFromWALog_Update() {
     ltime.useMaxTimeFromWALog(5678L);
-    assertEquals("L5678", ltime.getMetadataValue());
+    assertEquals("L5678", ltime.getMetadataTime().encode());
   }
 
   @Test
   public void testUseMaxTimeFromWALog_NoUpdate() {
     ltime.useMaxTimeFromWALog(0L);
-    assertEquals("L1234", ltime.getMetadataValue());
+    assertEquals("L1234", ltime.getMetadataTime().encode());
   }
 
   @Test
diff --git a/server/base/src/test/java/org/apache/accumulo/server/tablets/MillisTimeTest.java b/server/base/src/test/java/org/apache/accumulo/server/tablets/MillisTimeTest.java
index 1fee2a0..272cb85 100644
--- a/server/base/src/test/java/org/apache/accumulo/server/tablets/MillisTimeTest.java
+++ b/server/base/src/test/java/org/apache/accumulo/server/tablets/MillisTimeTest.java
@@ -42,19 +42,19 @@
 
   @Test
   public void testGetMetadataValue() {
-    assertEquals("M1234", mtime.getMetadataValue());
+    assertEquals("M1234", mtime.getMetadataTime().encode());
   }
 
   @Test
   public void testUseMaxTimeFromWALog_Yes() {
     mtime.useMaxTimeFromWALog(5678L);
-    assertEquals("M5678", mtime.getMetadataValue());
+    assertEquals("M5678", mtime.getMetadataTime().encode());
   }
 
   @Test
   public void testUseMaxTimeFromWALog_No() {
     mtime.useMaxTimeFromWALog(0L);
-    assertEquals("M1234", mtime.getMetadataValue());
+    assertEquals("M1234", mtime.getMetadataTime().encode());
   }
 
   @Test
diff --git a/server/base/src/test/java/org/apache/accumulo/server/tablets/TabletTimeTest.java b/server/base/src/test/java/org/apache/accumulo/server/tablets/TabletTimeTest.java
index 1bbf7c2..60155ab 100644
--- a/server/base/src/test/java/org/apache/accumulo/server/tablets/TabletTimeTest.java
+++ b/server/base/src/test/java/org/apache/accumulo/server/tablets/TabletTimeTest.java
@@ -23,6 +23,7 @@
 import static org.junit.Assert.assertNull;
 
 import org.apache.accumulo.core.client.admin.TimeType;
+import org.apache.accumulo.core.metadata.schema.MetadataTime;
 import org.apache.accumulo.server.data.ServerMutation;
 import org.apache.accumulo.server.tablets.TabletTime.LogicalTime;
 import org.apache.accumulo.server.tablets.TabletTime.MillisTime;
@@ -32,6 +33,10 @@
 public class TabletTimeTest {
   private static final long TIME = 1234L;
   private MillisTime mtime;
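+  // typed MetadataTime fixtures replace the raw "M1234"/"L5678" strings parsed previously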
+  private static final MetadataTime m1234 = new MetadataTime(1234, TimeType.MILLIS);
+  private static final MetadataTime m5678 = new MetadataTime(5678, TimeType.MILLIS);
+  private static final MetadataTime l1234 = new MetadataTime(1234, TimeType.LOGICAL);
+  private static final MetadataTime l5678 = new MetadataTime(5678, TimeType.LOGICAL);
 
   @Before
   public void setUp() {
@@ -39,12 +44,6 @@
   }
 
   @Test
-  public void testGetTimeID() {
-    assertEquals('L', TabletTime.getTimeID(TimeType.LOGICAL));
-    assertEquals('M', TabletTime.getTimeID(TimeType.MILLIS));
-  }
-
-  @Test
   public void testSetSystemTimes() {
     ServerMutation m = createMock(ServerMutation.class);
     long lastCommitTime = 1234L;
@@ -56,101 +55,58 @@
 
   @Test
   public void testGetInstance_Logical() {
-    TabletTime t = TabletTime.getInstance("L1234");
+    TabletTime t = TabletTime.getInstance(new MetadataTime(1234, TimeType.LOGICAL));
     assertEquals(LogicalTime.class, t.getClass());
-    assertEquals("L1234", t.getMetadataValue());
+    assertEquals("L1234", t.getMetadataTime().encode());
   }
 
   @Test
   public void testGetInstance_Millis() {
-    TabletTime t = TabletTime.getInstance("M1234");
+    TabletTime t = TabletTime.getInstance(new MetadataTime(1234, TimeType.MILLIS));
     assertEquals(MillisTime.class, t.getClass());
-    assertEquals("M1234", t.getMetadataValue());
-  }
-
-  @Test(expected = IllegalArgumentException.class)
-  public void testGetInstance_InvalidType() {
-    TabletTime.getInstance("X1234");
-  }
-
-  @Test(expected = NumberFormatException.class)
-  public void testGetInstance_Logical_ParseFailure() {
-    TabletTime.getInstance("LABCD");
-  }
-
-  @Test(expected = NumberFormatException.class)
-  public void testGetInstance_Millis_ParseFailure() {
-    TabletTime.getInstance("MABCD");
+    assertEquals("M1234", t.getMetadataTime().encode());
   }
 
   @Test
   public void testMaxMetadataTime_Logical() {
-    assertEquals("L5678", TabletTime.maxMetadataTime("L1234", "L5678"));
-    assertEquals("L5678", TabletTime.maxMetadataTime("L5678", "L1234"));
-    assertEquals("L5678", TabletTime.maxMetadataTime("L5678", "L5678"));
+    assertEquals(l5678, TabletTime.maxMetadataTime(l1234, l5678));
+    assertEquals(l5678, TabletTime.maxMetadataTime(l5678, l1234));
+    assertEquals(l5678, TabletTime.maxMetadataTime(l5678, l5678));
   }
 
   @Test
   public void testMaxMetadataTime_Millis() {
-    assertEquals("M5678", TabletTime.maxMetadataTime("M1234", "M5678"));
-    assertEquals("M5678", TabletTime.maxMetadataTime("M5678", "M1234"));
-    assertEquals("M5678", TabletTime.maxMetadataTime("M5678", "M5678"));
+    assertEquals(m5678, TabletTime.maxMetadataTime(m1234, m5678));
+    assertEquals(m5678, TabletTime.maxMetadataTime(m5678, m1234));
+    assertEquals(m5678, TabletTime.maxMetadataTime(m5678, m5678));
   }
 
   @Test
   public void testMaxMetadataTime_Null1() {
-    assertEquals("L5678", TabletTime.maxMetadataTime(null, "L5678"));
-    assertEquals("M5678", TabletTime.maxMetadataTime(null, "M5678"));
+    assertEquals(l5678, TabletTime.maxMetadataTime(null, l5678));
+    assertEquals(m5678, TabletTime.maxMetadataTime(null, m5678));
   }
 
   @Test
   public void testMaxMetadataTime_Null2() {
-    assertEquals("L5678", TabletTime.maxMetadataTime("L5678", null));
-    assertEquals("M5678", TabletTime.maxMetadataTime("M5678", null));
+    assertEquals(l5678, TabletTime.maxMetadataTime(l5678, null));
+    assertEquals(m5678, TabletTime.maxMetadataTime(m5678, null));
   }
 
   @Test
   public void testMaxMetadataTime_Null3() {
-    assertNull(TabletTime.maxMetadataTime(null, null));
-  }
-
-  @Test(expected = IllegalArgumentException.class)
-  public void testMaxMetadataTime_Null1_Invalid() {
-    TabletTime.maxMetadataTime(null, "X5678");
-  }
-
-  @Test(expected = IllegalArgumentException.class)
-  public void testMaxMetadataTime_Null2_Invalid() {
-    TabletTime.maxMetadataTime("X5678", null);
-  }
-
-  @Test(expected = IllegalArgumentException.class)
-  public void testMaxMetadataTime_Invalid1() {
-    TabletTime.maxMetadataTime("X1234", "L5678");
-  }
-
-  @Test(expected = IllegalArgumentException.class)
-  public void testMaxMetadataTime_Invalid2() {
-    TabletTime.maxMetadataTime("L1234", "X5678");
+    MetadataTime nullTime = null;
+    assertNull(TabletTime.maxMetadataTime(nullTime, nullTime));
   }
 
   @Test(expected = IllegalArgumentException.class)
   public void testMaxMetadataTime_DifferentTypes1() {
-    TabletTime.maxMetadataTime("L1234", "M5678");
+    TabletTime.maxMetadataTime(l1234, m5678);
   }
 
   @Test(expected = IllegalArgumentException.class)
   public void testMaxMetadataTime_DifferentTypes2() {
-    TabletTime.maxMetadataTime("X1234", "Y5678");
+    TabletTime.maxMetadataTime(m1234, l5678);
   }
 
-  @Test(expected = NumberFormatException.class)
-  public void testMaxMetadataTime_ParseFailure1() {
-    TabletTime.maxMetadataTime("L1234", "LABCD");
-  }
-
-  @Test(expected = NumberFormatException.class)
-  public void testMaxMetadataTime_ParseFailure2() {
-    TabletTime.maxMetadataTime("LABCD", "L5678");
-  }
 }
diff --git a/server/base/src/test/java/org/apache/accumulo/server/util/TServerUtilsTest.java b/server/base/src/test/java/org/apache/accumulo/server/util/TServerUtilsTest.java
index 4f855db..4e40bb9 100644
--- a/server/base/src/test/java/org/apache/accumulo/server/util/TServerUtilsTest.java
+++ b/server/base/src/test/java/org/apache/accumulo/server/util/TServerUtilsTest.java
@@ -62,7 +62,7 @@
     private ConfigurationCopy conf = null;
 
     public TestServerConfigurationFactory(ServerContext context) {
-      super(context, new SiteConfiguration());
+      super(context, SiteConfiguration.auto());
       conf = new ConfigurationCopy(DefaultConfiguration.getInstance());
     }
 
diff --git a/server/gc/pom.xml b/server/gc/pom.xml
index 92170aa..e7cede0 100644
--- a/server/gc/pom.xml
+++ b/server/gc/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.accumulo</groupId>
     <artifactId>accumulo-project</artifactId>
-    <version>2.0.1-SNAPSHOT</version>
+    <version>2.1.0-SNAPSHOT</version>
     <relativePath>../../pom.xml</relativePath>
   </parent>
   <artifactId>accumulo-gc</artifactId>
@@ -28,10 +28,6 @@
   <description>The garbage collecting server for Apache Accumulo to clean up unused files.</description>
   <dependencies>
     <dependency>
-      <groupId>com.beust</groupId>
-      <artifactId>jcommander</artifactId>
-    </dependency>
-    <dependency>
       <groupId>com.google.auto.service</groupId>
       <artifactId>auto-service</artifactId>
       <optional>true</optional>
diff --git a/server/gc/src/main/java/org/apache/accumulo/gc/GarbageCollectWriteAheadLogs.java b/server/gc/src/main/java/org/apache/accumulo/gc/GarbageCollectWriteAheadLogs.java
index e671219..2858bf6 100644
--- a/server/gc/src/main/java/org/apache/accumulo/gc/GarbageCollectWriteAheadLogs.java
+++ b/server/gc/src/main/java/org/apache/accumulo/gc/GarbageCollectWriteAheadLogs.java
@@ -92,7 +92,7 @@
     this.useTrash = useTrash;
     this.liveServers = liveServers;
     this.walMarker = new WalStateManager(context);
-    this.store = () -> Iterators.concat(new ZooTabletStateStore(context).iterator(),
+    this.store = () -> Iterators.concat(new ZooTabletStateStore(context.getAmple()).iterator(),
         new RootTabletStateStore(context).iterator(), new MetaDataStateStore(context).iterator());
   }
 
diff --git a/server/gc/src/main/java/org/apache/accumulo/gc/SimpleGarbageCollector.java b/server/gc/src/main/java/org/apache/accumulo/gc/SimpleGarbageCollector.java
index 4932c73..16d4a78 100644
--- a/server/gc/src/main/java/org/apache/accumulo/gc/SimpleGarbageCollector.java
+++ b/server/gc/src/main/java/org/apache/accumulo/gc/SimpleGarbageCollector.java
@@ -16,10 +16,14 @@
  */
 package org.apache.accumulo.gc;
 
+import static org.apache.accumulo.core.metadata.schema.TabletMetadata.ColumnType.DIR;
+import static org.apache.accumulo.core.metadata.schema.TabletMetadata.ColumnType.FILES;
+import static org.apache.accumulo.core.metadata.schema.TabletMetadata.ColumnType.SCANS;
 import static org.apache.accumulo.fate.util.UtilWaitThread.sleepUninterruptibly;
 
 import java.io.FileNotFoundException;
 import java.io.IOException;
+import java.util.ArrayList;
 import java.util.Collections;
 import java.util.Iterator;
 import java.util.List;
@@ -33,21 +37,13 @@
 
 import org.apache.accumulo.core.Constants;
 import org.apache.accumulo.core.client.AccumuloClient;
-import org.apache.accumulo.core.client.BatchWriter;
-import org.apache.accumulo.core.client.BatchWriterConfig;
 import org.apache.accumulo.core.client.IsolatedScanner;
-import org.apache.accumulo.core.client.MutationsRejectedException;
 import org.apache.accumulo.core.client.Scanner;
 import org.apache.accumulo.core.client.TableNotFoundException;
 import org.apache.accumulo.core.clientImpl.Tables;
 import org.apache.accumulo.core.conf.AccumuloConfiguration;
 import org.apache.accumulo.core.conf.Property;
-import org.apache.accumulo.core.data.Key;
-import org.apache.accumulo.core.data.Mutation;
-import org.apache.accumulo.core.data.PartialKey;
-import org.apache.accumulo.core.data.Range;
 import org.apache.accumulo.core.data.TableId;
-import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.gc.thrift.GCMonitorService.Iface;
 import org.apache.accumulo.core.gc.thrift.GCMonitorService.Processor;
 import org.apache.accumulo.core.gc.thrift.GCStatus;
@@ -55,6 +51,8 @@
 import org.apache.accumulo.core.master.state.tables.TableState;
 import org.apache.accumulo.core.metadata.MetadataTable;
 import org.apache.accumulo.core.metadata.RootTable;
+import org.apache.accumulo.core.metadata.schema.Ample;
+import org.apache.accumulo.core.metadata.schema.Ample.DataLevel;
 import org.apache.accumulo.core.metadata.schema.MetadataSchema;
 import org.apache.accumulo.core.metadata.schema.TabletMetadata;
 import org.apache.accumulo.core.metadata.schema.TabletsMetadata;
@@ -73,6 +71,8 @@
 import org.apache.accumulo.fate.zookeeper.ZooLock;
 import org.apache.accumulo.fate.zookeeper.ZooLock.LockLossReason;
 import org.apache.accumulo.fate.zookeeper.ZooLock.LockWatcher;
+import org.apache.accumulo.gc.metrics.GcCycleMetrics;
+import org.apache.accumulo.gc.metrics.GcMetricsFactory;
 import org.apache.accumulo.gc.replication.CloseWriteAheadLogReferences;
 import org.apache.accumulo.server.AbstractServer;
 import org.apache.accumulo.server.ServerConstants;
@@ -89,7 +89,6 @@
 import org.apache.accumulo.server.util.Halt;
 import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.io.Text;
 import org.apache.htrace.Trace;
 import org.apache.htrace.TraceScope;
 import org.apache.htrace.impl.ProbabilitySampler;
@@ -97,7 +96,6 @@
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-import com.beust.jcommander.Parameter;
 import com.google.common.collect.Iterators;
 import com.google.common.collect.Maps;
 import com.google.protobuf.InvalidProtocolBufferException;
@@ -108,19 +106,6 @@
 // the ZK lock is acquired. The server is only for metrics, there are no concerns about clients
 // using the service before the lock is acquired.
 public class SimpleGarbageCollector extends AbstractServer implements Iface {
-  private static final Text EMPTY_TEXT = new Text();
-
-  /**
-   * Options for the garbage collector.
-   */
-  static class GCOpts extends ServerOpts {
-    @Parameter(names = {"-v", "--verbose"},
-        description = "extra information will get printed to stdout also")
-    boolean verbose = false;
-    @Parameter(names = {"-s", "--safemode"}, description = "safe mode will not delete files")
-    boolean safeMode = false;
-  }
-
   /**
    * A fraction representing how much of the JVM's available memory should be used for gathering
    * candidates.
@@ -129,31 +114,38 @@
 
   private static final Logger log = LoggerFactory.getLogger(SimpleGarbageCollector.class);
 
-  private GCOpts opts;
   private ZooLock lock;
 
   private GCStatus status =
       new GCStatus(new GcCycleStats(), new GcCycleStats(), new GcCycleStats(), new GcCycleStats());
 
+  private GcCycleMetrics gcCycleMetrics = new GcCycleMetrics();
+
   public static void main(String[] args) throws Exception {
-    try (SimpleGarbageCollector gc = new SimpleGarbageCollector(new GCOpts(), args)) {
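+    // GCOpts (-v/--verbose, -s/--safemode) was removed; safemode is now read from the
+    // GC_SAFEMODE property via inSafeMode()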
+    try (SimpleGarbageCollector gc = new SimpleGarbageCollector(new ServerOpts(), args)) {
       gc.runServer();
     }
   }
 
-  SimpleGarbageCollector(GCOpts opts, String[] args) {
+  SimpleGarbageCollector(ServerOpts opts, String[] args) {
     super("gc", opts, args);
-    this.opts = opts;
 
     final AccumuloConfiguration conf = getConfiguration();
 
+    boolean gcMetricsRegistered = new GcMetricsFactory(conf).register(this);
+
+    if (gcMetricsRegistered) {
+      log.info("GC metrics modules registered with metrics system");
+    } else {
+      log.warn("Failed to register GC metrics modules");
+    }
+
     final long gcDelay = conf.getTimeInMillis(Property.GC_CYCLE_DELAY);
     final String useFullCompaction = conf.get(Property.GC_USE_FULL_COMPACTION);
 
     log.info("start delay: {} milliseconds", getStartDelay());
     log.info("time delay: {} milliseconds", gcDelay);
-    log.info("safemode: {}", opts.safeMode);
-    log.info("verbose: {}", opts.verbose);
+    log.info("safemode: {}", inSafeMode());
     log.info("memory threshold: {} of {} bytes", CANDIDATE_MEMORY_PERCENTAGE,
         Runtime.getRuntime().maxMemory());
     log.info("delete threads: {}", getNumDeleteThreads());
@@ -187,34 +179,34 @@
     return getConfiguration().getCount(Property.GC_DELETE_THREADS);
   }
 
+  /**
+   * Checks whether safemode is set; when enabled, files will not be deleted.
+   *
+   * @return true if safemode is enabled
+   */
+  boolean inSafeMode() {
+    return getConfiguration().getBoolean(Property.GC_SAFEMODE);
+  }
+
   private class GCEnv implements GarbageCollectionEnvironment {
 
-    private String tableName;
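+    // Ample.DataLevel (ROOT, METADATA, USER) selects the metadata store to collect from,
+    // replacing the metadata table name used previously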
+    private DataLevel level;
 
-    GCEnv(String tableName) {
-      this.tableName = tableName;
+    GCEnv(Ample.DataLevel level) {
+      this.level = level;
     }
 
     @Override
     public boolean getCandidates(String continuePoint, List<String> result)
         throws TableNotFoundException {
-      // want to ensure GC makes progress... if the 1st N deletes are stable and we keep processing
-      // them,
-      // then will never inspect deletes after N
-      Range range = MetadataSchema.DeletesSection.getRange();
-      if (continuePoint != null && !continuePoint.isEmpty()) {
-        String continueRow = MetadataSchema.DeletesSection.getRowPrefix() + continuePoint;
-        range = new Range(new Key(continueRow).followingKey(PartialKey.ROW), true,
-            range.getEndKey(), range.isEndKeyInclusive());
-      }
 
-      Scanner scanner = getContext().createScanner(tableName, Authorizations.EMPTY);
-      scanner.setRange(range);
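+      // Ample resumes the candidate scan after continuePoint, preserving the progress
+      // guarantee described in the comment removed above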
+      Iterator<String> candidates = getContext().getAmple().getGcCandidates(level, continuePoint);
+
       result.clear();
-      // find candidates for deletion; chop off the prefix
-      for (Entry<Key,Value> entry : scanner) {
-        String cand = entry.getKey().getRow().toString()
-            .substring(MetadataSchema.DeletesSection.getRowPrefix().length());
+
+      while (candidates.hasNext()) {
+        String cand = candidates.next();
+
         result.add(cand);
         if (almostOutOfMemory(Runtime.getRuntime())) {
           log.info("List of delete candidates has exceeded the memory"
@@ -228,9 +220,14 @@
 
     @Override
     public Iterator<String> getBlipIterator() throws TableNotFoundException {
+
+      if (level == DataLevel.ROOT) {
+        return Collections.<String>emptySet().iterator();
+      }
+
       @SuppressWarnings("resource")
       IsolatedScanner scanner =
-          new IsolatedScanner(getContext().createScanner(tableName, Authorizations.EMPTY));
+          new IsolatedScanner(getContext().createScanner(level.metaTable(), Authorizations.EMPTY));
 
       scanner.setRange(MetadataSchema.BlipSection.getRange());
 
@@ -241,8 +238,15 @@
     @Override
     public Stream<Reference> getReferences() {
 
-      Stream<TabletMetadata> tabletStream = TabletsMetadata.builder().scanTable(tableName)
-          .checkConsistency().fetchDir().fetchFiles().fetchScans().build(getContext()).stream();
+      Stream<TabletMetadata> tabletStream;
+
+      if (level == DataLevel.ROOT) {
+        tabletStream =
+            Stream.of(getContext().getAmple().readTablet(RootTable.EXTENT, DIR, FILES, SCANS));
+      } else {
+        tabletStream = TabletsMetadata.builder().scanTable(level.metaTable()).checkConsistency()
+            .fetch(DIR, FILES, SCANS).build(getContext()).stream();
+      }
 
       Stream<Reference> refStream = tabletStream.flatMap(tm -> {
         Stream<Reference> refs = Stream.concat(tm.getFiles().stream(), tm.getScans().stream())
@@ -264,13 +268,13 @@
     @Override
     public void delete(SortedMap<String,String> confirmedDeletes) throws TableNotFoundException {
       final VolumeManager fs = getContext().getVolumeManager();
+      var metadataLocation = level == DataLevel.ROOT
+          ? getContext().getZooKeeperRoot() + " for " + RootTable.NAME : level.metaTable();
 
-      if (opts.safeMode) {
-        if (opts.verbose) {
-          System.out.println("SAFEMODE: There are " + confirmedDeletes.size()
-              + " data file candidates marked for deletion.%n"
-              + "          Examine the log files to identify them.%n");
-        }
+      if (inSafeMode()) {
+        System.out.println("SAFEMODE: There are " + confirmedDeletes.size()
+            + " data file candidates marked for deletion in " + metadataLocation + ".\n"
+            + "          Examine the log files to identify them.\n");
         log.info("SAFEMODE: Listing all data file candidates for deletion");
         for (String s : confirmedDeletes.values()) {
           log.info("SAFEMODE: {}", s);
@@ -279,14 +283,13 @@
         return;
       }
 
-      AccumuloClient c = getContext();
-      BatchWriter writer = c.createBatchWriter(tableName, new BatchWriterConfig());
-
       // when deleting a dir and all files in that dir, only need to delete the dir
       // the dir will sort right before the files... so remove the files in this case
       // to minimize namenode ops
       Iterator<Entry<String,String>> cdIter = confirmedDeletes.entrySet().iterator();
 
+      List<String> processedDeletes = Collections.synchronizedList(new ArrayList<String>());
+
       String lastDir = null;
       while (cdIter.hasNext()) {
         Entry<String,String> entry = cdIter.next();
@@ -298,11 +301,7 @@
         } else if (lastDir != null) {
           if (absPath.startsWith(lastDir)) {
             log.debug("Ignoring {} because {} exist", entry.getValue(), lastDir);
-            try {
-              putMarkerDeleteMutation(entry.getValue(), writer);
-            } catch (MutationsRejectedException e) {
-              throw new RuntimeException(e);
-            }
+            processedDeletes.add(entry.getValue());
             cdIter.remove();
           } else {
             lastDir = null;
@@ -310,8 +309,6 @@
         }
       }
 
-      final BatchWriter finalWriter = writer;
-
       ExecutorService deleteThreadPool =
           Executors.newFixedThreadPool(getNumDeleteThreads(), new NamingThreadFactory("deleting"));
 
@@ -381,8 +378,8 @@
 
             // proceed to clearing out the flags for successful deletes and
             // non-existent files
-            if (removeFlag && finalWriter != null) {
-              putMarkerDeleteMutation(delete, finalWriter);
+            if (removeFlag) {
+              processedDeletes.add(delete);
             }
           } catch (Exception e) {
             log.error("{}", e.getMessage(), e);
@@ -401,13 +398,7 @@
         log.error("{}", e1.getMessage(), e1);
       }
 
-      if (writer != null) {
-        try {
-          writer.close();
-        } catch (MutationsRejectedException e) {
-          log.error("Problem removing entries from the metadata table: ", e);
-        }
-      }
+      getContext().getAmple().deleteGcCandidates(level, processedDeletes);
     }
 
     @Override
@@ -516,8 +507,9 @@
 
             status.current.started = System.currentTimeMillis();
 
-            new GarbageCollectionAlgorithm().collect(new GCEnv(RootTable.NAME));
-            new GarbageCollectionAlgorithm().collect(new GCEnv(MetadataTable.NAME));
+            new GarbageCollectionAlgorithm().collect(new GCEnv(DataLevel.ROOT));
+            new GarbageCollectionAlgorithm().collect(new GCEnv(DataLevel.METADATA));
+            new GarbageCollectionAlgorithm().collect(new GCEnv(DataLevel.USER));
 
             log.info("Number of data file candidates for deletion: {}", status.current.candidates);
             log.info("Number of data file candidates still in use: {}", status.current.inUse);
@@ -526,6 +518,7 @@
 
             status.current.finished = System.currentTimeMillis();
             status.last = status.current;
+            gcCycleMetrics.setLastCollect(status.current);
             status.current = new GcCycleStats();
 
           } catch (Exception e) {
@@ -554,6 +547,7 @@
                 new GarbageCollectWriteAheadLogs(getContext(), fs, liveTServerSet, isUsingTrash());
             log.info("Beginning garbage collection of write-ahead logs");
             walogCollector.collect(status);
+            gcCycleMetrics.setLastWalCollect(status.lastLog);
           } catch (Exception e) {
             log.error("{}", e.getMessage(), e);
           }
@@ -583,6 +577,8 @@
 
           final long actionComplete = System.nanoTime();
 
+          gcCycleMetrics.setPostOpDurationNanos(actionComplete - actionStart);
+
           log.info("gc post action {} completed in {} seconds", action, String.format("%.2f",
               (TimeUnit.NANOSECONDS.toMillis(actionComplete - actionStart) / 1000.0)));
 
@@ -591,6 +587,7 @@
         }
       }
       try {
+        gcCycleMetrics.incrementRunCycleCount();
         long gcDelay = getConfiguration().getTimeInMillis(Property.GC_CYCLE_DELAY);
         log.debug("Sleeping for {} milliseconds", gcDelay);
         Thread.sleep(gcDelay);
@@ -661,7 +658,7 @@
       processor = new Processor<>(rpcProxy);
     }
     int[] port = getConfiguration().getPort(Property.GC_PORT);
-    HostAndPort[] addresses = TServerUtils.getHostAndPorts(this.opts.getAddress(), port);
+    HostAndPort[] addresses = TServerUtils.getHostAndPorts(getHostname(), port);
     long maxMessageSize = getConfiguration().getAsBytes(Property.GENERAL_MAX_MESSAGE_SIZE);
     try {
       ServerAddress server = TServerUtils.startTServer(getMetricsSystem(), getConfiguration(),
@@ -692,13 +689,6 @@
         > CANDIDATE_MEMORY_PERCENTAGE * runtime.maxMemory();
   }
 
-  private static void putMarkerDeleteMutation(final String delete, final BatchWriter writer)
-      throws MutationsRejectedException {
-    Mutation m = new Mutation(MetadataSchema.DeletesSection.getRowPrefix() + delete);
-    m.putDelete(EMPTY_TEXT, EMPTY_TEXT);
-    writer.addMutation(m);
-  }
-
   /**
    * Checks if the given string is a directory.
    *
@@ -723,4 +713,8 @@
   public GCStatus getStatus(TInfo info, TCredentials credentials) {
     return status;
   }
+
+  public GcCycleMetrics getGcCycleMetrics() {
+    return gcCycleMetrics;
+  }
 }
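
The three explicit `collect()` calls above now run one pass per metadata level (root, metadata, user) instead of one per table name. A minimal sketch of the equivalent iteration, assuming `Ample.DataLevel` declares `ROOT`, `METADATA`, and `USER` in collection order:

```java
// Sketch only: condenses the three collect() calls above, assuming
// DataLevel.values() yields ROOT, METADATA, USER in that order.
for (Ample.DataLevel level : Ample.DataLevel.values()) {
  new GarbageCollectionAlgorithm().collect(new GCEnv(level));
}
```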
diff --git a/server/gc/src/main/java/org/apache/accumulo/gc/metrics/GcCycleMetrics.java b/server/gc/src/main/java/org/apache/accumulo/gc/metrics/GcCycleMetrics.java
new file mode 100644
index 0000000..4caf8c3
--- /dev/null
+++ b/server/gc/src/main/java/org/apache/accumulo/gc/metrics/GcCycleMetrics.java
@@ -0,0 +1,121 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.gc.metrics;
+
+import java.util.concurrent.atomic.AtomicLong;
+import java.util.concurrent.atomic.AtomicReference;
+
+import org.apache.accumulo.core.gc.thrift.GcCycleStats;
+
+/**
+ * Wrapper class for GcCycleStats so that the underlying thrift code in GcCycleStats is not
+ * modified. Provides thread-safe access to the gc cycle stats for metrics reporting.
+ */
+public class GcCycleMetrics {
+
+  private AtomicReference<GcCycleStats> lastCollect = new AtomicReference<>(new GcCycleStats());
+  private AtomicReference<GcCycleStats> lastWalCollect = new AtomicReference<>(new GcCycleStats());
+
+  private AtomicLong postOpDurationNanos = new AtomicLong(0);
+  private AtomicLong runCycleCount = new AtomicLong(0);
+
+  public GcCycleMetrics() {}
+
+  /**
+   * Get the last gc run statistics.
+   *
+   * @return the statistics for the last gc run.
+   */
+  GcCycleStats getLastCollect() {
+    return lastCollect.get();
+  }
+
+  /**
+   * Set the last gc run statistics. Makes a defensive deep copy so that the stored statistics are
+   * unaffected if the gc implementation later modifies the values.
+   *
+   * @param lastCollect
+   *          the last gc run statistics.
+   */
+  public void setLastCollect(final GcCycleStats lastCollect) {
+    this.lastCollect.set(new GcCycleStats(lastCollect));
+  }
+
+  /**
+   * The statistics from the last wal collection.
+   *
+   * @return the last wal collection statistics.
+   */
+  GcCycleStats getLastWalCollect() {
+    return lastWalCollect.get();
+  }
+
+  /**
+   * Set the last wal collection statistics.
+   *
+   * @param lastWalCollect
+   *          last wal statistics
+   */
+  public void setLastWalCollect(final GcCycleStats lastWalCollect) {
+    this.lastWalCollect.set(new GcCycleStats(lastWalCollect));
+  }
+
+  /**
+   * Duration of post operation (compact, flush, none) in nanoseconds.
+   *
+   * @return duration in nanoseconds.
+   */
+  long getPostOpDurationNanos() {
+    return postOpDurationNanos.get();
+  }
+
+  /**
+   * Set the duration of post operation (compact, flush, none) in nanoseconds.
+   *
+   * @param postOpDurationNanos
+   *          the duration, in nanoseconds.
+   */
+  public void setPostOpDurationNanos(long postOpDurationNanos) {
+    this.postOpDurationNanos.set(postOpDurationNanos);
+  }
+
+  /**
+   * The number of gc cycles that have completed since initialization at process start.
+   *
+   * @return current run cycle count.
+   */
+  long getRunCycleCount() {
+    return runCycleCount.get();
+  }
+
+  /**
+   * Increment the gc run cycle count by one.
+   */
+  public void incrementRunCycleCount() {
+    this.runCycleCount.incrementAndGet();
+  }
+
+  @Override
+  public String toString() {
+    final StringBuilder sb = new StringBuilder("GcCycleMetrics{");
+    sb.append("lastCollect=").append(lastCollect.get());
+    sb.append(", lastWalCollect=").append(lastWalCollect.get());
+    sb.append(", postOpDuration=").append(postOpDurationNanos.get());
+    sb.append('}');
+    return sb.toString();
+  }
+}
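
Because each setter stores a defensive copy behind an `AtomicReference`, a reporter can snapshot the last cycle without synchronizing against the collector thread. A minimal sketch, assuming it lives in `org.apache.accumulo.gc.metrics` (the getters are package-private); `printLastCycle` is a hypothetical helper:

```java
// Hypothetical helper: the AtomicReference plus defensive copy in
// GcCycleMetrics means this read needs no external locking.
static void printLastCycle(GcCycleMetrics metrics) {
  GcCycleStats last = metrics.getLastCollect();
  System.out.printf("gc cycle: started=%d finished=%d deleted=%d errors=%d%n",
      last.getStarted(), last.getFinished(), last.getDeleted(), last.getErrors());
}
```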
diff --git a/server/gc/src/main/java/org/apache/accumulo/gc/metrics/GcMetrics.java b/server/gc/src/main/java/org/apache/accumulo/gc/metrics/GcMetrics.java
new file mode 100644
index 0000000..b1b9020
--- /dev/null
+++ b/server/gc/src/main/java/org/apache/accumulo/gc/metrics/GcMetrics.java
@@ -0,0 +1,128 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.gc.metrics;
+
+import java.util.concurrent.TimeUnit;
+
+import org.apache.accumulo.core.gc.thrift.GcCycleStats;
+import org.apache.accumulo.gc.SimpleGarbageCollector;
+import org.apache.accumulo.server.metrics.Metrics;
+import org.apache.hadoop.metrics2.lib.MetricsRegistry;
+import org.apache.hadoop.metrics2.lib.MutableGaugeLong;
+
+/**
+ * Expected to be instantiated with GcMetricsFactory. This will configure both jmx and the hadoop
+ * metrics systems. Under the hadoop metrics2 naming convention, the records will appear as
+ * CONTEXT.RECORD (accgc.AccGcCycleMetrics). The context value is also used in the configuration
+ * file for sink configuration.
+ */
+public class GcMetrics extends Metrics {
+
+  // use a common prefix, different than just gc, to prevent confusion with jvm gc metrics.
+  public static final String GC_METRIC_PREFIX = "AccGc";
+
+  private static final String jmxName = "GarbageCollector";
+  private static final String description = "Accumulo garbage collection metrics";
+  private static final String record = "AccGcCycleMetrics";
+
+  private final SimpleGarbageCollector gc;
+
+  // metrics gauges / counters.
+  private final MutableGaugeLong gcStarted;
+  private final MutableGaugeLong gcFinished;
+  private final MutableGaugeLong gcCandidates;
+  private final MutableGaugeLong gcInUse;
+  private final MutableGaugeLong gcDeleted;
+  private final MutableGaugeLong gcErrors;
+
+  private final MutableGaugeLong walStarted;
+  private final MutableGaugeLong walFinished;
+  private final MutableGaugeLong walCandidates;
+  private final MutableGaugeLong walInUse;
+  private final MutableGaugeLong walDeleted;
+  private final MutableGaugeLong walErrors;
+
+  private final MutableGaugeLong postOpDuration;
+  private final MutableGaugeLong runCycleCount;
+
+  GcMetrics(final SimpleGarbageCollector gc) {
+    super(jmxName + ",sub=" + gc.getClass().getSimpleName(), description, "accgc", record);
+    this.gc = gc;
+
+    MetricsRegistry registry = super.getRegistry();
+
+    gcStarted = registry.newGauge(GC_METRIC_PREFIX + "Started",
+        "Timestamp GC file collection cycle started", 0L);
+    gcFinished = registry.newGauge(GC_METRIC_PREFIX + "Finished",
+        "Timestamp GC file collect cycle finished", 0L);
+    gcCandidates = registry.newGauge(GC_METRIC_PREFIX + "Candidates",
+        "Number of files that are candidates for deletion", 0L);
+    gcInUse =
+        registry.newGauge(GC_METRIC_PREFIX + "InUse", "Number of candidate files still in use", 0L);
+    gcDeleted =
+        registry.newGauge(GC_METRIC_PREFIX + "Deleted", "Number of candidate files deleted", 0L);
+    gcErrors =
+        registry.newGauge(GC_METRIC_PREFIX + "Errors", "Number of candidate deletion errors", 0L);
+
+    walStarted = registry.newGauge(GC_METRIC_PREFIX + "WalStarted",
+        "Timestamp GC WAL collection started", 0L);
+    walFinished = registry.newGauge(GC_METRIC_PREFIX + "WalFinished",
+        "Timestamp GC WAL collection finished", 0L);
+    walCandidates = registry.newGauge(GC_METRIC_PREFIX + "WalCandidates",
+        "Number of files that are candidates for deletion", 0L);
+    walInUse = registry.newGauge(GC_METRIC_PREFIX + "WalInUse",
+        "Number of wal file candidates that are still in use", 0L);
+    walDeleted = registry.newGauge(GC_METRIC_PREFIX + "WalDeleted",
+        "Number of candidate wal files deleted", 0L);
+    walErrors = registry.newGauge(GC_METRIC_PREFIX + "WalErrors",
+        "Number candidate wal file deletion errors", 0L);
+
+    postOpDuration = registry.newGauge(GC_METRIC_PREFIX + "PostOpDuration",
+        "GC metadata table post operation duration in milliseconds", 0L);
+
+    runCycleCount = registry.newGauge(GC_METRIC_PREFIX + "RunCycleCount",
+        "gauge incremented each gc cycle run, rest on process start", 0L);
+
+  }
+
+  @Override
+  protected void prepareMetrics() {
+
+    GcCycleMetrics values = gc.getGcCycleMetrics();
+
+    GcCycleStats lastFileCollect = values.getLastCollect();
+
+    gcStarted.set(lastFileCollect.getStarted());
+    gcFinished.set(lastFileCollect.getFinished());
+    gcCandidates.set(lastFileCollect.getCandidates());
+    gcInUse.set(lastFileCollect.getInUse());
+    gcDeleted.set(lastFileCollect.getDeleted());
+    gcErrors.set(lastFileCollect.getErrors());
+
+    GcCycleStats lastWalCollect = values.getLastWalCollect();
+
+    walStarted.set(lastWalCollect.getStarted());
+    walFinished.set(lastWalCollect.getFinished());
+    walCandidates.set(lastWalCollect.getCandidates());
+    walInUse.set(lastWalCollect.getInUse());
+    walDeleted.set(lastWalCollect.getDeleted());
+    walErrors.set(lastWalCollect.getErrors());
+
+    postOpDuration.set(TimeUnit.NANOSECONDS.toMillis(values.getPostOpDurationNanos()));
+    runCycleCount.set(values.getRunCycleCount());
+  }
+}
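
The `accgc` context named in the constructor above is what a hadoop metrics2 sink can filter on. An illustrative sink stanza, assuming the standard metrics2 properties layout; `<prefix>` and the instance name `file-gc` are placeholders that depend on how the daemon initializes metrics2:

```properties
# Illustrative only: route records in the accgc context to a file sink.
<prefix>.sink.file-gc.class=org.apache.hadoop.metrics2.sink.FileSink
<prefix>.sink.file-gc.context=accgc
<prefix>.sink.file-gc.filename=accgc.metrics
```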
diff --git a/server/gc/src/main/java/org/apache/accumulo/gc/metrics/GcMetricsFactory.java b/server/gc/src/main/java/org/apache/accumulo/gc/metrics/GcMetricsFactory.java
new file mode 100644
index 0000000..15a8ad3
--- /dev/null
+++ b/server/gc/src/main/java/org/apache/accumulo/gc/metrics/GcMetricsFactory.java
@@ -0,0 +1,60 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.gc.metrics;
+
+import static java.util.Objects.requireNonNull;
+
+import org.apache.accumulo.core.conf.AccumuloConfiguration;
+import org.apache.accumulo.core.conf.Property;
+import org.apache.accumulo.gc.SimpleGarbageCollector;
+import org.apache.hadoop.metrics2.MetricsSystem;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+public class GcMetricsFactory {
+
+  private final static Logger log = LoggerFactory.getLogger(GcMetricsFactory.class);
+
+  private boolean enableMetrics;
+
+  public GcMetricsFactory(AccumuloConfiguration conf) {
+    requireNonNull(conf, "AccumuloConfiguration must not be null");
+    enableMetrics = conf.getBoolean(Property.GC_METRICS_ENABLED);
+  }
+
+  public boolean register(SimpleGarbageCollector gc) {
+
+    if (!enableMetrics) {
+      log.info("Accumulo gc metrics are disabled.  To enable, set {} in configuration",
+          Property.GC_METRICS_ENABLED);
+      return false;
+    }
+
+    try {
+
+      MetricsSystem metricsSystem = gc.getMetricsSystem();
+
+      new GcMetrics(gc).register(metricsSystem);
+
+      return true;
+
+    } catch (Exception ex) {
+      log.error("Failed to register accumulo gc metrics", ex);
+      return false;
+    }
+  }
+}
diff --git a/server/gc/src/test/java/org/apache/accumulo/gc/SimpleGarbageCollectorTest.java b/server/gc/src/test/java/org/apache/accumulo/gc/SimpleGarbageCollectorTest.java
index 8bf950e..a9f32b6 100644
--- a/server/gc/src/test/java/org/apache/accumulo/gc/SimpleGarbageCollectorTest.java
+++ b/server/gc/src/test/java/org/apache/accumulo/gc/SimpleGarbageCollectorTest.java
@@ -49,7 +49,7 @@
   private Credentials credentials;
   private SimpleGarbageCollector gc;
   private ConfigurationCopy systemConfig;
-  private static SiteConfiguration siteConfig = new SiteConfiguration();
+  private static SiteConfiguration siteConfig = SiteConfiguration.auto();
 
   @Before
   public void setUp() {
@@ -94,6 +94,7 @@
     assertTrue(gc.isUsingTrash());
     assertEquals(1000L, gc.getStartDelay());
     assertEquals(2, gc.getNumDeleteThreads());
+    assertFalse(gc.inSafeMode()); // false by default
   }
 
   @Test
diff --git a/server/master/pom.xml b/server/master/pom.xml
index e1942c2..734f6bb 100644
--- a/server/master/pom.xml
+++ b/server/master/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.accumulo</groupId>
     <artifactId>accumulo-project</artifactId>
-    <version>2.0.1-SNAPSHOT</version>
+    <version>2.1.0-SNAPSHOT</version>
     <relativePath>../../pom.xml</relativePath>
   </parent>
   <artifactId>accumulo-master</artifactId>
diff --git a/server/master/src/main/java/org/apache/accumulo/master/Master.java b/server/master/src/main/java/org/apache/accumulo/master/Master.java
index d91bb09..9fb03a2 100644
--- a/server/master/src/main/java/org/apache/accumulo/master/Master.java
+++ b/server/master/src/main/java/org/apache/accumulo/master/Master.java
@@ -114,7 +114,6 @@
 import org.apache.accumulo.server.master.state.TabletMigration;
 import org.apache.accumulo.server.master.state.TabletServerState;
 import org.apache.accumulo.server.master.state.TabletState;
-import org.apache.accumulo.server.master.state.ZooStore;
 import org.apache.accumulo.server.master.state.ZooTabletStateStore;
 import org.apache.accumulo.server.replication.ZooKeeperInitialization;
 import org.apache.accumulo.server.rpc.HighlyAvailableServiceWrapper;
@@ -1072,14 +1071,14 @@
           }
         });
 
-    watchers.add(new TabletGroupWatcher(this, new ZooTabletStateStore(new ZooStore(context)),
-        watchers.get(1)) {
-      @Override
-      boolean canSuspendTablets() {
-        // Never allow root tablet to enter suspended state.
-        return false;
-      }
-    });
+    watchers.add(
+        new TabletGroupWatcher(this, new ZooTabletStateStore(context.getAmple()), watchers.get(1)) {
+          @Override
+          boolean canSuspendTablets() {
+            // Never allow root tablet to enter suspended state.
+            return false;
+          }
+        });
     for (TabletGroupWatcher watcher : watchers) {
       watcher.start();
     }
diff --git a/server/master/src/main/java/org/apache/accumulo/master/MasterClientServiceHandler.java b/server/master/src/main/java/org/apache/accumulo/master/MasterClientServiceHandler.java
index 49b1303..20c47b7 100644
--- a/server/master/src/main/java/org/apache/accumulo/master/MasterClientServiceHandler.java
+++ b/server/master/src/main/java/org/apache/accumulo/master/MasterClientServiceHandler.java
@@ -16,6 +16,10 @@
  */
 package org.apache.accumulo.master;
 
+import static org.apache.accumulo.core.metadata.schema.TabletMetadata.ColumnType.FLUSH_ID;
+import static org.apache.accumulo.core.metadata.schema.TabletMetadata.ColumnType.LOCATION;
+import static org.apache.accumulo.core.metadata.schema.TabletMetadata.ColumnType.LOGS;
+import static org.apache.accumulo.core.metadata.schema.TabletMetadata.ColumnType.PREV_ROW;
 import static org.apache.accumulo.fate.util.UtilWaitThread.sleepUninterruptibly;
 
 import java.nio.ByteBuffer;
@@ -180,8 +184,8 @@
       serversToFlush.clear();
 
       try (TabletsMetadata tablets =
-          TabletsMetadata.builder().forTable(tableId).overlapping(startRow, endRow).fetchFlushId()
-              .fetchLocation().fetchLogs().fetchPrev().build(master.getContext())) {
+          TabletsMetadata.builder().forTable(tableId).overlapping(startRow, endRow)
+              .fetch(FLUSH_ID, LOCATION, LOGS, PREV_ROW).build(master.getContext())) {
         int tabletsToWaitFor = 0;
         int tabletCount = 0;
 
diff --git a/server/master/src/main/java/org/apache/accumulo/master/MasterTime.java b/server/master/src/main/java/org/apache/accumulo/master/MasterTime.java
index 73348e0..fcd910b 100644
--- a/server/master/src/main/java/org/apache/accumulo/master/MasterTime.java
+++ b/server/master/src/main/java/org/apache/accumulo/master/MasterTime.java
@@ -24,6 +24,7 @@
 import java.io.IOException;
 import java.util.Timer;
 import java.util.TimerTask;
+import java.util.concurrent.atomic.AtomicLong;
 
 import org.apache.accumulo.core.Constants;
 import org.apache.accumulo.fate.zookeeper.ZooReaderWriter;
@@ -46,7 +47,7 @@
    * Difference between time stored in ZooKeeper and System.nanoTime() when we last read from
    * ZooKeeper.
    */
-  private long skewAmount;
+  private final AtomicLong skewAmount;
 
   public MasterTime(Master master) throws IOException {
     this.zPath = master.getZooKeeperRoot() + Constants.ZMASTER_TICK;
@@ -55,7 +56,8 @@
 
     try {
       zk.putPersistentData(zPath, "0".getBytes(UTF_8), NodeExistsPolicy.SKIP);
-      skewAmount = Long.parseLong(new String(zk.getData(zPath, null), UTF_8)) - System.nanoTime();
+      skewAmount = new AtomicLong(
+          Long.parseLong(new String(zk.getData(zPath, null), UTF_8)) - System.nanoTime());
     } catch (Exception ex) {
       throw new IOException("Error updating master time", ex);
     }
@@ -69,8 +71,8 @@
    *
    * @return Approximate total duration this cluster has had a Master, in milliseconds.
    */
-  public synchronized long getTime() {
-    return MILLISECONDS.convert(System.nanoTime() + skewAmount, NANOSECONDS);
+  public long getTime() {
+    return MILLISECONDS.convert(System.nanoTime() + skewAmount.get(), NANOSECONDS);
   }
 
   /** Shut down the time keeping. */
@@ -88,9 +90,7 @@
       case STOP:
         try {
           long zkTime = Long.parseLong(new String(zk.getData(zPath, null), UTF_8));
-          synchronized (this) {
-            skewAmount = zkTime - System.nanoTime();
-          }
+          skewAmount.set(zkTime - System.nanoTime());
         } catch (Exception ex) {
           if (log.isDebugEnabled()) {
             log.debug("Failed to retrieve master tick time", ex);
@@ -104,7 +104,8 @@
       case UNLOAD_METADATA_TABLETS:
       case UNLOAD_ROOT_TABLET:
         try {
-          zk.putPersistentData(zPath, Long.toString(System.nanoTime() + skewAmount).getBytes(UTF_8),
+          zk.putPersistentData(zPath,
+              Long.toString(System.nanoTime() + skewAmount.get()).getBytes(UTF_8),
               NodeExistsPolicy.OVERWRITE);
         } catch (Exception ex) {
           if (log.isDebugEnabled()) {
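
The `MasterTime` change above swaps a `synchronized` long for an `AtomicLong`, so `getTime()` readers never contend with the ZooKeeper watcher that refreshes the skew. A generic sketch of the pattern with a hypothetical `SkewClock` class:

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of the lock-free offset pattern applied above: readers combine a
// monotonic clock with a published offset, and only the updater calls set().
class SkewClock {
  private final AtomicLong skew = new AtomicLong();

  long now() {                            // hot read path, never blocks
    return System.nanoTime() + skew.get();
  }

  void refresh(long authoritativeNanos) { // infrequent updater path
    skew.set(authoritativeNanos - System.nanoTime());
  }
}
```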
diff --git a/server/master/src/main/java/org/apache/accumulo/master/TabletGroupWatcher.java b/server/master/src/main/java/org/apache/accumulo/master/TabletGroupWatcher.java
index d32832d..217573a 100644
--- a/server/master/src/main/java/org/apache/accumulo/master/TabletGroupWatcher.java
+++ b/server/master/src/main/java/org/apache/accumulo/master/TabletGroupWatcher.java
@@ -61,6 +61,7 @@
 import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.CurrentLocationColumnFamily;
 import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.DataFileColumnFamily;
 import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.FutureLocationColumnFamily;
+import org.apache.accumulo.core.metadata.schema.MetadataTime;
 import org.apache.accumulo.core.security.Authorizations;
 import org.apache.accumulo.core.tabletserver.thrift.NotServingTabletException;
 import org.apache.accumulo.core.util.Daemon;
@@ -87,6 +88,7 @@
 import org.apache.accumulo.server.master.state.TabletLocationState.BadLocationStateException;
 import org.apache.accumulo.server.master.state.TabletState;
 import org.apache.accumulo.server.master.state.TabletStateStore;
+import org.apache.accumulo.server.metadata.ServerAmpleImpl;
 import org.apache.accumulo.server.tablets.TabletTime;
 import org.apache.accumulo.server.util.MetadataTableUtil;
 import org.apache.hadoop.fs.Path;
@@ -586,7 +588,7 @@
     KeyExtent extent = info.getExtent();
     String targetSystemTable = extent.isMeta() ? RootTable.NAME : MetadataTable.NAME;
     Master.log.debug("Deleting tablets for {}", extent);
-    char timeType = '\0';
+    MetadataTime metadataTime = null;
     KeyExtent followingTablet = null;
     if (extent.getEndRow() != null) {
       Key nextExtent = new Key(extent.getEndRow()).followingKey(PartialKey.ROW);
@@ -619,7 +621,7 @@
             datafiles.clear();
           }
         } else if (TabletsSection.ServerColumnFamily.TIME_COLUMN.hasColumns(key)) {
-          timeType = entry.getValue().toString().charAt(0);
+          metadataTime = MetadataTime.parse(entry.getValue().toString());
         } else if (key.compareColumnFamily(TabletsSection.CurrentLocationColumnFamily.NAME) == 0) {
           throw new IllegalStateException(
               "Tablet " + key.getRow() + " is assigned during a merge!");
@@ -673,7 +675,7 @@
             + Path.SEPARATOR + extent.getTableId() + Constants.DEFAULT_TABLET_LOCATION;
         MetadataTableUtil.addTablet(
             new KeyExtent(extent.getTableId(), null, extent.getPrevEndRow()), tdir,
-            master.getContext(), timeType, this.master.masterLock);
+            master.getContext(), metadataTime.getType(), this.master.masterLock);
       }
     } catch (RuntimeException | TableNotFoundException ex) {
       throw new AccumuloException(ex);
@@ -710,7 +712,7 @@
       TabletsSection.ServerColumnFamily.DIRECTORY_COLUMN.fetch(scanner);
       scanner.fetchColumnFamily(DataFileColumnFamily.NAME);
       Mutation m = new Mutation(stopRow);
-      String maxLogicalTime = null;
+      MetadataTime maxLogicalTime = null;
       for (Entry<Key,Value> entry : scanner) {
         Key key = entry.getKey();
         Value value = entry.getValue();
@@ -722,9 +724,10 @@
           Master.log.debug("prevRow entry for lowest tablet is {}", value);
           firstPrevRowValue = new Value(value);
         } else if (TabletsSection.ServerColumnFamily.TIME_COLUMN.hasColumns(key)) {
-          maxLogicalTime = TabletTime.maxMetadataTime(maxLogicalTime, value.toString());
+          maxLogicalTime =
+              TabletTime.maxMetadataTime(maxLogicalTime, MetadataTime.parse(value.toString()));
         } else if (TabletsSection.ServerColumnFamily.DIRECTORY_COLUMN.hasColumns(key)) {
-          bw.addMutation(MetadataTableUtil.createDeleteMutation(master.getContext(),
+          bw.addMutation(ServerAmpleImpl.createDeleteMutation(master.getContext(),
               range.getTableId(), entry.getValue().toString()));
         }
       }
@@ -736,12 +739,13 @@
       TabletsSection.ServerColumnFamily.TIME_COLUMN.fetch(scanner);
       for (Entry<Key,Value> entry : scanner) {
         if (TabletsSection.ServerColumnFamily.TIME_COLUMN.hasColumns(entry.getKey())) {
-          maxLogicalTime = TabletTime.maxMetadataTime(maxLogicalTime, entry.getValue().toString());
+          maxLogicalTime = TabletTime.maxMetadataTime(maxLogicalTime,
+              MetadataTime.parse(entry.getValue().toString()));
         }
       }
 
       if (maxLogicalTime != null)
-        TabletsSection.ServerColumnFamily.TIME_COLUMN.put(m, new Value(maxLogicalTime.getBytes()));
+        TabletsSection.ServerColumnFamily.TIME_COLUMN.put(m, new Value(maxLogicalTime.encode()));
 
       if (!m.getUpdates().isEmpty()) {
         bw.addMutation(m);
diff --git a/server/master/src/main/java/org/apache/accumulo/master/replication/WorkDriver.java b/server/master/src/main/java/org/apache/accumulo/master/replication/WorkDriver.java
index c17f8ca..e687ea2 100644
--- a/server/master/src/main/java/org/apache/accumulo/master/replication/WorkDriver.java
+++ b/server/master/src/main/java/org/apache/accumulo/master/replication/WorkDriver.java
@@ -59,8 +59,8 @@
       try {
         Class<?> clz = Class.forName(workAssignerClass);
         Class<? extends WorkAssigner> workAssignerClz = clz.asSubclass(WorkAssigner.class);
-        this.assigner = workAssignerClz.newInstance();
-      } catch (ClassNotFoundException | InstantiationException | IllegalAccessException e) {
+        this.assigner = workAssignerClz.getDeclaredConstructor().newInstance();
+      } catch (ReflectiveOperationException e) {
         log.error("Could not instantiate configured work assigner {}", workAssignerClass, e);
         throw new RuntimeException(e);
       }
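
`Class.newInstance()` is deprecated because it propagates any checked exception the constructor throws; `getDeclaredConstructor().newInstance()` wraps constructor failures in `InvocationTargetException`, and every reflective failure mode shares the `ReflectiveOperationException` supertype caught above. A minimal sketch with a hypothetical helper name:

```java
// Hypothetical helper for the non-deprecated instantiation path; a missing
// class, missing no-arg constructor, or throwing constructor all surface
// as ReflectiveOperationException.
static WorkAssigner newAssigner(String className) throws ReflectiveOperationException {
  Class<? extends WorkAssigner> clz = Class.forName(className).asSubclass(WorkAssigner.class);
  return clz.getDeclaredConstructor().newInstance();
}
```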
diff --git a/server/master/src/main/java/org/apache/accumulo/master/state/SetGoalState.java b/server/master/src/main/java/org/apache/accumulo/master/state/SetGoalState.java
index ca764fa..ecbf85f 100644
--- a/server/master/src/main/java/org/apache/accumulo/master/state/SetGoalState.java
+++ b/server/master/src/main/java/org/apache/accumulo/master/state/SetGoalState.java
@@ -38,7 +38,7 @@
       System.exit(-1);
     }
 
-    ServerContext context = new ServerContext(new SiteConfiguration());
+    var context = new ServerContext(SiteConfiguration.auto());
     SecurityUtil.serverLogin(context.getConfiguration());
     ServerUtil.waitForZookeeperAndHdfs(context);
     context.getZooReaderWriter().putPersistentData(
diff --git a/server/master/src/main/java/org/apache/accumulo/master/tableOps/TableInfo.java b/server/master/src/main/java/org/apache/accumulo/master/tableOps/TableInfo.java
index c108dd4..cf11383 100644
--- a/server/master/src/main/java/org/apache/accumulo/master/tableOps/TableInfo.java
+++ b/server/master/src/main/java/org/apache/accumulo/master/tableOps/TableInfo.java
@@ -20,6 +20,7 @@
 import java.util.Map;
 
 import org.apache.accumulo.core.client.admin.InitialTableState;
+import org.apache.accumulo.core.client.admin.TimeType;
 import org.apache.accumulo.core.data.NamespaceId;
 import org.apache.accumulo.core.data.TableId;
 
@@ -31,7 +32,7 @@
   private TableId tableId;
   private NamespaceId namespaceId;
 
-  private char timeType;
+  private TimeType timeType;
   private String user;
 
   // Record requested initial state at creation
@@ -69,11 +70,11 @@
     this.namespaceId = namespaceId;
   }
 
-  public char getTimeType() {
+  public TimeType getTimeType() {
     return timeType;
   }
 
-  public void setTimeType(char timeType) {
+  public void setTimeType(TimeType timeType) {
     this.timeType = timeType;
   }
 
diff --git a/server/master/src/main/java/org/apache/accumulo/master/tableOps/bulkVer1/CopyFailed.java b/server/master/src/main/java/org/apache/accumulo/master/tableOps/bulkVer1/CopyFailed.java
index 39f510c..fe36533 100644
--- a/server/master/src/main/java/org/apache/accumulo/master/tableOps/bulkVer1/CopyFailed.java
+++ b/server/master/src/main/java/org/apache/accumulo/master/tableOps/bulkVer1/CopyFailed.java
@@ -36,6 +36,7 @@
 import org.apache.accumulo.core.master.thrift.BulkImportState;
 import org.apache.accumulo.core.metadata.MetadataTable;
 import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection;
+import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.BulkFileColumnFamily;
 import org.apache.accumulo.core.security.Authorizations;
 import org.apache.accumulo.fate.FateTxId;
 import org.apache.accumulo.fate.Repo;
@@ -45,7 +46,6 @@
 import org.apache.accumulo.server.fs.VolumeManager;
 import org.apache.accumulo.server.master.LiveTServerSet.TServerConnection;
 import org.apache.accumulo.server.master.state.TServerInstance;
-import org.apache.accumulo.server.util.MetadataTableUtil;
 import org.apache.accumulo.server.zookeeper.DistributedWorkQueue;
 import org.apache.hadoop.fs.Path;
 import org.apache.thrift.TException;
@@ -123,7 +123,7 @@
       mscanner.fetchColumnFamily(TabletsSection.BulkFileColumnFamily.NAME);
 
       for (Entry<Key,Value> entry : mscanner) {
-        if (MetadataTableUtil.getBulkLoadTid(entry.getValue()) == tid) {
+        if (BulkFileColumnFamily.getBulkLoadTid(entry.getValue()) == tid) {
           FileRef loadedFile = new FileRef(fs, entry.getKey());
           String absPath = failures.remove(loadedFile);
           if (absPath != null) {
diff --git a/server/master/src/main/java/org/apache/accumulo/master/tableOps/bulkVer2/LoadFiles.java b/server/master/src/main/java/org/apache/accumulo/master/tableOps/bulkVer2/LoadFiles.java
index 98d59ed..1798458 100644
--- a/server/master/src/main/java/org/apache/accumulo/master/tableOps/bulkVer2/LoadFiles.java
+++ b/server/master/src/main/java/org/apache/accumulo/master/tableOps/bulkVer2/LoadFiles.java
@@ -17,6 +17,9 @@
 package org.apache.accumulo.master.tableOps.bulkVer2;
 
 import static java.nio.charset.StandardCharsets.UTF_8;
+import static org.apache.accumulo.core.metadata.schema.TabletMetadata.ColumnType.LOADED;
+import static org.apache.accumulo.core.metadata.schema.TabletMetadata.ColumnType.LOCATION;
+import static org.apache.accumulo.core.metadata.schema.TabletMetadata.ColumnType.PREV_ROW;
 
 import java.util.ArrayList;
 import java.util.Comparator;
@@ -208,7 +211,7 @@
           server = location.getHostAndPort();
         }
 
-        Set<String> loadedFiles = tablet.getLoaded();
+        Set<String> loadedFiles = tablet.getLoaded().keySet();
 
         Map<String,MapFileInfo> thriftImports = new HashMap<>();
 
@@ -315,7 +318,7 @@
 
     Iterator<TabletMetadata> tabletIter =
         TabletsMetadata.builder().forTable(tableId).overlapping(startRow, null).checkConsistency()
-            .fetchPrev().fetchLocation().fetchLoaded().build(master.getContext()).iterator();
+            .fetch(PREV_ROW, LOCATION, LOADED).build(master.getContext()).iterator();
 
     Loader loader;
     if (bulkInfo.tableState == TableState.ONLINE) {
diff --git a/server/master/src/main/java/org/apache/accumulo/master/tableOps/bulkVer2/PrepBulkImport.java b/server/master/src/main/java/org/apache/accumulo/master/tableOps/bulkVer2/PrepBulkImport.java
index 2199e99..58dd235 100644
--- a/server/master/src/main/java/org/apache/accumulo/master/tableOps/bulkVer2/PrepBulkImport.java
+++ b/server/master/src/main/java/org/apache/accumulo/master/tableOps/bulkVer2/PrepBulkImport.java
@@ -16,6 +16,7 @@
  */
 package org.apache.accumulo.master.tableOps.bulkVer2;
 
+import static org.apache.accumulo.core.metadata.schema.TabletMetadata.ColumnType.PREV_ROW;
 import static org.apache.accumulo.fate.util.UtilWaitThread.sleepUninterruptibly;
 
 import java.io.IOException;
@@ -160,7 +161,7 @@
         BulkSerialize.readLoadMapping(bulkDir.toString(), bulkInfo.tableId, p -> fs.open(p))) {
 
       TabletIterFactory tabletIterFactory = startRow -> TabletsMetadata.builder()
-          .forTable(bulkInfo.tableId).overlapping(startRow, null).checkConsistency().fetchPrev()
+          .forTable(bulkInfo.tableId).overlapping(startRow, null).checkConsistency().fetch(PREV_ROW)
           .build(master.getContext()).stream().map(TabletMetadata::getExtent).iterator();
 
       checkForMerge(bulkInfo.tableId.canonical(), Iterators.transform(lmi, entry -> entry.getKey()),
diff --git a/server/master/src/main/java/org/apache/accumulo/master/tableOps/compact/CompactionDriver.java b/server/master/src/main/java/org/apache/accumulo/master/tableOps/compact/CompactionDriver.java
index 2f740db..d85e649 100644
--- a/server/master/src/main/java/org/apache/accumulo/master/tableOps/compact/CompactionDriver.java
+++ b/server/master/src/main/java/org/apache/accumulo/master/tableOps/compact/CompactionDriver.java
@@ -16,6 +16,10 @@
  */
 package org.apache.accumulo.master.tableOps.compact;
 
+import static org.apache.accumulo.core.metadata.schema.TabletMetadata.ColumnType.COMPACT_ID;
+import static org.apache.accumulo.core.metadata.schema.TabletMetadata.ColumnType.LOCATION;
+import static org.apache.accumulo.core.metadata.schema.TabletMetadata.ColumnType.PREV_ROW;
+
 import org.apache.accumulo.core.Constants;
 import org.apache.accumulo.core.clientImpl.AcceptableThriftTableOperationException;
 import org.apache.accumulo.core.clientImpl.Tables;
@@ -83,8 +87,8 @@
     int tabletCount = 0;
 
     TabletsMetadata tablets =
-        TabletsMetadata.builder().forTable(tableId).overlapping(startRow, endRow).fetchLocation()
-            .fetchPrev().fetchCompactId().build(master.getContext());
+        TabletsMetadata.builder().forTable(tableId).overlapping(startRow, endRow)
+            .fetch(LOCATION, PREV_ROW, COMPACT_ID).build(master.getContext());
 
     for (TabletMetadata tablet : tablets) {
       if (tablet.getCompactId().orElse(-1) < compactId) {
diff --git a/server/master/src/main/java/org/apache/accumulo/master/tableOps/create/CreateTable.java b/server/master/src/main/java/org/apache/accumulo/master/tableOps/create/CreateTable.java
index 5d2327f..a3e9ab0 100644
--- a/server/master/src/main/java/org/apache/accumulo/master/tableOps/create/CreateTable.java
+++ b/server/master/src/main/java/org/apache/accumulo/master/tableOps/create/CreateTable.java
@@ -28,7 +28,6 @@
 import org.apache.accumulo.master.tableOps.MasterRepo;
 import org.apache.accumulo.master.tableOps.TableInfo;
 import org.apache.accumulo.master.tableOps.Utils;
-import org.apache.accumulo.server.tablets.TabletTime;
 
 public class CreateTable extends MasterRepo {
   private static final long serialVersionUID = 1L;
@@ -40,7 +39,7 @@
       NamespaceId namespaceId) {
     tableInfo = new TableInfo();
     tableInfo.setTableName(tableName);
-    tableInfo.setTimeType(TabletTime.getTimeID(timeType));
+    tableInfo.setTimeType(timeType);
     tableInfo.setUser(user);
     tableInfo.props = props;
     tableInfo.setNamespaceId(namespaceId);
diff --git a/server/master/src/main/java/org/apache/accumulo/master/tableOps/create/PopulateMetadata.java b/server/master/src/main/java/org/apache/accumulo/master/tableOps/create/PopulateMetadata.java
index 70ce1ed..a5ff669 100644
--- a/server/master/src/main/java/org/apache/accumulo/master/tableOps/create/PopulateMetadata.java
+++ b/server/master/src/main/java/org/apache/accumulo/master/tableOps/create/PopulateMetadata.java
@@ -24,11 +24,13 @@
 
 import org.apache.accumulo.core.client.BatchWriter;
 import org.apache.accumulo.core.client.MutationsRejectedException;
+import org.apache.accumulo.core.client.admin.TimeType;
 import org.apache.accumulo.core.data.Mutation;
 import org.apache.accumulo.core.data.TableId;
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.dataImpl.KeyExtent;
 import org.apache.accumulo.core.metadata.schema.MetadataSchema;
+import org.apache.accumulo.core.metadata.schema.MetadataTime;
 import org.apache.accumulo.fate.Repo;
 import org.apache.accumulo.fate.zookeeper.ZooLock;
 import org.apache.accumulo.master.Master;
@@ -78,7 +80,7 @@
   }
 
   private void writeSplitsToMetadataTable(ServerContext ctx, TableId tableId,
-      SortedSet<Text> splits, Map<Text,Text> data, char timeType, ZooLock lock, BatchWriter bw)
+      SortedSet<Text> splits, Map<Text,Text> data, TimeType timeType, ZooLock lock, BatchWriter bw)
       throws MutationsRejectedException {
     Text prevSplit = null;
     Value dirValue;
@@ -88,7 +90,7 @@
           (split == null) ? new Value(tableInfo.defaultTabletDir) : new Value(data.get(split));
       MetadataSchema.TabletsSection.ServerColumnFamily.DIRECTORY_COLUMN.put(mut, dirValue);
       MetadataSchema.TabletsSection.ServerColumnFamily.TIME_COLUMN.put(mut,
-          new Value(timeType + "0"));
+          new Value(new MetadataTime(0, timeType).encode()));
       MetadataTableUtil.putLockID(ctx, lock, mut);
       prevSplit = split;
       bw.addMutation(mut);
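
`MetadataTime` replaces the old single-character prefix convention (`timeType + "0"`) with an explicit value object whose `encode()` and `parse()` methods act as an inverse pair throughout this patch. A minimal round-trip sketch using only the methods visible here:

```java
// Sketch only: encode a zero logical time the way PopulateMetadata now does,
// then parse it back and recover the TimeType.
MetadataTime time = new MetadataTime(0, TimeType.LOGICAL);
String encoded = time.encode();          // the string stored in the time column
MetadataTime parsed = MetadataTime.parse(encoded);
assert parsed.getType() == TimeType.LOGICAL;
```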
diff --git a/server/master/src/main/java/org/apache/accumulo/master/upgrade/UpgradeCoordinator.java b/server/master/src/main/java/org/apache/accumulo/master/upgrade/UpgradeCoordinator.java
index 4e901b7..38d967d 100644
--- a/server/master/src/main/java/org/apache/accumulo/master/upgrade/UpgradeCoordinator.java
+++ b/server/master/src/main/java/org/apache/accumulo/master/upgrade/UpgradeCoordinator.java
@@ -28,8 +28,6 @@
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-import com.google.common.collect.ImmutableMap;
-
 import edu.umd.cs.findbugs.annotations.SuppressFBWarnings;
 
 public class UpgradeCoordinator {
@@ -40,8 +38,8 @@
   private boolean haveUpgradedZooKeeper = false;
   private boolean startedMetadataUpgrade = false;
   private int currentVersion;
-  private Map<Integer,Upgrader> upgraders =
-      ImmutableMap.of(ServerConstants.SHORTEN_RFILE_KEYS, new Upgrader8to9());
+  private Map<Integer,Upgrader> upgraders = Map.of(ServerConstants.SHORTEN_RFILE_KEYS,
+      new Upgrader8to9(), ServerConstants.CRYPTO_CHANGES, new Upgrader9to10());
 
   public UpgradeCoordinator(ServerContext ctx) {
     int currentVersion = ServerUtil.getAccumuloPersistentVersion(ctx.getVolumeManager());
@@ -60,8 +58,9 @@
   }
 
   public synchronized void upgradeZookeeper() {
-    if (haveUpgradedZooKeeper)
+    if (haveUpgradedZooKeeper) {
       throw new IllegalStateException("Only expect this method to be called once");
+    }
 
     try {
       if (currentVersion < ServerConstants.DATA_VERSION) {
@@ -80,8 +79,9 @@
   }
 
   public synchronized Future<Void> upgradeMetadata() {
-    if (startedMetadataUpgrade)
+    if (startedMetadataUpgrade) {
       throw new IllegalStateException("Only expect this method to be called once");
+    }
 
     if (!haveUpgradedZooKeeper) {
       throw new IllegalStateException("We should only attempt to upgrade"
diff --git a/server/master/src/main/java/org/apache/accumulo/master/upgrade/Upgrader9to10.java b/server/master/src/main/java/org/apache/accumulo/master/upgrade/Upgrader9to10.java
new file mode 100644
index 0000000..c349f8d
--- /dev/null
+++ b/server/master/src/main/java/org/apache/accumulo/master/upgrade/Upgrader9to10.java
@@ -0,0 +1,456 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.accumulo.master.upgrade;
+
+import static java.nio.charset.StandardCharsets.UTF_8;
+import static org.apache.accumulo.core.metadata.RootTable.ZROOT_TABLET;
+import static org.apache.accumulo.core.metadata.RootTable.ZROOT_TABLET_GC_CANDIDATES;
+import static org.apache.accumulo.fate.util.UtilWaitThread.sleepUninterruptibly;
+import static org.apache.accumulo.server.util.MetadataTableUtil.EMPTY_TEXT;
+
+import java.io.IOException;
+import java.io.UncheckedIOException;
+import java.nio.charset.StandardCharsets;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.TimeUnit;
+import java.util.stream.StreamSupport;
+
+import org.apache.accumulo.core.Constants;
+import org.apache.accumulo.core.client.AccumuloClient;
+import org.apache.accumulo.core.client.BatchWriter;
+import org.apache.accumulo.core.client.BatchWriterConfig;
+import org.apache.accumulo.core.client.MutationsRejectedException;
+import org.apache.accumulo.core.client.Scanner;
+import org.apache.accumulo.core.client.TableNotFoundException;
+import org.apache.accumulo.core.client.admin.TimeType;
+import org.apache.accumulo.core.data.Mutation;
+import org.apache.accumulo.core.data.Range;
+import org.apache.accumulo.core.data.Value;
+import org.apache.accumulo.core.file.FileOperations;
+import org.apache.accumulo.core.file.FileSKVIterator;
+import org.apache.accumulo.core.metadata.RootTable;
+import org.apache.accumulo.core.metadata.schema.Ample;
+import org.apache.accumulo.core.metadata.schema.DataFileValue;
+import org.apache.accumulo.core.metadata.schema.MetadataSchema;
+import org.apache.accumulo.core.metadata.schema.MetadataTime;
+import org.apache.accumulo.core.metadata.schema.RootTabletMetadata;
+import org.apache.accumulo.core.metadata.schema.TabletMetadata.LocationType;
+import org.apache.accumulo.core.security.Authorizations;
+import org.apache.accumulo.core.tabletserver.log.LogEntry;
+import org.apache.accumulo.core.util.HostAndPort;
+import org.apache.accumulo.fate.zookeeper.IZooReaderWriter;
+import org.apache.accumulo.fate.zookeeper.ZooUtil;
+import org.apache.accumulo.fate.zookeeper.ZooUtil.NodeExistsPolicy;
+import org.apache.accumulo.fate.zookeeper.ZooUtil.NodeMissingPolicy;
+import org.apache.accumulo.server.ServerContext;
+import org.apache.accumulo.server.fs.FileRef;
+import org.apache.accumulo.server.fs.VolumeManager;
+import org.apache.accumulo.server.master.state.TServerInstance;
+import org.apache.accumulo.server.metadata.RootGcCandidates;
+import org.apache.accumulo.server.metadata.ServerAmpleImpl;
+import org.apache.accumulo.server.metadata.TabletMutatorBase;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.zookeeper.KeeperException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import com.google.common.base.Preconditions;
+
+/**
+ * Handles upgrading from 2.0 to 2.1
+ */
+public class Upgrader9to10 implements Upgrader {
+
+  private static final Logger log = LoggerFactory.getLogger(Upgrader9to10.class);
+
+  public static final String ZROOT_TABLET_LOCATION = ZROOT_TABLET + "/location";
+  public static final String ZROOT_TABLET_FUTURE_LOCATION = ZROOT_TABLET + "/future_location";
+  public static final String ZROOT_TABLET_LAST_LOCATION = ZROOT_TABLET + "/lastlocation";
+  public static final String ZROOT_TABLET_WALOGS = ZROOT_TABLET + "/walogs";
+  public static final String ZROOT_TABLET_CURRENT_LOGS = ZROOT_TABLET + "/current_logs";
+  public static final String ZROOT_TABLET_PATH = ZROOT_TABLET + "/dir";
+  public static final Value UPGRADED = MetadataSchema.DeletesSection.SkewedKeyValue.NAME;
+  public static final String OLD_DELETE_PREFIX = "~del";
+
+  /**
+   * This percentage was taken from the SimpleGarbageCollector; if nothing else is going on during
+   * upgrade then it could be made larger.
+   */
+  static final float CANDIDATE_MEMORY_PERCENTAGE = 0.50f;
+
+  @Override
+  public void upgradeZookeeper(ServerContext ctx) {
+    upgradeRootTabletMetadata(ctx);
+  }
+
+  @Override
+  public void upgradeMetadata(ServerContext ctx) {
+    upgradeFileDeletes(ctx, Ample.DataLevel.METADATA);
+    upgradeFileDeletes(ctx, Ample.DataLevel.USER);
+
+  }
+
+  private void upgradeRootTabletMetadata(ServerContext ctx) {
+    String rootMetaSer = getFromZK(ctx, ZROOT_TABLET);
+
+    if (rootMetaSer.isEmpty()) {
+      String dir = getFromZK(ctx, ZROOT_TABLET_PATH);
+      List<LogEntry> logs = getRootLogEntries(ctx);
+
+      TServerInstance last = getLocation(ctx, ZROOT_TABLET_LAST_LOCATION);
+      TServerInstance future = getLocation(ctx, ZROOT_TABLET_FUTURE_LOCATION);
+      TServerInstance current = getLocation(ctx, ZROOT_TABLET_LOCATION);
+
+      UpgradeMutator tabletMutator = new UpgradeMutator(ctx);
+
+      tabletMutator.putPrevEndRow(RootTable.EXTENT.getPrevEndRow());
+
+      tabletMutator.putDir(dir);
+
+      if (last != null)
+        tabletMutator.putLocation(last, LocationType.LAST);
+
+      if (future != null)
+        tabletMutator.putLocation(future, LocationType.FUTURE);
+
+      if (current != null)
+        tabletMutator.putLocation(current, LocationType.CURRENT);
+
+      logs.forEach(tabletMutator::putWal);
+
+      Map<String,DataFileValue> files = cleanupRootTabletFiles(ctx.getVolumeManager(), dir);
+      files.forEach((path, dfv) -> tabletMutator.putFile(new FileRef(path), dfv));
+
+      tabletMutator.putTime(computeRootTabletTime(ctx, files.keySet()));
+
+      tabletMutator.mutate();
+    }
+
+    try {
+      ctx.getZooReaderWriter().putPersistentData(
+          ctx.getZooKeeperRoot() + ZROOT_TABLET_GC_CANDIDATES,
+          new RootGcCandidates().toJson().getBytes(UTF_8), NodeExistsPolicy.SKIP);
+    } catch (KeeperException | InterruptedException e) {
+      throw new RuntimeException(e);
+    }
+
+    // this operation must be idempotent, so deleting after updating is very important
+
+    delete(ctx, ZROOT_TABLET_CURRENT_LOGS);
+    delete(ctx, ZROOT_TABLET_FUTURE_LOCATION);
+    delete(ctx, ZROOT_TABLET_LAST_LOCATION);
+    delete(ctx, ZROOT_TABLET_LOCATION);
+    delete(ctx, ZROOT_TABLET_WALOGS);
+    delete(ctx, ZROOT_TABLET_PATH);
+  }
+
+  private static class UpgradeMutator extends TabletMutatorBase {
+
+    private ServerContext context;
+
+    UpgradeMutator(ServerContext context) {
+      super(context, RootTable.EXTENT);
+      this.context = context;
+    }
+
+    @Override
+    public void mutate() {
+      Mutation mutation = getMutation();
+
+      try {
+        context.getZooReaderWriter().mutate(context.getZooKeeperRoot() + RootTable.ZROOT_TABLET,
+            new byte[0], ZooUtil.PUBLIC, currVal -> {
+
+              // Earlier, it was checked that the root tablet metadata did not exist. However, that
+              // check does not handle race conditions. Race conditions are unexpected. This is a
+              // sanity check when making the update in ZK using compare and set. If this fails and
+              // it's not a bug, then it's likely some concurrency issue. For example, two masters
+              // concurrently running upgrade could cause this to fail.
+              Preconditions.checkState(currVal.length == 0,
+                  "Expected root tablet metadata to be empty!");
+
+              RootTabletMetadata rtm = new RootTabletMetadata();
+
+              rtm.update(mutation);
+
+              String json = rtm.toJson();
+
+              log.info("Upgrading root tablet metadata, writing following to ZK : \n {}", json);
+
+              return json.getBytes(UTF_8);
+            });
+      } catch (Exception e) {
+        throw new RuntimeException(e);
+      }
+
+    }
+
+  }
+
+  protected TServerInstance getLocation(ServerContext ctx, String relpath) {
+    String str = getFromZK(ctx, relpath);
+    if (str == null) {
+      return null;
+    }
+
+    String[] parts = str.split("[|]", 2);
+    HostAndPort address = HostAndPort.fromString(parts[0]);
+    if (parts.length > 1 && parts[1] != null && parts[1].length() > 0) {
+      return new TServerInstance(address, parts[1]);
+    } else {
+      // a 1.2 location specification: DO NOT WANT
+      return null;
+    }
+  }
+
+  static List<LogEntry> getRootLogEntries(ServerContext context) {
+
+    try {
+      ArrayList<LogEntry> result = new ArrayList<>();
+
+      IZooReaderWriter zoo = context.getZooReaderWriter();
+      String root = context.getZooKeeperRoot() + ZROOT_TABLET_WALOGS;
+      // there's a little race between getting the children and fetching
+      // the data. The log can be removed in between.
+      outer: while (true) {
+        result.clear();
+        for (String child : zoo.getChildren(root)) {
+          try {
+            LogEntry e = LogEntry.fromBytes(zoo.getData(root + "/" + child, null));
+            // upgrade from !0;!0<< -> +r<<
+            e = new LogEntry(RootTable.EXTENT, 0, e.server, e.filename);
+            result.add(e);
+          } catch (KeeperException.NoNodeException ex) {
+            // TODO I think this is a bug carried over from the original code; it probably meant
+            // to restart the while loop
+            continue outer;
+          }
+        }
+        break;
+      }
+
+      return result;
+    } catch (KeeperException | InterruptedException | IOException e) {
+      throw new RuntimeException(e);
+    }
+  }
+
+  private String getFromZK(ServerContext ctx, String relpath) {
+    try {
+      byte[] data = ctx.getZooReaderWriter().getData(ctx.getZooKeeperRoot() + relpath, null);
+      if (data == null)
+        return null;
+
+      return new String(data, StandardCharsets.UTF_8);
+    } catch (KeeperException | InterruptedException e) {
+      throw new RuntimeException(e);
+    }
+  }
+
+  private void delete(ServerContext ctx, String relpath) {
+    try {
+      ctx.getZooReaderWriter().recursiveDelete(ctx.getZooKeeperRoot() + relpath,
+          NodeMissingPolicy.SKIP);
+    } catch (KeeperException | InterruptedException e) {
+      throw new RuntimeException(e);
+    }
+  }
+
+  MetadataTime computeRootTabletTime(ServerContext context, Collection<String> goodPaths) {
+
+    try {
+      long rtime = Long.MIN_VALUE;
+      for (String good : goodPaths) {
+        Path path = new Path(good);
+
+        FileSystem ns = context.getVolumeManager().getVolumeByPath(path).getFileSystem();
+        long maxTime = -1;
+        try (FileSKVIterator reader = FileOperations.getInstance().newReaderBuilder()
+            .forFile(path.toString(), ns, ns.getConf(), context.getCryptoService())
+            .withTableConfiguration(
+                context.getServerConfFactory().getTableConfiguration(RootTable.ID))
+            .seekToBeginning().build()) {
+          while (reader.hasTop()) {
+            maxTime = Math.max(maxTime, reader.getTopKey().getTimestamp());
+            reader.next();
+          }
+        }
+        if (maxTime > rtime) {
+
+          rtime = maxTime;
+        }
+      }
+
+      if (rtime < 0) {
+        throw new IllegalStateException("Unexpected root tablet logical time " + rtime);
+      }
+
+      return new MetadataTime(rtime, TimeType.LOGICAL);
+    } catch (IOException e) {
+      throw new UncheckedIOException(e);
+    }
+  }
+
+  static Map<String,DataFileValue> cleanupRootTabletFiles(VolumeManager fs, String dir) {
+
+    try {
+      FileStatus[] files = fs.listStatus(new Path(dir));
+
+      Map<String,DataFileValue> goodFiles = new HashMap<>(files.length);
+
+      for (FileStatus file : files) {
+
+        String path = file.getPath().toString();
+        if (file.getPath().toUri().getScheme() == null) {
+          // Depending on HDFS behavior, listStatus may not return fully qualified paths;
+          // if so, files could silently be attributed to the default volume, so fail fast.
+          throw new IllegalArgumentException("Require fully qualified paths " + file.getPath());
+        }
+
+        String filename = file.getPath().getName();
+
+        // check for incomplete major compaction, this should only occur
+        // for root tablet
+        if (filename.startsWith("delete+")) {
+          String expectedCompactedFile =
+              path.substring(0, path.lastIndexOf("/delete+")) + "/" + filename.split("\\+")[1];
+          if (fs.exists(new Path(expectedCompactedFile))) {
+            // compaction finished, but deletion of the compacted files did not, so delete it now
+            if (!fs.deleteRecursively(file.getPath()))
+              log.warn("Delete of file: {} returned false", file.getPath());
+            continue;
+          }
+          // compaction did not finish, so put files back
+
+          // reset path and filename for rest of loop
+          filename = filename.split("\\+", 3)[2];
+          path = path.substring(0, path.lastIndexOf("/delete+")) + "/" + filename;
+          Path src = file.getPath();
+          Path dst = new Path(path);
+
+          if (!fs.rename(src, dst)) {
+            throw new IOException("Rename " + src + " to " + dst + " returned false ");
+          }
+        }
+
+        if (filename.endsWith("_tmp")) {
+          log.warn("cleaning up old tmp file: {}", path);
+          if (!fs.deleteRecursively(file.getPath()))
+          log.warn("Delete of tmp file: {} returned false", file.getPath());
+
+          continue;
+        }
+
+        if (!filename.startsWith(Constants.MAPFILE_EXTENSION + "_")
+            && !FileOperations.getValidExtensions().contains(filename.split("\\.")[1])) {
+          log.error("unknown file in tablet: {}", path);
+          continue;
+        }
+
+        goodFiles.put(path, new DataFileValue(file.getLen(), 0));
+      }
+
+      return goodFiles;
+    } catch (IOException e) {
+      throw new UncheckedIOException(e);
+    }
+  }
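
The recovery logic above hinges on the `delete+<compactName>+<origName>` marker convention: if the compaction output already exists, the marker file is residue to remove; otherwise the original file name is recovered from the marker's third field. A small illustrative sketch with hypothetical file names:

```java
// Sketch of the "delete+<compactName>+<origName>" marker convention.
public class DeleteMarkerExample {
  public static void main(String[] args) {
    String marker = "delete+C0005+A0003.rf";
    String[] parts = marker.split("\\+", 3);
    String compactName = parts[1];  // C0005: the compaction output to look for
    String originalName = parts[2]; // A0003.rf: name to restore if compaction was incomplete
    System.out.println(compactName + " / " + originalName);
  }
}
```
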
+
+  public void upgradeFileDeletes(ServerContext ctx, Ample.DataLevel level) {
+
+    String tableName = level.metaTable();
+    AccumuloClient c = ctx;
+
+    // find all deletes
+    try (BatchWriter writer = c.createBatchWriter(tableName, new BatchWriterConfig())) {
+      log.info("looking for candidates in table {}", tableName);
+      Iterator<String> oldCandidates = getOldCandidates(ctx, tableName);
+      int t = 0; // no waiting first time through
+      while (oldCandidates.hasNext()) {
+        // give it some time for memory to clean itself up if needed
+        sleepUninterruptibly(t, TimeUnit.SECONDS);
+        List<String> deletes = readCandidatesThatFitInMemory(oldCandidates);
+        log.info("found {} deletes to upgrade", deletes.size());
+        for (String olddelete : deletes) {
+          // create new formatted delete
+          log.trace("upgrading delete entry for {}", olddelete);
+          writer.addMutation(ServerAmpleImpl.createDeleteMutation(ctx, level.tableId(), olddelete));
+        }
+        writer.flush();
+        // if nothing was thrown then we're good, so mark all the old entries deleted
+        log.info("upgrade processing completed so delete old entries");
+        for (String olddelete : deletes) {
+          log.trace("deleting old entry for {}", olddelete);
+          writer.addMutation(deleteOldDeleteMutation(olddelete));
+        }
+        writer.flush();
+        t = 3;
+      }
+    } catch (TableNotFoundException | MutationsRejectedException e) {
+      throw new RuntimeException(e);
+    }
+  }
+
+  private Iterator<String> getOldCandidates(ServerContext ctx, String tableName)
+      throws TableNotFoundException {
+    Range range = MetadataSchema.DeletesSection.getRange();
+    Scanner scanner = ctx.createScanner(tableName, Authorizations.EMPTY);
+    scanner.setRange(range);
+    return StreamSupport.stream(scanner.spliterator(), false)
+        .filter(entry -> !entry.getValue().equals(UPGRADED))
+        .map(entry -> entry.getKey().getRow().toString().substring(OLD_DELETE_PREFIX.length()))
+        .iterator();
+  }
+
+  private List<String> readCandidatesThatFitInMemory(Iterator<String> candidates) {
+    List<String> result = new ArrayList<>();
+    // Always read at least one so that some progress is made even if memory
+    // is not freed quickly.
+    while (candidates.hasNext()) {
+      result.add(candidates.next());
+      if (almostOutOfMemory(Runtime.getRuntime()))
+        break;
+    }
+    return result;
+  }
+
+  private Mutation deleteOldDeleteMutation(final String delete) {
+    Mutation m = new Mutation(OLD_DELETE_PREFIX + delete);
+    m.putDelete(EMPTY_TEXT, EMPTY_TEXT);
+    return m;
+  }
+
+  private boolean almostOutOfMemory(Runtime runtime) {
+    if (runtime.totalMemory() - runtime.freeMemory()
+        > CANDIDATE_MEMORY_PERCENTAGE * runtime.maxMemory()) {
+      log.info("List of delete candidates has exceeded the memory"
+          + " threshold. Attempting to delete what has been gathered so far.");
+      return true;
+    } else
+      return false;
+  }
+
+}
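
upgradeFileDeletes drains the old candidates in memory-bounded batches, always taking at least one element so progress is guaranteed. A standalone sketch of that batching idiom follows; the 50% threshold is an assumption standing in for CANDIDATE_MEMORY_PERCENTAGE.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Sketch: drain an iterator into batches, cutting each batch when most of the
// JVM heap is in use, but always taking at least one element per batch.
public class BoundedBatchExample {
  static final double THRESHOLD = 0.5; // assumed fraction of max heap

  static List<String> nextBatch(Iterator<String> source) {
    List<String> batch = new ArrayList<>();
    Runtime rt = Runtime.getRuntime();
    while (source.hasNext()) {
      batch.add(source.next()); // take at least one element to guarantee progress
      if (rt.totalMemory() - rt.freeMemory() > THRESHOLD * rt.maxMemory())
        break;
    }
    return batch;
  }
}
```
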
diff --git a/server/master/src/main/java/org/apache/accumulo/master/util/FateAdmin.java b/server/master/src/main/java/org/apache/accumulo/master/util/FateAdmin.java
index 21e0850..a6a5d74 100644
--- a/server/master/src/main/java/org/apache/accumulo/master/util/FateAdmin.java
+++ b/server/master/src/main/java/org/apache/accumulo/master/util/FateAdmin.java
@@ -77,7 +77,7 @@
 
     AdminUtil<Master> admin = new AdminUtil<>();
 
-    try (ServerContext context = new ServerContext(new SiteConfiguration())) {
+    try (var context = new ServerContext(SiteConfiguration.auto())) {
       final String zkRoot = context.getZooKeeperRoot();
       String path = zkRoot + Constants.ZFATE;
       String masterPath = zkRoot + Constants.ZMASTER_LOCK;
diff --git a/server/master/src/test/java/org/apache/accumulo/master/metrics/ReplicationMetricsTest.java b/server/master/src/test/java/org/apache/accumulo/master/metrics/ReplicationMetricsTest.java
index 1592a90..0151dc3 100644
--- a/server/master/src/test/java/org/apache/accumulo/master/metrics/ReplicationMetricsTest.java
+++ b/server/master/src/test/java/org/apache/accumulo/master/metrics/ReplicationMetricsTest.java
@@ -17,6 +17,7 @@
 package org.apache.accumulo.master.metrics;
 
 import java.lang.reflect.Field;
+import java.util.Set;
 
 import org.apache.accumulo.master.Master;
 import org.apache.accumulo.server.ServerContext;
@@ -29,8 +30,6 @@
 import org.easymock.EasyMock;
 import org.junit.Test;
 
-import com.google.common.collect.ImmutableSet;
-
 public class ReplicationMetricsTest {
   private long currentTime = 1000L;
 
@@ -62,14 +61,14 @@
 
     // First call will initialize the map of paths to modification time
     EasyMock.expect(master.getContext()).andReturn(context).anyTimes();
-    EasyMock.expect(util.getPendingReplicationPaths()).andReturn(ImmutableSet.of(path1, path2));
+    EasyMock.expect(util.getPendingReplicationPaths()).andReturn(Set.of(path1, path2));
     EasyMock.expect(master.getFileSystem()).andReturn(fileSystem);
     EasyMock.expect(fileSystem.getFileStatus(path1)).andReturn(createStatus(100));
     EasyMock.expect(master.getFileSystem()).andReturn(fileSystem);
     EasyMock.expect(fileSystem.getFileStatus(path2)).andReturn(createStatus(200));
 
     // Second call will recognize the missing path1 and add the latency stat
-    EasyMock.expect(util.getPendingReplicationPaths()).andReturn(ImmutableSet.of(path2));
+    EasyMock.expect(util.getPendingReplicationPaths()).andReturn(Set.of(path2));
 
     // Expect a call to reset the min/max
     stat.resetMinMax();
diff --git a/server/master/src/test/java/org/apache/accumulo/master/state/RootTabletStateStoreTest.java b/server/master/src/test/java/org/apache/accumulo/master/state/RootTabletStateStoreTest.java
index c455742..5743676 100644
--- a/server/master/src/test/java/org/apache/accumulo/master/state/RootTabletStateStoreTest.java
+++ b/server/master/src/test/java/org/apache/accumulo/master/state/RootTabletStateStoreTest.java
@@ -16,141 +16,71 @@
  */
 package org.apache.accumulo.master.state;
 
-import static org.junit.Assert.assertArrayEquals;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertNull;
 import static org.junit.Assert.fail;
 
-import java.util.ArrayList;
-import java.util.Arrays;
+import java.nio.charset.StandardCharsets;
 import java.util.Collections;
-import java.util.HashSet;
 import java.util.List;
 
+import org.apache.accumulo.core.data.Mutation;
 import org.apache.accumulo.core.data.TableId;
 import org.apache.accumulo.core.dataImpl.KeyExtent;
 import org.apache.accumulo.core.metadata.RootTable;
+import org.apache.accumulo.core.metadata.schema.Ample;
+import org.apache.accumulo.core.metadata.schema.RootTabletMetadata;
+import org.apache.accumulo.core.metadata.schema.TabletMetadata;
+import org.apache.accumulo.core.metadata.schema.TabletMetadata.ColumnType;
 import org.apache.accumulo.core.util.HostAndPort;
 import org.apache.accumulo.server.master.state.Assignment;
-import org.apache.accumulo.server.master.state.DistributedStore;
 import org.apache.accumulo.server.master.state.DistributedStoreException;
 import org.apache.accumulo.server.master.state.TServerInstance;
 import org.apache.accumulo.server.master.state.TabletLocationState;
 import org.apache.accumulo.server.master.state.TabletLocationState.BadLocationStateException;
 import org.apache.accumulo.server.master.state.ZooTabletStateStore;
+import org.apache.accumulo.server.metadata.TabletMutatorBase;
 import org.junit.Test;
 
+import com.google.common.base.Preconditions;
+
 public class RootTabletStateStoreTest {
 
-  static class Node {
-    Node(String name) {
-      this.name = name;
-    }
+  private static class TestAmple implements Ample {
 
-    List<Node> children = new ArrayList<>();
-    String name;
-    byte[] value = {};
+    private String json =
+        new String(RootTabletMetadata.getInitialJson("/some/dir", "/some/dir/0000.rf"),
+            StandardCharsets.UTF_8);
 
-    Node find(String name) {
-      for (Node node : children)
-        if (node.name.equals(name))
-          return node;
-      return null;
-    }
-  }
-
-  static class FakeZooStore implements DistributedStore {
-
-    Node root = new Node("/");
-
-    private Node recurse(Node root, String[] path, int depth) {
-      if (depth == path.length)
-        return root;
-      Node child = root.find(path[depth]);
-      if (child == null)
-        return null;
-      return recurse(child, path, depth + 1);
-    }
-
-    private Node navigate(String path) {
-      path = path.replaceAll("/$", "");
-      return recurse(root, path.split("/"), 1);
+    @Override
+    public TabletMetadata readTablet(KeyExtent extent, ColumnType... colsToFetch) {
+      Preconditions.checkArgument(extent.equals(RootTable.EXTENT));
+      return RootTabletMetadata.fromJson(json).convertToTabletMetadata();
     }
 
     @Override
-    public List<String> getChildren(String path) {
-      Node node = navigate(path);
-      if (node == null)
-        return Collections.emptyList();
-      List<String> children = new ArrayList<>(node.children.size());
-      for (Node child : node.children)
-        children.add(child.name);
-      return children;
+    public TabletMutator mutateTablet(KeyExtent extent) {
+      Preconditions.checkArgument(extent.equals(RootTable.EXTENT));
+      return new TabletMutatorBase(null, extent) {
+
+        @Override
+        public void mutate() {
+          Mutation m = getMutation();
+
+          RootTabletMetadata rtm = RootTabletMetadata.fromJson(json);
+
+          rtm.update(m);
+
+          json = rtm.toJson();
+        }
+      };
     }
 
-    @Override
-    public void put(String path, byte[] bs) {
-      create(path).value = bs;
-    }
-
-    private Node create(String path) {
-      String[] parts = path.split("/");
-      return recurseCreate(root, parts, 1);
-    }
-
-    private Node recurseCreate(Node root, String[] path, int index) {
-      if (path.length == index)
-        return root;
-      Node node = root.find(path[index]);
-      if (node == null) {
-        node = new Node(path[index]);
-        root.children.add(node);
-      }
-      return recurseCreate(node, path, index + 1);
-    }
-
-    @Override
-    public void remove(String path) {
-      String[] parts = path.split("/");
-      String[] parentPath = Arrays.copyOf(parts, parts.length - 1);
-      Node parent = recurse(root, parentPath, 1);
-      if (parent == null)
-        return;
-      Node child = parent.find(parts[parts.length - 1]);
-      if (child != null)
-        parent.children.remove(child);
-    }
-
-    @Override
-    public byte[] get(String path) {
-      Node node = navigate(path);
-      if (node != null)
-        return node.value;
-      return null;
-    }
-  }
-
-  @Test
-  public void testFakeZoo() throws DistributedStoreException {
-    DistributedStore store = new FakeZooStore();
-    store.put("/a/b/c", "abc".getBytes());
-    byte[] abc = store.get("/a/b/c");
-    assertArrayEquals(abc, "abc".getBytes());
-    byte[] empty = store.get("/a/b");
-    assertArrayEquals(empty, "".getBytes());
-    store.put("/a/b", "ab".getBytes());
-    assertArrayEquals(store.get("/a/b"), "ab".getBytes());
-    store.put("/a/b/b", "abb".getBytes());
-    List<String> children = store.getChildren("/a/b");
-    assertEquals(new HashSet<>(children), new HashSet<>(Arrays.asList("b", "c")));
-    store.remove("/a/b/c");
-    children = store.getChildren("/a/b");
-    assertEquals(new HashSet<>(children), new HashSet<>(Arrays.asList("b")));
   }
 
   @Test
   public void testRootTabletStateStore() throws DistributedStoreException {
-    ZooTabletStateStore tstore = new ZooTabletStateStore(new FakeZooStore());
+    ZooTabletStateStore tstore = new ZooTabletStateStore(new TestAmple());
     KeyExtent root = RootTable.EXTENT;
     String sessionId = "this is my unique session data";
     TServerInstance server =
@@ -212,7 +142,4 @@
       fail("should not get here");
     } catch (IllegalArgumentException ex) {}
   }
-
-  // @Test
-  // public void testMetaDataStore() { } // see functional test
 }
diff --git a/server/master/src/test/java/org/apache/accumulo/master/tableOps/bulkVer2/PrepBulkImportTest.java b/server/master/src/test/java/org/apache/accumulo/master/tableOps/bulkVer2/PrepBulkImportTest.java
index 80d1615..4130293 100644
--- a/server/master/src/test/java/org/apache/accumulo/master/tableOps/bulkVer2/PrepBulkImportTest.java
+++ b/server/master/src/test/java/org/apache/accumulo/master/tableOps/bulkVer2/PrepBulkImportTest.java
@@ -36,7 +36,6 @@
 import org.apache.hadoop.io.Text;
 import org.junit.Test;
 
-import com.google.common.collect.ImmutableSet;
 import com.google.common.collect.Iterables;
 import com.google.common.collect.Sets;
 
@@ -67,7 +66,7 @@
   }
 
   Iterable<List<KeyExtent>> powerSet(KeyExtent... extents) {
-    Set<Set<KeyExtent>> powerSet = Sets.powerSet(ImmutableSet.copyOf(extents));
+    Set<Set<KeyExtent>> powerSet = Sets.powerSet(Set.copyOf(Arrays.asList(extents)));
 
     return Iterables.transform(powerSet, set -> {
       List<KeyExtent> list = new ArrayList<>(set);
@@ -139,7 +138,7 @@
       }
 
       List<String> requiredRows = Arrays.asList("b", "m", "r", "v");
-      for (Set<String> otherRows : Sets.powerSet(ImmutableSet.of("a", "c", "q", "t", "x"))) {
+      for (Set<String> otherRows : Sets.powerSet(Set.of("a", "c", "q", "t", "x"))) {
         runTest(loadRanges, createExtents(Iterables.concat(requiredRows, otherRows)));
       }
     }
@@ -168,14 +167,14 @@
         Set<String> rows2 = new HashSet<>(rows);
         rows2.remove(row);
        // test with all but one of the rows in the load mapping
-        for (Set<String> otherRows : Sets.powerSet(ImmutableSet.of("a", "c", "q", "t", "x"))) {
+        for (Set<String> otherRows : Sets.powerSet(Set.of("a", "c", "q", "t", "x"))) {
           runExceptionTest(loadRanges, createExtents(Iterables.concat(rows2, otherRows)));
         }
       }
 
       if (rows.size() > 1) {
         // test with none of the rows in the load mapping
-        for (Set<String> otherRows : Sets.powerSet(ImmutableSet.of("a", "c", "q", "t", "x"))) {
+        for (Set<String> otherRows : Sets.powerSet(Set.of("a", "c", "q", "t", "x"))) {
           runExceptionTest(loadRanges, createExtents(otherRows));
         }
       }
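
These test changes swap Guava's ImmutableSet for the JDK 9+ collection factories. One behavioral caveat worth noting, sketched below: Set.of rejects duplicate arguments outright, while Set.copyOf silently collapses duplicates; both reject null elements.

```java
import java.util.Arrays;
import java.util.Set;

// Sketch of the JDK collection factories used in the migrated tests.
public class SetFactoryExample {
  public static void main(String[] args) {
    Set<String> s = Set.of("a", "c", "q");                   // fixed elements
    Set<String> copy = Set.copyOf(Arrays.asList("a", "a"));  // duplicates collapsed on copy
    System.out.println(s.size() + " " + copy.size());        // prints: 3 1
    // Set.of("a", "a") would throw IllegalArgumentException
    // Set.of("a", null) would throw NullPointerException
  }
}
```
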
diff --git a/server/tserver/src/test/java/org/apache/accumulo/tserver/tablet/RootFilesTest.java b/server/master/src/test/java/org/apache/accumulo/master/upgrade/RootFilesUpgradeTest.java
similarity index 73%
rename from server/tserver/src/test/java/org/apache/accumulo/tserver/tablet/RootFilesTest.java
rename to server/master/src/test/java/org/apache/accumulo/master/upgrade/RootFilesUpgradeTest.java
index 3730306..686a12e 100644
--- a/server/tserver/src/test/java/org/apache/accumulo/tserver/tablet/RootFilesTest.java
+++ b/server/master/src/test/java/org/apache/accumulo/master/upgrade/RootFilesUpgradeTest.java
@@ -14,7 +14,7 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.accumulo.tserver.tablet;
+package org.apache.accumulo.master.upgrade;
 
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertTrue;
@@ -42,12 +42,18 @@
 import edu.umd.cs.findbugs.annotations.SuppressFBWarnings;
 
 @SuppressFBWarnings(value = "PATH_TRAVERSAL_IN", justification = "paths not set by user input")
-public class RootFilesTest {
+public class RootFilesUpgradeTest {
 
   @Rule
   public TemporaryFolder tempFolder =
       new TemporaryFolder(new File(System.getProperty("user.dir") + "/target"));
 
+  static void rename(VolumeManager fs, Path src, Path dst) throws IOException {
+    if (!fs.rename(src, dst)) {
+      throw new IOException("Rename " + src + " to " + dst + " returned false ");
+    }
+  }
+
   private class TestWrapper {
     File rootTabletDir;
     Set<FileRef> oldDatafiles;
@@ -57,6 +63,35 @@
     VolumeManager vm;
     AccumuloConfiguration conf;
 
+    public void prepareReplacement(VolumeManager fs, Path location, Set<FileRef> oldDatafiles,
+        String compactName) throws IOException {
+      for (FileRef ref : oldDatafiles) {
+        Path path = ref.path();
+        rename(fs, path, new Path(location + "/delete+" + compactName + "+" + path.getName()));
+      }
+    }
+
+    public void renameReplacement(VolumeManager fs, FileRef tmpDatafile, FileRef newDatafile)
+        throws IOException {
+      if (fs.exists(newDatafile.path())) {
+        throw new IllegalStateException("Target map file already exists " + newDatafile);
+      }
+
+      rename(fs, tmpDatafile.path(), newDatafile.path());
+    }
+
+    public void finishReplacement(AccumuloConfiguration acuTableConf, VolumeManager fs,
+        Path location, Set<FileRef> oldDatafiles, String compactName) throws IOException {
+      // start deleting files; if we do not finish, they will be cleaned up later
+      for (FileRef ref : oldDatafiles) {
+        Path path = ref.path();
+        Path deleteFile = new Path(location + "/delete+" + compactName + "+" + path.getName());
+        if (acuTableConf.getBoolean(Property.GC_TRASH_IGNORE) || !fs.moveToTrash(deleteFile))
+          fs.deleteRecursively(deleteFile);
+      }
+    }
+
     TestWrapper(VolumeManager vm, AccumuloConfiguration conf, String compactName,
         String... inputFiles) throws IOException {
       this.vm = vm;
@@ -81,21 +116,20 @@
     }
 
     void prepareReplacement() throws IOException {
-      RootFiles.prepareReplacement(vm, new Path(rootTabletDir.toURI()), oldDatafiles, compactName);
+      prepareReplacement(vm, new Path(rootTabletDir.toURI()), oldDatafiles, compactName);
     }
 
     void renameReplacement() throws IOException {
-      RootFiles.renameReplacement(vm, tmpDatafile, newDatafile);
+      renameReplacement(vm, tmpDatafile, newDatafile);
     }
 
     public void finishReplacement() throws IOException {
-      RootFiles.finishReplacement(conf, vm, new Path(rootTabletDir.toURI()), oldDatafiles,
-          compactName);
+      finishReplacement(conf, vm, new Path(rootTabletDir.toURI()), oldDatafiles, compactName);
     }
 
     public Collection<String> cleanupReplacement(String... expectedFiles) throws IOException {
       Collection<String> ret =
-          RootFiles.cleanupReplacement(vm, vm.listStatus(new Path(rootTabletDir.toURI())), true);
+          Upgrader9to10.cleanupRootTabletFiles(vm, rootTabletDir.toString()).keySet();
 
       HashSet<String> expected = new HashSet<>();
       for (String efile : expectedFiles)
diff --git a/server/monitor/pom.xml b/server/monitor/pom.xml
index 01d44cd..f68c390 100644
--- a/server/monitor/pom.xml
+++ b/server/monitor/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.accumulo</groupId>
     <artifactId>accumulo-project</artifactId>
-    <version>2.0.1-SNAPSHOT</version>
+    <version>2.1.0-SNAPSHOT</version>
     <relativePath>../../pom.xml</relativePath>
   </parent>
   <artifactId>accumulo-monitor</artifactId>
@@ -73,10 +73,6 @@
       <artifactId>accumulo-tracer</artifactId>
     </dependency>
     <dependency>
-      <groupId>org.apache.commons</groupId>
-      <artifactId>commons-lang3</artifactId>
-    </dependency>
-    <dependency>
       <groupId>org.apache.hadoop</groupId>
       <artifactId>hadoop-client-api</artifactId>
     </dependency>
diff --git a/server/monitor/src/main/java/org/apache/accumulo/monitor/EmbeddedWebServer.java b/server/monitor/src/main/java/org/apache/accumulo/monitor/EmbeddedWebServer.java
index e96e1f9..6eafdff 100644
--- a/server/monitor/src/main/java/org/apache/accumulo/monitor/EmbeddedWebServer.java
+++ b/server/monitor/src/main/java/org/apache/accumulo/monitor/EmbeddedWebServer.java
@@ -20,7 +20,6 @@
 
 import org.apache.accumulo.core.conf.AccumuloConfiguration;
 import org.apache.accumulo.core.conf.Property;
-import org.apache.commons.lang3.StringUtils;
 import org.eclipse.jetty.server.AbstractConnectionFactory;
 import org.eclipse.jetty.server.HttpConnectionFactory;
 import org.eclipse.jetty.server.Server;
@@ -80,17 +79,17 @@
 
       final String includedCiphers = conf.get(Property.MONITOR_SSL_INCLUDE_CIPHERS);
       if (!Property.MONITOR_SSL_INCLUDE_CIPHERS.getDefaultValue().equals(includedCiphers)) {
-        sslContextFactory.setIncludeCipherSuites(StringUtils.split(includedCiphers, ','));
+        sslContextFactory.setIncludeCipherSuites(includedCiphers.split(","));
       }
 
       final String excludedCiphers = conf.get(Property.MONITOR_SSL_EXCLUDE_CIPHERS);
       if (!Property.MONITOR_SSL_EXCLUDE_CIPHERS.getDefaultValue().equals(excludedCiphers)) {
-        sslContextFactory.setExcludeCipherSuites(StringUtils.split(excludedCiphers, ','));
+        sslContextFactory.setExcludeCipherSuites(excludedCiphers.split(","));
       }
 
       final String includeProtocols = conf.get(Property.MONITOR_SSL_INCLUDE_PROTOCOLS);
       if (includeProtocols != null && !includeProtocols.isEmpty()) {
-        sslContextFactory.setIncludeProtocols(StringUtils.split(includeProtocols, ','));
+        sslContextFactory.setIncludeProtocols(includeProtocols.split(","));
       }
 
       SslConnectionFactory sslFactory =
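
Dropping commons-lang3 here replaces StringUtils.split with String.split, which differs at the edges: it keeps interior empty tokens, drops trailing ones, and does not trim whitespace around entries. For simple comma-separated cipher and protocol lists the two should behave the same, as this sketch illustrates:

```java
import java.util.Arrays;

// Sketch of String.split edge cases relative to the removed commons-lang3 helper.
public class SplitExample {
  public static void main(String[] args) {
    System.out.println(Arrays.toString("TLSv1.2,TLSv1.3".split(","))); // [TLSv1.2, TLSv1.3]
    System.out.println(Arrays.toString("a,,b".split(",")));            // [a, , b]
    System.out.println(Arrays.toString("a,b,".split(",")));            // [a, b]
  }
}
```
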
diff --git a/server/monitor/src/main/java/org/apache/accumulo/monitor/rest/logs/LogResource.java b/server/monitor/src/main/java/org/apache/accumulo/monitor/rest/logs/LogResource.java
index 490193f..8e0a31d 100644
--- a/server/monitor/src/main/java/org/apache/accumulo/monitor/rest/logs/LogResource.java
+++ b/server/monitor/src/main/java/org/apache/accumulo/monitor/rest/logs/LogResource.java
@@ -27,7 +27,6 @@
 
 import org.apache.accumulo.server.monitor.DedupedLogEvent;
 import org.apache.accumulo.server.monitor.LogService;
-import org.apache.commons.lang3.StringUtils;
 import org.apache.log4j.spi.LoggingEvent;
 
 /**
@@ -56,7 +55,9 @@
         application = "";
       String msg = ev.getMessage().toString();
       // truncate if full hadoop errors get logged as a message
-      msg = StringUtils.abbreviate(sanitize(msg), 300);
+      msg = sanitize(msg);
+      if (msg.length() > 300)
+        msg = msg.substring(0, 300);
 
       String[] stacktrace = ev.getThrowableStrRep();
       if (stacktrace != null)
diff --git a/server/monitor/src/main/java/org/apache/accumulo/monitor/rest/problems/ProblemsResource.java b/server/monitor/src/main/java/org/apache/accumulo/monitor/rest/problems/ProblemsResource.java
index 8ffe0ad..60be57b 100644
--- a/server/monitor/src/main/java/org/apache/accumulo/monitor/rest/problems/ProblemsResource.java
+++ b/server/monitor/src/main/java/org/apache/accumulo/monitor/rest/problems/ProblemsResource.java
@@ -42,7 +42,6 @@
 import org.apache.accumulo.server.problems.ProblemReport;
 import org.apache.accumulo.server.problems.ProblemReports;
 import org.apache.accumulo.server.problems.ProblemType;
-import org.apache.commons.lang3.StringUtils;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -109,7 +108,7 @@
       ProblemReports.getInstance(monitor.getContext()).deleteProblemReports(TableId.of(tableID));
     } catch (Exception e) {
       log.error("Failed to delete problem reports for table "
-          + (StringUtils.isEmpty(tableID) ? StringUtils.EMPTY : sanitize(tableID)), e);
+          + (tableID.isEmpty() ? "" : sanitize(tableID)), e);
     }
   }
 
@@ -169,7 +168,7 @@
           ProblemType.valueOf(ptype), resource);
     } catch (Exception e) {
       log.error("Failed to delete problem reports for table "
-          + (StringUtils.isBlank(tableID) ? "" : sanitize(tableID)), e);
+          + (tableID.isBlank() ? "" : sanitize(tableID)), e);
     }
   }
 
diff --git a/server/monitor/src/main/java/org/apache/accumulo/monitor/rest/tables/TablesResource.java b/server/monitor/src/main/java/org/apache/accumulo/monitor/rest/tables/TablesResource.java
index 2b02804..d9f2e05 100644
--- a/server/monitor/src/main/java/org/apache/accumulo/monitor/rest/tables/TablesResource.java
+++ b/server/monitor/src/main/java/org/apache/accumulo/monitor/rest/tables/TablesResource.java
@@ -51,7 +51,6 @@
 import org.apache.accumulo.server.master.state.TabletLocationState;
 import org.apache.accumulo.server.tables.TableManager;
 import org.apache.accumulo.server.util.TableInfoUtil;
-import org.apache.commons.lang3.StringUtils;
 import org.apache.hadoop.io.Text;
 
 /**
@@ -138,7 +137,7 @@
 
     TabletServers tabletServers = new TabletServers(mmi.tServerInfo.size());
 
-    if (StringUtils.isBlank(tableIdStr)) {
+    if (tableIdStr.isBlank()) {
       return tabletServers;
     }
 
diff --git a/server/monitor/src/main/java/org/apache/accumulo/monitor/rest/trace/TracesResource.java b/server/monitor/src/main/java/org/apache/accumulo/monitor/rest/trace/TracesResource.java
index 876314d..d147d9f 100644
--- a/server/monitor/src/main/java/org/apache/accumulo/monitor/rest/trace/TracesResource.java
+++ b/server/monitor/src/main/java/org/apache/accumulo/monitor/rest/trace/TracesResource.java
@@ -64,7 +64,6 @@
 import org.apache.accumulo.tracer.TraceFormatter;
 import org.apache.accumulo.tracer.thrift.Annotation;
 import org.apache.accumulo.tracer.thrift.RemoteSpan;
-import org.apache.commons.lang3.StringUtils;
 import org.apache.hadoop.io.Text;
 import org.apache.hadoop.security.UserGroupInformation;
 
@@ -280,10 +279,6 @@
     long startTime = endTime - millisSince;
 
     String startHexTime = Long.toHexString(startTime), endHexTime = Long.toHexString(endTime);
-    if (startHexTime.length() < endHexTime.length()) {
-      StringUtils.leftPad(startHexTime, endHexTime.length(), '0');
-    }
-
     return new Range(new Text("start:" + startHexTime), new Text("start:" + endHexTime));
   }
 
diff --git a/server/monitor/src/main/java/org/apache/accumulo/monitor/util/AccumuloMonitorAppender.java b/server/monitor/src/main/java/org/apache/accumulo/monitor/util/AccumuloMonitorAppender.java
index da37c12..d8c582d 100644
--- a/server/monitor/src/main/java/org/apache/accumulo/monitor/util/AccumuloMonitorAppender.java
+++ b/server/monitor/src/main/java/org/apache/accumulo/monitor/util/AccumuloMonitorAppender.java
@@ -145,7 +145,7 @@
     public MonitorLocation get() {
       // lazily set up path and zooCache (see comment in constructor)
       if (this.context == null) {
-        this.context = new ServerContext(new SiteConfiguration());
+        this.context = new ServerContext(SiteConfiguration.auto());
         this.path = context.getZooKeeperRoot() + Constants.ZMONITOR_LOG4J_ADDR;
         this.zooCache = context.getZooCache();
       }
diff --git a/server/monitor/src/main/java/org/apache/accumulo/monitor/view/WebViews.java b/server/monitor/src/main/java/org/apache/accumulo/monitor/view/WebViews.java
index df874ba..9e786b7 100644
--- a/server/monitor/src/main/java/org/apache/accumulo/monitor/view/WebViews.java
+++ b/server/monitor/src/main/java/org/apache/accumulo/monitor/view/WebViews.java
@@ -21,8 +21,6 @@
 import static org.apache.accumulo.monitor.util.ParameterValidator.ALPHA_NUM_REGEX_TABLE_ID;
 import static org.apache.accumulo.monitor.util.ParameterValidator.HOSTNAME_PORT_REGEX;
 import static org.apache.accumulo.monitor.util.ParameterValidator.RESOURCE_REGEX;
-import static org.apache.commons.lang3.StringUtils.isEmpty;
-import static org.apache.commons.lang3.StringUtils.isNotBlank;
 
 import java.io.IOException;
 import java.util.ArrayList;
@@ -80,7 +78,7 @@
   private void addExternalResources(Map<String,Object> model) {
     AccumuloConfiguration conf = monitor.getContext().getConfiguration();
     String resourcesProperty = conf.get(Property.MONITOR_RESOURCES_EXTERNAL);
-    if (isEmpty(resourcesProperty)) {
+    if (resourcesProperty.isBlank()) {
       return;
     }
     List<String> monitorResources = new ArrayList<>();
@@ -161,7 +159,7 @@
 
     Map<String,Object> model = getModel();
     model.put("title", "Tablet Server Status");
-    if (isNotBlank(server)) {
+    if (server != null && !server.isBlank()) {
       model.put("template", "server.ftl");
       model.put("js", "server.js");
       model.put("server", server);
@@ -379,7 +377,7 @@
     model.put("template", "problems.ftl");
     model.put("js", "problems.js");
 
-    if (isNotBlank(table)) {
+    if (table != null && !table.isBlank()) {
       model.put("table", table);
     }
 
diff --git a/server/native/pom.xml b/server/native/pom.xml
index 0042387..fd230a2 100644
--- a/server/native/pom.xml
+++ b/server/native/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.accumulo</groupId>
     <artifactId>accumulo-project</artifactId>
-    <version>2.0.1-SNAPSHOT</version>
+    <version>2.1.0-SNAPSHOT</version>
     <relativePath>../../pom.xml</relativePath>
   </parent>
   <artifactId>accumulo-native</artifactId>
diff --git a/server/tracer/pom.xml b/server/tracer/pom.xml
index c54a4fe..3ee5ea6 100644
--- a/server/tracer/pom.xml
+++ b/server/tracer/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.accumulo</groupId>
     <artifactId>accumulo-project</artifactId>
-    <version>2.0.1-SNAPSHOT</version>
+    <version>2.1.0-SNAPSHOT</version>
     <relativePath>../../pom.xml</relativePath>
   </parent>
   <artifactId>accumulo-tracer</artifactId>
diff --git a/server/tracer/src/main/java/org/apache/accumulo/tracer/TraceServer.java b/server/tracer/src/main/java/org/apache/accumulo/tracer/TraceServer.java
index fd6b02a..d029b53 100644
--- a/server/tracer/src/main/java/org/apache/accumulo/tracer/TraceServer.java
+++ b/server/tracer/src/main/java/org/apache/accumulo/tracer/TraceServer.java
@@ -233,18 +233,13 @@
    * misconfigurations that aren't likely to change on retry).
    *
    * @return a working Connection that can be reused
-   * @throws ClassNotFoundException
-   *           if TRACE_TOKEN_TYPE is set to a class that we can't load.
-   * @throws InstantiationException
+   * @throws ReflectiveOperationException
    *           if we fail to create an instance of TRACE_TOKEN_TYPE.
-   * @throws IllegalAccessException
-   *           if the class pointed to by TRACE_TOKEN_TYPE is private.
    * @throws AccumuloSecurityException
    *           if the trace user has the wrong permissions
    */
   private AccumuloClient ensureTraceTableExists(final AccumuloConfiguration conf)
-      throws AccumuloSecurityException, ClassNotFoundException, InstantiationException,
-      IllegalAccessException {
+      throws AccumuloSecurityException, ReflectiveOperationException {
     AccumuloClient accumuloClient = null;
     while (true) {
       try {
@@ -266,7 +261,7 @@
           Properties props = new Properties();
           AuthenticationToken token =
               AccumuloVFSClassLoader.getClassLoader().loadClass(conf.get(Property.TRACE_TOKEN_TYPE))
-                  .asSubclass(AuthenticationToken.class).newInstance();
+                  .asSubclass(AuthenticationToken.class).getDeclaredConstructor().newInstance();
 
           int prefixLength = Property.TRACE_TOKEN_PROPERTY_PREFIX.getKey().length();
           for (Entry<String,String> entry : loginMap.entrySet()) {
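
The deprecated Class.newInstance() is replaced here by an explicit no-arg constructor lookup, which lets every reflection failure surface under the common supertype ReflectiveOperationException. A minimal sketch of the pattern, with StringBuilder standing in for the configured token class:

```java
// Sketch: load a class by name and instantiate it via its no-arg constructor.
public class ReflectiveExample {
  public static void main(String[] args) throws ReflectiveOperationException {
    Object token = Class.forName("java.lang.StringBuilder")
        .getDeclaredConstructor().newInstance();
    System.out.println(token.getClass().getName());
  }
}
```
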
diff --git a/server/tserver/pom.xml b/server/tserver/pom.xml
index 758d487..630d018 100644
--- a/server/tserver/pom.xml
+++ b/server/tserver/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.accumulo</groupId>
     <artifactId>accumulo-project</artifactId>
-    <version>2.0.1-SNAPSHOT</version>
+    <version>2.1.0-SNAPSHOT</version>
     <relativePath>../../pom.xml</relativePath>
   </parent>
   <artifactId>accumulo-tserver</artifactId>
diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/NativeMap.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/NativeMap.java
index 45bbee6..1c8f2b4 100644
--- a/server/tserver/src/main/java/org/apache/accumulo/tserver/NativeMap.java
+++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/NativeMap.java
@@ -17,6 +17,7 @@
 package org.apache.accumulo.tserver;
 
 import java.io.File;
+import java.lang.ref.Cleaner.Cleanable;
 import java.util.AbstractMap.SimpleImmutableEntry;
 import java.util.ArrayList;
 import java.util.Collection;
@@ -28,6 +29,7 @@
 import java.util.Map.Entry;
 import java.util.NoSuchElementException;
 import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.concurrent.atomic.AtomicLong;
 import java.util.concurrent.locks.Lock;
 import java.util.concurrent.locks.ReadWriteLock;
 import java.util.concurrent.locks.ReentrantReadWriteLock;
@@ -179,7 +181,7 @@
     return false;
   }
 
-  private long nmPointer;
+  private final AtomicLong nmPtr = new AtomicLong(0);
 
   private final ReadWriteLock rwLock;
   private final Lock rlock;
@@ -248,6 +250,12 @@
     }
   }
 
+  // package private visibility for NativeMapCleanerUtil use,
+  // without affecting ABI of existing native interface
+  static void _deleteNativeMap(long nmPtr) {
+    deleteNativeMap(nmPtr);
+  }
+
   private static native long createNMI(long nmp, int[] fieldLens);
 
   private static native long createNMI(long nmp, byte[] row, byte[] cf, byte[] cq, byte[] cv,
@@ -262,6 +270,12 @@
 
   private static native void deleteNMI(long nmiPointer);
 
+  // package private visibility for NativeMapCleanerUtil use,
+  // without affecting ABI of existing native interface
+  static void _deleteNMI(long nmiPointer) {
+    deleteNMI(nmiPointer);
+  }
+
   private class ConcurrentIterator implements Iterator<Map.Entry<Key,Value>> {
 
     // in order to get good performance when there are multiple threads reading, need to read a lot
@@ -363,11 +377,6 @@
       return ret;
     }
 
-    @Override
-    public void remove() {
-      throw new UnsupportedOperationException();
-    }
-
     public void delete() {
       source.delete();
     }
@@ -385,40 +394,42 @@
      *
      */
 
-    private long nmiPointer;
+    private final AtomicLong nmiPtr = new AtomicLong(0);
     private boolean hasNext;
     private int expectedModCount;
     private int[] fieldsLens = new int[7];
     private byte[] lastRow;
+    private final Cleanable cleanableNMI;
 
     // it is assumed the read lock is held when this method is called
     NMIterator(Key key) {
 
-      if (nmPointer == 0) {
-        throw new IllegalStateException();
-      }
+      final long nmPointer = nmPtr.get();
+      checkDeletedNM(nmPointer);
 
       expectedModCount = modCount;
 
-      nmiPointer = createNMI(nmPointer, key.getRowData().toArray(),
+      final long nmiPointer = createNMI(nmPointer, key.getRowData().toArray(),
           key.getColumnFamilyData().toArray(), key.getColumnQualifierData().toArray(),
           key.getColumnVisibilityData().toArray(), key.getTimestamp(), key.isDeleted(), fieldsLens);
 
       hasNext = nmiPointer != 0;
+
+      nmiPtr.set(nmiPointer);
+      cleanableNMI = NativeMapCleanerUtil.deleteNMIterator(this, nmiPtr);
     }
 
    // delete is synchronized on a per-iterator basis to ensure only one thread
    // deletes an iterator without acquiring the global write lock; there is no
    // contention among concurrent readers for deleting their iterators
     public synchronized void delete() {
-      if (nmiPointer == 0) {
-        return;
+      final long nmiPointer = nmiPtr.getAndSet(0);
+      if (nmiPointer != 0) {
+        // invoke the cleanable to deregister it; its action is a no-op
+        // because it checks nmiPtr first, which is now 0
+        cleanableNMI.clean();
+        deleteNMI(nmiPointer);
       }
-
-      // log.debug("Deleting native map iterator pointer");
-
-      deleteNMI(nmiPointer);
-      nmiPointer = 0;
     }
 
     @Override
@@ -429,10 +440,7 @@
     // it is assumed the read lock is held when this method is called
     // this method only needs to be called once per read lock acquisition
     private void doNextPreCheck() {
-      if (nmPointer == 0) {
-        throw new IllegalStateException();
-      }
-
+      checkDeletedNM(nmPtr.get());
       if (modCount != expectedModCount) {
         throw new ConcurrentModificationException();
       }
@@ -448,6 +456,7 @@
         throw new NoSuchElementException();
       }
 
+      final long nmiPointer = nmiPtr.get();
       if (nmiPointer == 0) {
         throw new IllegalStateException("Native Map Iterator Deleted");
       }
@@ -475,41 +484,28 @@
       return new SimpleImmutableEntry<>(k, v);
     }
 
-    @Override
-    public void remove() {
-      throw new UnsupportedOperationException();
-    }
-
-    @Override
-    protected void finalize() throws Throwable {
-      super.finalize();
-      if (nmiPointer != 0) {
-        // log.debug("Deleting native map iterator pointer in finalize");
-        deleteNMI(nmiPointer);
-      }
-    }
-
   }
 
+  private final Cleanable cleanableNM;
+
   public NativeMap() {
-    nmPointer = createNativeMap();
+    final long nmPointer = createNativeMap();
+    nmPtr.set(nmPointer);
+    cleanableNM = NativeMapCleanerUtil.deleteNM(this, log, nmPtr);
     rwLock = new ReentrantReadWriteLock();
     rlock = rwLock.readLock();
     wlock = rwLock.writeLock();
     log.debug(String.format("Allocated native map 0x%016x", nmPointer));
   }
 
-  @Override
-  protected void finalize() throws Throwable {
-    super.finalize();
-    if (nmPointer != 0) {
-      log.warn(String.format("Deallocating native map 0x%016x in finalize", nmPointer));
-      deleteNativeMap(nmPointer);
+  private static void checkDeletedNM(final long nmPointer) {
+    if (nmPointer == 0) {
+      throw new IllegalStateException("Native Map Deleted");
     }
   }
 
-  private int _mutate(Mutation mutation, int mutationCount) {
-
+  // assumes wlock
+  private int _mutate(final long nmPointer, Mutation mutation, int mutationCount) {
     List<ColumnUpdate> updates = mutation.getUpdates();
     if (updates.size() == 1) {
       ColumnUpdate update = updates.get(0);
@@ -534,16 +530,15 @@
 
       wlock.lock();
       try {
-        if (nmPointer == 0) {
-          throw new IllegalStateException("Native Map Deleted");
-        }
+        final long nmPointer = nmPtr.get();
+        checkDeletedNM(nmPointer);
 
         modCount++;
 
         int count = 0;
         while (iter.hasNext() && count < 10) {
           Mutation mutation = iter.next();
-          mutationCount = _mutate(mutation, mutationCount);
+          mutationCount = _mutate(nmPointer, mutation, mutationCount);
           count += mutation.size();
         }
       } finally {
@@ -556,9 +551,8 @@
   public void put(Key key, Value value) {
     wlock.lock();
     try {
-      if (nmPointer == 0) {
-        throw new IllegalStateException("Native Map Deleted");
-      }
+      final long nmPointer = nmPtr.get();
+      checkDeletedNM(nmPointer);
 
       modCount++;
 
@@ -593,10 +587,8 @@
   public int size() {
     rlock.lock();
     try {
-      if (nmPointer == 0) {
-        throw new IllegalStateException("Native Map Deleted");
-      }
-
+      final long nmPointer = nmPtr.get();
+      checkDeletedNM(nmPointer);
       return sizeNM(nmPointer);
     } finally {
       rlock.unlock();
@@ -606,10 +598,8 @@
   public long getMemoryUsed() {
     rlock.lock();
     try {
-      if (nmPointer == 0) {
-        throw new IllegalStateException("Native Map Deleted");
-      }
-
+      final long nmPointer = nmPtr.get();
+      checkDeletedNM(nmPointer);
       return memoryUsedNM(nmPointer);
     } finally {
       rlock.unlock();
@@ -620,10 +610,8 @@
   public Iterator<Map.Entry<Key,Value>> iterator() {
     rlock.lock();
     try {
-      if (nmPointer == 0) {
-        throw new IllegalStateException("Native Map Deleted");
-      }
-
+      final long nmPointer = nmPtr.get();
+      checkDeletedNM(nmPointer);
       return new ConcurrentIterator();
     } finally {
       rlock.unlock();
@@ -633,11 +621,8 @@
   public Iterator<Map.Entry<Key,Value>> iterator(Key startKey) {
     rlock.lock();
     try {
-
-      if (nmPointer == 0) {
-        throw new IllegalStateException("Native Map Deleted");
-      }
-
+      final long nmPointer = nmPtr.get();
+      checkDeletedNM(nmPointer);
       return new ConcurrentIterator(startKey);
     } finally {
       rlock.unlock();
@@ -647,13 +632,13 @@
   public void delete() {
     wlock.lock();
     try {
-      if (nmPointer == 0) {
-        throw new IllegalStateException("Native Map Deleted");
-      }
-
+      final long nmPointer = nmPtr.getAndSet(0);
+      checkDeletedNM(nmPointer);
+      // invoke the cleanable to deregister it; its action is a no-op
+      // because it checks nmPtr first, which is now 0
+      cleanableNM.clean();
       log.debug(String.format("Deallocating native map 0x%016x", nmPointer));
       deleteNativeMap(nmPointer);
-      nmPointer = 0;
     } finally {
       wlock.unlock();
     }
@@ -704,7 +689,7 @@
     public void next() {
 
       if (entry == null)
-        throw new IllegalStateException();
+        throw new NoSuchElementException();
 
      // checking the interrupt flag for every call to next had a bad performance impact
       // so check it every 100th time
@@ -753,7 +738,7 @@
     @Override
     public void init(SortedKeyValueIterator<Key,Value> source, Map<String,String> options,
         IteratorEnvironment env) {
-      throw new UnsupportedOperationException();
+      throw new UnsupportedOperationException("init");
     }
 
     @Override
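
The NativeMap changes replace a plain long pointer field with an AtomicLong so that getAndSet(0) hands ownership of the native handle to exactly one caller, preventing a double free between an explicit delete() and the cleaner. A standalone sketch of that delete-once idiom, with a fake handle value:

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch: getAndSet(0) makes exactly one caller the owner of the handle,
// so explicit deletion and a later cleaner can never both free it.
public class DeleteOnceExample {
  private final AtomicLong ptr = new AtomicLong(42 /* pretend native handle */);

  void delete() {
    long p = ptr.getAndSet(0);
    if (p != 0) {
      freeNative(p); // only the single winner of getAndSet reaches this call
    }
  }

  private void freeNative(long p) {
    System.out.println("freed handle " + p);
  }

  public static void main(String[] args) {
    DeleteOnceExample ex = new DeleteOnceExample();
    ex.delete();
    ex.delete(); // second call is a harmless no-op
  }
}
```
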
diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/NativeMapCleanerUtil.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/NativeMapCleanerUtil.java
new file mode 100644
index 0000000..4cd5d92
--- /dev/null
+++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/NativeMapCleanerUtil.java
@@ -0,0 +1,54 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.tserver;
+
+import static java.util.Objects.requireNonNull;
+
+import java.lang.ref.Cleaner.Cleanable;
+import java.util.concurrent.atomic.AtomicLong;
+
+import org.apache.accumulo.core.util.cleaner.CleanerUtil;
+import org.slf4j.Logger;
+
+/**
+ * A cleaner utility for NativeMap code, in the same spirit as {@link CleanerUtil}.
+ */
+public class NativeMapCleanerUtil {
+
+  public static Cleanable deleteNM(Object obj, Logger log, AtomicLong nmPtr) {
+    requireNonNull(nmPtr);
+    requireNonNull(log);
+    return CleanerUtil.CLEANER.register(obj, () -> {
+      long nmPointer = nmPtr.get();
+      if (nmPointer != 0) {
+        log.warn(String.format("Deallocating native map 0x%016x in cleaner", nmPointer));
+        NativeMap._deleteNativeMap(nmPointer);
+      }
+    });
+  }
+
+  public static Cleanable deleteNMIterator(Object obj, AtomicLong nmiPtr) {
+    requireNonNull(nmiPtr);
+    return CleanerUtil.CLEANER.register(obj, () -> {
+      long nmiPointer = nmiPtr.get();
+      if (nmiPointer != 0) {
+        NativeMap._deleteNMI(nmiPointer);
+      }
+    });
+  }
+
+}
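
The key constraint in this Cleaner-based replacement for finalize() is that the cleaning action must capture only the AtomicLong holder, never the object being registered; otherwise the object would stay strongly reachable and the action would never run. A self-contained sketch of the pattern, using a hypothetical Owner class:

```java
import java.lang.ref.Cleaner;
import java.util.concurrent.atomic.AtomicLong;

// Sketch: the cleaning action captures the AtomicLong holder, not `this`.
public class CleanerExample {
  private static final Cleaner CLEANER = Cleaner.create();

  static class Owner implements AutoCloseable {
    private final AtomicLong handle = new AtomicLong(7);
    private final Cleaner.Cleanable cleanable;

    Owner() {
      AtomicLong h = handle; // capture the holder, not `this`
      cleanable = CLEANER.register(this, () -> {
        long p = h.getAndSet(0);
        if (p != 0)
          System.out.println("cleaner freed " + p);
      });
    }

    @Override
    public void close() {
      long p = handle.getAndSet(0);
      if (p != 0)
        System.out.println("close freed " + p);
      cleanable.clean(); // runs the action at most once; it now sees handle == 0
    }
  }

  public static void main(String[] args) throws Exception {
    try (Owner o = new Owner()) {
      // use o; explicit close wins, cleaner action becomes a no-op
    }
  }
}
```
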
diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/RowLocks.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/RowLocks.java
index fed66fb..10885ea 100644
--- a/server/tserver/src/main/java/org/apache/accumulo/tserver/RowLocks.java
+++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/RowLocks.java
@@ -17,10 +17,11 @@
 package org.apache.accumulo.tserver;
 
 import java.util.ArrayList;
-import java.util.HashMap;
 import java.util.HashSet;
 import java.util.List;
 import java.util.Map;
+import java.util.Objects;
+import java.util.concurrent.ConcurrentHashMap;
 import java.util.concurrent.locks.ReentrantLock;
 
 import org.apache.accumulo.core.data.ArrayByteSequence;
@@ -29,9 +30,15 @@
 import org.apache.accumulo.tserver.ConditionalMutationSet.DeferFilter;
 import org.apache.accumulo.tserver.data.ServerConditionalMutation;
 
+import com.google.common.base.Preconditions;
+
 class RowLocks {
 
-  private Map<ByteSequence,RowLock> rowLocks = new HashMap<>();
+  // The compute function in Concurrent Hash Map supports atomic execution of the remapping function
+  // and will only execute it once. Properly computing the reference counts relies on this specific
+  // behavior. Not all concurrent map implementations have the desired behavior. For example
+  // ConcurrentSkipListMap.compute is not atomic and may execute the function multiple times.
+  private final Map<ByteSequence,RowLock> rowLocks = new ConcurrentHashMap<>();
 
   static class RowLock {
     ReentrantLock rlock;
@@ -40,7 +47,7 @@
 
     RowLock(ReentrantLock rlock, ByteSequence rowSeq) {
       this.rlock = rlock;
-      this.count = 0;
+      this.count = 1;
       this.rowSeq = rowSeq;
     }
 
@@ -58,43 +65,39 @@
   }
 
   private RowLock getRowLock(ArrayByteSequence rowSeq) {
-    RowLock lock = rowLocks.get(rowSeq);
-    if (lock == null) {
-      lock = new RowLock(new ReentrantLock(), rowSeq);
-      rowLocks.put(rowSeq, lock);
-    }
-
-    lock.count++;
-    return lock;
+    return rowLocks.compute(rowSeq, (key, value) -> {
+      if (value == null) {
+        return new RowLock(new ReentrantLock(), rowSeq);
+      }
+      value.count++;
+      return value;
+    });
   }
 
   private void returnRowLock(RowLock lock) {
-    if (lock.count == 0)
-      throw new IllegalStateException();
-    lock.count--;
-
-    if (lock.count == 0) {
-      rowLocks.remove(lock.rowSeq);
-    }
+    Objects.requireNonNull(lock);
+    rowLocks.compute(lock.rowSeq, (key, value) -> {
+      Preconditions.checkState(value == lock);
+      Preconditions.checkState(value.count > 0);
+      return (--value.count > 0) ? value : null;
+    });
   }
 
   List<RowLock> acquireRowlocks(Map<KeyExtent,List<ServerConditionalMutation>> updates,
       Map<KeyExtent,List<ServerConditionalMutation>> deferred) {
     ArrayList<RowLock> locks = new ArrayList<>();
 
-    // assume that mutations are in sorted order to avoid deadlock
-    synchronized (rowLocks) {
-      for (List<ServerConditionalMutation> scml : updates.values()) {
-        for (ServerConditionalMutation scm : scml) {
-          locks.add(getRowLock(new ArrayByteSequence(scm.getRow())));
-        }
+    for (List<ServerConditionalMutation> scml : updates.values()) {
+      for (ServerConditionalMutation scm : scml) {
+        locks.add(getRowLock(new ArrayByteSequence(scm.getRow())));
       }
     }
 
     HashSet<ByteSequence> rowsNotLocked = null;
 
-    // acquire as many locks as possible, not blocking on rows that are already locked
     if (locks.size() > 1) {
+      // Assuming mutations are in sorted order which avoids deadlock. Acquire as many locks as
+      // possible, not blocking on rows that are already locked.
       for (RowLock rowLock : locks) {
         if (!rowLock.tryLock()) {
           if (rowsNotLocked == null)
@@ -135,10 +138,8 @@
         }
       }
 
-      synchronized (rowLocks) {
-        for (RowLock rowLock : locksToReturn) {
-          returnRowLock(rowLock);
-        }
+      for (RowLock rowLock : locksToReturn) {
+        returnRowLock(rowLock);
       }
 
       locks = filteredLocks;
@@ -151,10 +152,8 @@
       rowLock.unlock();
     }
 
-    synchronized (rowLocks) {
-      for (RowLock rowLock : locks) {
-        returnRowLock(rowLock);
-      }
+    for (RowLock rowLock : locks) {
+      returnRowLock(rowLock);
     }
   }
 
diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/TConstraintViolationException.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/TConstraintViolationException.java
deleted file mode 100644
index a05f4e7..0000000
--- a/server/tserver/src/main/java/org/apache/accumulo/tserver/TConstraintViolationException.java
+++ /dev/null
@@ -1,55 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.tserver;
-
-import java.util.List;
-
-import org.apache.accumulo.core.constraints.Violations;
-import org.apache.accumulo.core.data.Mutation;
-import org.apache.accumulo.tserver.tablet.CommitSession;
-
-public class TConstraintViolationException extends Exception {
-  private static final long serialVersionUID = 1L;
-  private final Violations violations;
-  private final List<Mutation> violators;
-  private final List<Mutation> nonViolators;
-  private final CommitSession commitSession;
-
-  public TConstraintViolationException(Violations violations, List<Mutation> violators,
-      List<Mutation> nonViolators, CommitSession commitSession) {
-    this.violations = violations;
-    this.violators = violators;
-    this.nonViolators = nonViolators;
-    this.commitSession = commitSession;
-  }
-
-  Violations getViolations() {
-    return violations;
-  }
-
-  List<Mutation> getViolators() {
-    return violators;
-  }
-
-  List<Mutation> getNonViolators() {
-    return nonViolators;
-  }
-
-  CommitSession getCommitSession() {
-    return commitSession;
-  }
-}
diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/TabletServer.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/TabletServer.java
index d349b7b..0d49dd4 100644
--- a/server/tserver/src/main/java/org/apache/accumulo/tserver/TabletServer.java
+++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/TabletServer.java
@@ -39,7 +39,6 @@
 import java.util.List;
 import java.util.Map;
 import java.util.Map.Entry;
-import java.util.Objects;
 import java.util.Random;
 import java.util.Set;
 import java.util.SortedMap;
@@ -69,7 +68,6 @@
 import org.apache.accumulo.core.client.TableNotFoundException;
 import org.apache.accumulo.core.clientImpl.CompressedIterators;
 import org.apache.accumulo.core.clientImpl.DurabilityImpl;
-import org.apache.accumulo.core.clientImpl.ScannerImpl;
 import org.apache.accumulo.core.clientImpl.Tables;
 import org.apache.accumulo.core.clientImpl.TabletLocator;
 import org.apache.accumulo.core.clientImpl.TabletType;
@@ -90,7 +88,6 @@
 import org.apache.accumulo.core.data.NamespaceId;
 import org.apache.accumulo.core.data.Range;
 import org.apache.accumulo.core.data.TableId;
-import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.dataImpl.KeyExtent;
 import org.apache.accumulo.core.dataImpl.thrift.InitialMultiScan;
 import org.apache.accumulo.core.dataImpl.thrift.InitialScan;
@@ -120,7 +117,9 @@
 import org.apache.accumulo.core.master.thrift.TabletServerStatus;
 import org.apache.accumulo.core.metadata.MetadataTable;
 import org.apache.accumulo.core.metadata.RootTable;
-import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection;
+import org.apache.accumulo.core.metadata.schema.TabletMetadata;
+import org.apache.accumulo.core.metadata.schema.TabletMetadata.Location;
+import org.apache.accumulo.core.metadata.schema.TabletMetadata.LocationType;
 import org.apache.accumulo.core.replication.ReplicationConstants;
 import org.apache.accumulo.core.replication.thrift.ReplicationServicer;
 import org.apache.accumulo.core.rpc.ThriftUtil;
@@ -149,7 +148,6 @@
 import org.apache.accumulo.core.trace.TraceUtil;
 import org.apache.accumulo.core.trace.thrift.TInfo;
 import org.apache.accumulo.core.util.ByteBufferUtil;
-import org.apache.accumulo.core.util.ColumnFQ;
 import org.apache.accumulo.core.util.ComparablePair;
 import org.apache.accumulo.core.util.Daemon;
 import org.apache.accumulo.core.util.HostAndPort;
@@ -193,7 +191,6 @@
 import org.apache.accumulo.server.master.state.TabletLocationState;
 import org.apache.accumulo.server.master.state.TabletLocationState.BadLocationStateException;
 import org.apache.accumulo.server.master.state.TabletStateStore;
-import org.apache.accumulo.server.master.state.ZooTabletStateStore;
 import org.apache.accumulo.server.master.tableOps.UserCompactionConfig;
 import org.apache.accumulo.server.problems.ProblemReport;
 import org.apache.accumulo.server.problems.ProblemReports;
@@ -210,7 +207,6 @@
 import org.apache.accumulo.server.util.FileSystemMonitor;
 import org.apache.accumulo.server.util.Halt;
 import org.apache.accumulo.server.util.MasterMetadataUtil;
-import org.apache.accumulo.server.util.MetadataTableUtil;
 import org.apache.accumulo.server.util.ServerBulkImportStatus;
 import org.apache.accumulo.server.util.time.RelativeTime;
 import org.apache.accumulo.server.util.time.SimpleTimer;
@@ -253,6 +249,7 @@
 import org.apache.accumulo.tserver.tablet.CompactionWatcher;
 import org.apache.accumulo.tserver.tablet.Compactor;
 import org.apache.accumulo.tserver.tablet.KVEntry;
+import org.apache.accumulo.tserver.tablet.PreparedMutations;
 import org.apache.accumulo.tserver.tablet.ScanBatch;
 import org.apache.accumulo.tserver.tablet.Tablet;
 import org.apache.accumulo.tserver.tablet.TabletClosedException;
@@ -1078,38 +1075,33 @@
             try {
               updateMetrics.addMutationArraySize(mutations.size());
 
-              CommitSession commitSession = tablet.prepareMutationsForCommit(us.cenv, mutations);
-              if (commitSession == null) {
+              PreparedMutations prepared = tablet.prepareMutationsForCommit(us.cenv, mutations);
+
+              if (prepared.tabletClosed()) {
                 if (us.currentTablet == tablet) {
                   us.currentTablet = null;
                 }
                 us.failures.put(tablet.getExtent(), us.successfulCommits.get(tablet));
               } else {
-                if (durability != Durability.NONE) {
-                  loggables.put(commitSession,
-                      new TabletMutations(commitSession, mutations, durability));
+                if (!prepared.getNonViolators().isEmpty()) {
+                  List<Mutation> validMutations = prepared.getNonViolators();
+                  CommitSession session = prepared.getCommitSession();
+                  if (durability != Durability.NONE) {
+                    loggables.put(session,
+                        new TabletMutations(session, validMutations, durability));
+                  }
+                  sendables.put(session, validMutations);
                 }
-                sendables.put(commitSession, mutations);
+
+                if (!prepared.getViolations().isEmpty()) {
+                  us.violations.add(prepared.getViolations());
+                  updateMetrics.addConstraintViolations(0);
+                }
+                // Use the size of the original mutation list, regardless of how many mutations
+                // did not violate constraints.
                 mutationCount += mutations.size();
+
               }
-
-            } catch (TConstraintViolationException e) {
-              us.violations.add(e.getViolations());
-              updateMetrics.addConstraintViolations(0);
-
-              if (e.getNonViolators().size() > 0) {
-                // only log and commit mutations if there were some
-                // that did not violate constraints... this is what
-                // prepareMutationsForCommit() expects
-                CommitSession cs = e.getCommitSession();
-                if (durability != Durability.NONE) {
-                  loggables.put(cs, new TabletMutations(cs, e.getNonViolators(), durability));
-                }
-                sendables.put(cs, e.getNonViolators());
-              }
-
-              mutationCount += mutations.size();
-
             } catch (Throwable t) {
               error = t;
               log.error("Unexpected error preparing for commit", error);
@@ -1284,35 +1276,38 @@
         final Mutation mutation = new ServerMutation(tmutation);
         final List<Mutation> mutations = Collections.singletonList(mutation);
 
-        CommitSession cs;
+        PreparedMutations prepared;
         try (TraceScope prep = Trace.startSpan("prep")) {
-          cs = tablet.prepareMutationsForCommit(
+          prepared = tablet.prepareMutationsForCommit(
               new TservConstraintEnv(getContext(), security, credentials), mutations);
         }
-        if (cs == null) {
-          throw new NotServingTabletException(tkeyExtent);
-        }
 
-        Durability durability = DurabilityImpl
-            .resolveDurabilty(DurabilityImpl.fromThrift(tdurability), tabletDurability);
-        // instead of always looping on true, skip completely when durability is NONE
-        while (durability != Durability.NONE) {
-          try {
-            try (TraceScope wal = Trace.startSpan("wal")) {
-              logger.log(cs, mutation, durability);
+        if (prepared.tabletClosed()) {
+          throw new NotServingTabletException(tkeyExtent);
+        } else if (!prepared.getViolators().isEmpty()) {
+          throw new ConstraintViolationException(
+              Translator.translate(prepared.getViolations().asList(), Translators.CVST));
+        } else {
+          CommitSession session = prepared.getCommitSession();
+          Durability durability = DurabilityImpl
+              .resolveDurabilty(DurabilityImpl.fromThrift(tdurability), tabletDurability);
+
+          // Instead of always looping on true, skip completely when durability is NONE.
+          while (durability != Durability.NONE) {
+            try {
+              try (TraceScope wal = Trace.startSpan("wal")) {
+                logger.log(session, mutation, durability);
+              }
+              break;
+            } catch (IOException ex) {
+              log.warn("Error writing mutations to log", ex);
             }
-            break;
-          } catch (IOException ex) {
-            log.warn("Error writing mutations to log", ex);
+          }
+
+          try (TraceScope commit = Trace.startSpan("commit")) {
+            session.commit(mutations);
           }
         }
-
-        try (TraceScope commit = Trace.startSpan("commit")) {
-          cs.commit(mutations);
-        }
-      } catch (TConstraintViolationException e) {
-        throw new ConstraintViolationException(
-            Translator.translate(e.getViolations().asList(), Translators.CVST));
       } finally {
         writeTracker.finishWrite(opid);
       }
@@ -1390,52 +1385,36 @@
         for (Entry<KeyExtent,List<ServerConditionalMutation>> entry : es) {
           final Tablet tablet = getOnlineTablet(entry.getKey());
           if (tablet == null || tablet.isClosed() || sessionCanceled) {
-            for (ServerConditionalMutation scm : entry.getValue()) {
-              results.add(new TCMResult(scm.getID(), TCMStatus.IGNORED));
-            }
+            addMutationsAsTCMResults(results, entry.getValue(), TCMStatus.IGNORED);
           } else {
             final Durability durability =
                 DurabilityImpl.resolveDurabilty(sess.durability, tablet.getDurability());
-            try {
 
-              @SuppressWarnings("unchecked")
-              List<Mutation> mutations =
-                  (List<Mutation>) (List<? extends Mutation>) entry.getValue();
-              if (mutations.size() > 0) {
+            @SuppressWarnings("unchecked")
+            List<Mutation> mutations = (List<Mutation>) (List<? extends Mutation>) entry.getValue();
+            if (!mutations.isEmpty()) {
 
-                CommitSession cs = tablet.prepareMutationsForCommit(
-                    new TservConstraintEnv(getContext(), security, sess.credentials), mutations);
+              PreparedMutations prepared = tablet.prepareMutationsForCommit(
+                  new TservConstraintEnv(getContext(), security, sess.credentials), mutations);
 
-                if (cs == null) {
-                  for (ServerConditionalMutation scm : entry.getValue()) {
-                    results.add(new TCMResult(scm.getID(), TCMStatus.IGNORED));
-                  }
-                } else {
-                  for (ServerConditionalMutation scm : entry.getValue()) {
-                    results.add(new TCMResult(scm.getID(), TCMStatus.ACCEPTED));
-                  }
+              if (prepared.tabletClosed()) {
+                addMutationsAsTCMResults(results, mutations, TCMStatus.IGNORED);
+              } else {
+                if (!prepared.getNonViolators().isEmpty()) {
+                  // Only log and commit mutations that did not violate constraints.
+                  List<Mutation> validMutations = prepared.getNonViolators();
+                  addMutationsAsTCMResults(results, validMutations, TCMStatus.ACCEPTED);
+                  CommitSession session = prepared.getCommitSession();
                   if (durability != Durability.NONE) {
-                    loggables.put(cs, new TabletMutations(cs, mutations, durability));
+                    loggables.put(session,
+                        new TabletMutations(session, validMutations, durability));
                   }
-                  sendables.put(cs, mutations);
+                  sendables.put(session, validMutations);
                 }
-              }
-            } catch (TConstraintViolationException e) {
-              CommitSession cs = e.getCommitSession();
-              if (e.getNonViolators().size() > 0) {
-                if (durability != Durability.NONE) {
-                  loggables.put(cs, new TabletMutations(cs, e.getNonViolators(), durability));
-                }
-                sendables.put(cs, e.getNonViolators());
-                for (Mutation m : e.getNonViolators()) {
-                  results.add(
-                      new TCMResult(((ServerConditionalMutation) m).getID(), TCMStatus.ACCEPTED));
-                }
-              }
 
-              for (Mutation m : e.getViolators()) {
-                results.add(
-                    new TCMResult(((ServerConditionalMutation) m).getID(), TCMStatus.VIOLATED));
+                if (!prepared.getViolators().isEmpty()) {
+                  addMutationsAsTCMResults(results, prepared.getViolators(), TCMStatus.VIOLATED);
+                }
               }
             }
           }
@@ -1469,7 +1448,17 @@
         long t2 = System.currentTimeMillis();
         updateAvgCommitTime(t2 - t1, sendables.size());
       }
+    }
 
+    /**
+     * Transform each mutation into a {@link TCMResult} with the mutation's ID and the specified
+     * status, and add it to the given {@link TCMResult} list.
+     */
+    private void addMutationsAsTCMResults(final List<TCMResult> list,
+        final Collection<? extends Mutation> mutations, final TCMStatus status) {
+      mutations.stream()
+          .map(mutation -> new TCMResult(((ServerConditionalMutation) mutation).getID(), status))
+          .forEach(list::add);
     }
 
     private Map<KeyExtent,List<ServerConditionalMutation>> conditionalUpdate(ConditionalSession cs,
@@ -2440,30 +2429,33 @@
 
       // check Metadata table before accepting assignment
       Text locationToOpen = null;
-      SortedMap<Key,Value> tabletsKeyValues = new TreeMap<>();
+      TabletMetadata tabletMetadata = null;
+      boolean canLoad = false;
       try {
-        Pair<Text,KeyExtent> pair =
-            verifyTabletInformation(getContext(), extent, TabletServer.this.getTabletSession(),
-                tabletsKeyValues, getClientAddressString(), getLock());
-        if (pair != null) {
-          locationToOpen = pair.getFirst();
-          if (pair.getSecond() != null) {
-            synchronized (openingTablets) {
-              openingTablets.remove(extent);
-              openingTablets.notifyAll();
-              // it expected that the new extent will overlap the old one... if it does not, it
-              // should not be added to unopenedTablets
-              if (!KeyExtent.findOverlapping(extent, new TreeSet<>(Arrays.asList(pair.getSecond())))
-                  .contains(pair.getSecond())) {
-                throw new IllegalStateException(
-                    "Fixed split does not overlap " + extent + " " + pair.getSecond());
-              }
-              unopenedTablets.add(pair.getSecond());
+        tabletMetadata = getContext().getAmple().readTablet(extent);
+
+        canLoad = checkTabletMetadata(extent, TabletServer.this.getTabletSession(), tabletMetadata);
+
+        if (canLoad && tabletMetadata.sawOldPrevEndRow()) {
+          KeyExtent fixedExtent =
+              MasterMetadataUtil.fixSplit(getContext(), tabletMetadata, getLock());
+
+          synchronized (openingTablets) {
+            openingTablets.remove(extent);
+            openingTablets.notifyAll();
+            // it is expected that the new extent will overlap the old one... if it does not, it
+            // should not be added to unopenedTablets
+            if (!KeyExtent.findOverlapping(extent, new TreeSet<>(Arrays.asList(fixedExtent)))
+                .contains(fixedExtent)) {
+              throw new IllegalStateException(
+                  "Fixed split does not overlap " + extent + " " + fixedExtent);
             }
-            // split was rolled back... try again
-            new AssignmentHandler(pair.getSecond()).run();
-            return;
+            unopenedTablets.add(fixedExtent);
           }
+          // split was rolled back... try again
+          new AssignmentHandler(fixedExtent).run();
+          return;
+
         }
       } catch (Exception e) {
         synchronized (openingTablets) {
@@ -2475,7 +2467,7 @@
         throw new RuntimeException(e);
       }
 
-      if (locationToOpen == null) {
+      if (!canLoad) {
         log.debug("Reporting tablet {} assignment failure: unable to verify Tablet Information",
             extent);
         synchronized (openingTablets) {
@@ -2494,12 +2486,7 @@
 
         TabletResourceManager trm =
             resourceManager.createTabletResourceManager(extent, getTableConfiguration(extent));
-        TabletData data;
-        if (extent.isRootTablet()) {
-          data = new TabletData(getContext(), fs, getTableConfiguration(extent));
-        } else {
-          data = new TabletData(extent, fs, tabletsKeyValues.entrySet().iterator());
-        }
+        TabletData data = new TabletData(extent, fs, tabletMetadata);
 
         tablet = new Tablet(TabletServer.this, extent, trm, data);
         // If a minor compaction starts after a tablet opens, this indicates a log recovery
@@ -2961,155 +2948,37 @@
     SimpleTimer.getInstance(aconf).schedule(replicationWorkThreadPoolResizer, 10000, 30000);
   }
 
-  private static Pair<Text,KeyExtent> verifyRootTablet(ServerContext context,
-      TServerInstance instance) throws AccumuloException {
-    ZooTabletStateStore store = new ZooTabletStateStore(context);
-    if (!store.iterator().hasNext()) {
-      throw new AccumuloException("Illegal state: location is not set in zookeeper");
-    }
-    TabletLocationState next = store.iterator().next();
-    if (!instance.equals(next.future)) {
-      throw new AccumuloException("Future location is not to this server for the root tablet");
+  static boolean checkTabletMetadata(KeyExtent extent, TServerInstance instance,
+      TabletMetadata meta) throws AccumuloException {
+
+    if (!meta.sawPrevEndRow()) {
+      throw new AccumuloException("Metadata entry does not have prev row (" + meta.getTableId()
+          + " " + meta.getEndRow() + ")");
     }
 
-    if (next.current != null) {
-      throw new AccumuloException("Root tablet already has a location set");
+    if (!extent.equals(meta.getExtent())) {
+      log.info("Tablet extent mismatch {} {}", extent, meta.getExtent());
+      return false;
     }
 
-    try {
-      return new Pair<>(new Text(MetadataTableUtil.getRootTabletDir(context)), null);
-    } catch (IOException e) {
-      throw new AccumuloException(e);
-    }
-  }
-
-  public static Pair<Text,KeyExtent> verifyTabletInformation(ServerContext context,
-      KeyExtent extent, TServerInstance instance, final SortedMap<Key,Value> tabletsKeyValues,
-      String clientAddress, ZooLock lock) throws DistributedStoreException, AccumuloException {
-    Objects.requireNonNull(tabletsKeyValues);
-
-    log.debug("verifying extent {}", extent);
-    if (extent.isRootTablet()) {
-      return verifyRootTablet(context, instance);
-    }
-    TableId tableToVerify = MetadataTable.ID;
-    if (extent.isMeta()) {
-      tableToVerify = RootTable.ID;
+    if (meta.getDir() == null) {
+      throw new AccumuloException(
+          "Metadata entry does not have directory (" + meta.getExtent() + ")");
     }
 
-    List<ColumnFQ> columnsToFetch =
-        Arrays.asList(TabletsSection.ServerColumnFamily.DIRECTORY_COLUMN,
-            TabletsSection.TabletColumnFamily.PREV_ROW_COLUMN,
-            TabletsSection.TabletColumnFamily.SPLIT_RATIO_COLUMN,
-            TabletsSection.TabletColumnFamily.OLD_PREV_ROW_COLUMN,
-            TabletsSection.ServerColumnFamily.TIME_COLUMN);
-
-    TreeMap<Key,Value> tkv = new TreeMap<>();
-    try (ScannerImpl scanner = new ScannerImpl(context, tableToVerify, Authorizations.EMPTY)) {
-      scanner.setRange(extent.toMetadataRange());
-      for (Entry<Key,Value> entry : scanner) {
-        tkv.put(entry.getKey(), entry.getValue());
-      }
+    if (meta.getTime() == null && !extent.equals(RootTable.EXTENT)) {
+      throw new AccumuloException("Metadata entry does not have time (" + meta.getExtent() + ")");
     }
 
-    // only populate map after success
-    tabletsKeyValues.clear();
-    tabletsKeyValues.putAll(tkv);
+    Location loc = meta.getLocation();
 
-    Text metadataEntry = extent.getMetadataEntry();
-
-    Value dir = checkTabletMetadata(extent, instance, tabletsKeyValues, metadataEntry);
-    if (dir == null) {
-      return null;
+    if (loc == null || loc.getType() != LocationType.FUTURE
+        || !instance.equals(new TServerInstance(loc))) {
+      log.info("Unexpected location {} {}", extent, loc);
+      return false;
     }
 
-    Value oldPrevEndRow = null;
-    for (Entry<Key,Value> entry : tabletsKeyValues.entrySet()) {
-      if (TabletsSection.TabletColumnFamily.OLD_PREV_ROW_COLUMN.hasColumns(entry.getKey())) {
-        oldPrevEndRow = entry.getValue();
-      }
-    }
-
-    if (oldPrevEndRow != null) {
-      SortedMap<Text,SortedMap<ColumnFQ,Value>> tabletEntries;
-      tabletEntries = MetadataTableUtil.getTabletEntries(tabletsKeyValues, columnsToFetch);
-
-      KeyExtent fke = MasterMetadataUtil.fixSplit(context, metadataEntry,
-          tabletEntries.get(metadataEntry), lock);
-
-      if (!fke.equals(extent)) {
-        return new Pair<>(null, fke);
-      }
-
-      // reread and reverify metadata entries now that metadata entries were fixed
-      tabletsKeyValues.clear();
-      return verifyTabletInformation(context, fke, instance, tabletsKeyValues, clientAddress, lock);
-    }
-
-    return new Pair<>(new Text(dir.get()), null);
-  }
-
-  static Value checkTabletMetadata(KeyExtent extent, TServerInstance instance,
-      SortedMap<Key,Value> tabletsKeyValues, Text metadataEntry) throws AccumuloException {
-
-    TServerInstance future = null;
-    Value prevEndRow = null;
-    Value dir = null;
-    Value time = null;
-    for (Entry<Key,Value> entry : tabletsKeyValues.entrySet()) {
-      Key key = entry.getKey();
-      if (!metadataEntry.equals(key.getRow())) {
-        log.info("Unexpected row in tablet metadata {} {}", metadataEntry, key.getRow());
-        return null;
-      }
-      Text cf = key.getColumnFamily();
-      if (cf.equals(TabletsSection.FutureLocationColumnFamily.NAME)) {
-        if (future != null) {
-          throw new AccumuloException("Tablet has multiple future locations " + extent);
-        }
-        future = new TServerInstance(entry.getValue(), key.getColumnQualifier());
-      } else if (cf.equals(TabletsSection.CurrentLocationColumnFamily.NAME)) {
-        log.info("Tablet seems to be already assigned to {}",
-            new TServerInstance(entry.getValue(), key.getColumnQualifier()));
-        return null;
-      } else if (TabletsSection.TabletColumnFamily.PREV_ROW_COLUMN.hasColumns(key)) {
-        prevEndRow = entry.getValue();
-      } else if (TabletsSection.ServerColumnFamily.DIRECTORY_COLUMN.hasColumns(key)) {
-        dir = entry.getValue();
-      } else if (TabletsSection.ServerColumnFamily.TIME_COLUMN.hasColumns(key)) {
-        time = entry.getValue();
-      }
-    }
-
-    if (prevEndRow == null) {
-      throw new AccumuloException("Metadata entry does not have prev row (" + metadataEntry + ")");
-    } else {
-      KeyExtent ke2 = new KeyExtent(metadataEntry, prevEndRow);
-      if (!extent.equals(ke2)) {
-        log.info("Tablet prev end row mismatch {} {}", extent, ke2.getPrevEndRow());
-        return null;
-      }
-    }
-
-    if (dir == null) {
-      throw new AccumuloException("Metadata entry does not have directory (" + metadataEntry + ")");
-    }
-
-    if (time == null && !extent.equals(RootTable.OLD_EXTENT)) {
-      throw new AccumuloException("Metadata entry does not have time (" + metadataEntry + ")");
-    }
-
-    if (future == null) {
-      log.info("The master has not assigned {} to {}", extent, instance);
-      return null;
-    }
-
-    if (!instance.equals(future)) {
-      log.info("Table {} has been assigned to {} which is not {}", extent, future, instance);
-      return null;
-    }
-
-    return dir;
+    return true;
   }
 
   public String getClientAddressString() {
@@ -3179,14 +3048,6 @@
     Runnable gcDebugTask = () -> gcLogger.logGCInfo(getConfiguration());
 
     SimpleTimer.getInstance(aconf).schedule(gcDebugTask, 0, TIME_BETWEEN_GC_CHECKS);
-
-    Runnable constraintTask = () -> {
-      for (Tablet tablet : getOnlineTablets().values()) {
-        tablet.checkConstraints();
-      }
-    };
-
-    SimpleTimer.getInstance(aconf).schedule(constraintTask, 0, 1000);
   }
 
   public TabletServerStatus getStats(Map<TableId,MapCounter<ScanRunState>> scanCounts) {
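
The assignment handler above now performs a single Ample read and a boolean validity check instead of scanning raw metadata key/values into a `SortedMap`. A condensed sketch of the new decision flow, using only calls that appear in this hunk (synchronization, failure reporting, and the overlap check are elided; this is not a drop-in replacement):

```java
TabletMetadata meta = getContext().getAmple().readTablet(extent);
boolean canLoad = checkTabletMetadata(extent, getTabletSession(), meta);

if (canLoad && meta.sawOldPrevEndRow()) {
  // A split was rolled back: compute the fixed extent and retry the assignment with it.
  KeyExtent fixed = MasterMetadataUtil.fixSplit(getContext(), meta, getLock());
  new AssignmentHandler(fixed).run();
} else if (canLoad) {
  // The already-validated metadata feeds TabletData directly, even for the root tablet,
  // replacing the old isRootTablet() special case.
  TabletData data = new TabletData(extent, fs, meta);
}
```
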
diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/TabletServerResourceManager.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/TabletServerResourceManager.java
index 7c8db69..b741cfe 100644
--- a/server/tserver/src/main/java/org/apache/accumulo/tserver/TabletServerResourceManager.java
+++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/TabletServerResourceManager.java
@@ -90,7 +90,6 @@
 import com.google.common.cache.Cache;
 import com.google.common.cache.CacheBuilder;
 import com.google.common.collect.ImmutableMap;
-import com.google.common.collect.ImmutableMap.Builder;
 
 import edu.umd.cs.findbugs.annotations.SuppressFBWarnings;
 
@@ -147,29 +146,25 @@
 
   private ExecutorService addEs(IntSupplier maxThreads, String name, final ThreadPoolExecutor tp) {
     ExecutorService result = addEs(name, tp);
-    SimpleTimer.getInstance(context.getConfiguration()).schedule(new Runnable() {
-      @Override
-      public void run() {
-        try {
-          int max = maxThreads.getAsInt();
-          int currentMax = tp.getMaximumPoolSize();
-          if (currentMax != max) {
-            log.info("Changing max threads for {} from {} to {}", name, currentMax, max);
-            if (max > currentMax) {
-              // increasing, increase the max first, or the core will fail to be increased
-              tp.setMaximumPoolSize(max);
-              tp.setCorePoolSize(max);
-            } else {
-              // decreasing, lower the core size first, or the max will fail to be lowered
-              tp.setCorePoolSize(max);
-              tp.setMaximumPoolSize(max);
-            }
+    SimpleTimer.getInstance(context.getConfiguration()).schedule(() -> {
+      try {
+        int max = maxThreads.getAsInt();
+        int currentMax = tp.getMaximumPoolSize();
+        if (currentMax != max) {
+          log.info("Changing max threads for {} from {} to {}", name, currentMax, max);
+          if (max > currentMax) {
+            // increasing, increase the max first, or the core will fail to be increased
+            tp.setMaximumPoolSize(max);
+            tp.setCorePoolSize(max);
+          } else {
+            // decreasing, lower the core size first, or the max will fail to be lowered
+            tp.setCorePoolSize(max);
+            tp.setMaximumPoolSize(max);
           }
-        } catch (Throwable t) {
-          log.error("Failed to change thread pool size", t);
         }
+      } catch (Throwable t) {
+        log.error("Failed to change thread pool size", t);
       }
-
     }, 1000, 10_000);
     return result;
   }
@@ -259,7 +254,7 @@
 
   protected Map<String,ExecutorService> createScanExecutors(
       Collection<ScanExecutorConfig> scanExecCfg, Map<String,Queue<?>> scanExecQueues) {
-    Builder<String,ExecutorService> builder = ImmutableMap.builder();
+    var builder = ImmutableMap.<String,ExecutorService>builder();
 
     for (ScanExecutorConfig sec : scanExecCfg) {
       builder.put(sec.name, createPriorityExecutor(sec, scanExecQueues));
@@ -322,7 +317,7 @@
 
   private Map<String,ScanExecutor> createScanExecutorChoices(
       Collection<ScanExecutorConfig> scanExecCfg, Map<String,Queue<?>> scanExecQueues) {
-    Builder<String,ScanExecutor> builder = ImmutableMap.builder();
+    var builder = ImmutableMap.<String,ScanExecutor>builder();
 
     for (ScanExecutorConfig sec : scanExecCfg) {
       builder.put(sec.name, new ScanExecutorImpl(sec, scanExecQueues.get(sec.name)));
@@ -555,23 +550,13 @@
       memUsageReports = new LinkedBlockingQueue<>();
       maxMem = context.getConfiguration().getAsBytes(Property.TSERV_MAXMEM);
 
-      Runnable r1 = new Runnable() {
-        @Override
-        public void run() {
-          processTabletMemStats();
-        }
-      };
+      Runnable r1 = () -> processTabletMemStats();
 
       memoryGuardThread = new Daemon(new LoggingRunnable(log, r1));
       memoryGuardThread.setPriority(Thread.NORM_PRIORITY + 1);
       memoryGuardThread.setName("Accumulo Memory Guard");
 
-      Runnable r2 = new Runnable() {
-        @Override
-        public void run() {
-          manageMemory();
-        }
-      };
+      Runnable r2 = () -> manageMemory();
 
       minorCompactionInitiatorThread = new Daemon(new LoggingRunnable(log, r2));
       minorCompactionInitiatorThread.setName("Accumulo Minor Compaction Initiator");
@@ -722,8 +707,9 @@
       synchronized (commitHold) {
         while (holdCommits) {
           try {
-            if (System.currentTimeMillis() > timeout)
+            if (System.currentTimeMillis() > timeout) {
               throw new HoldTimeoutException("Commits are held");
+            }
             commitHold.wait(1000);
           } catch (InterruptedException e) {}
         }
@@ -732,8 +718,9 @@
   }
 
   public long holdTime() {
-    if (!holdCommits)
+    if (!holdCommits) {
       return 0;
+    }
     synchronized (commitHold) {
       return System.currentTimeMillis() - holdStartTime;
     }
@@ -780,8 +767,9 @@
     }
 
     public synchronized ScanFileManager newScanFileManager() {
-      if (closed)
+      if (closed) {
         throw new IllegalStateException("closed");
+      }
       return fileManager.newScanFileManager(extent);
     }
 
@@ -817,13 +805,15 @@
       long currentTime = System.currentTimeMillis();
       if ((delta > 32000 || delta < 0 || (currentTime - lastReportedCommitTime > 1000))
           && lastReportedSize.compareAndSet(lrs, totalSize)) {
-        if (delta > 0)
+        if (delta > 0) {
           lastReportedCommitTime = currentTime;
+        }
         report = true;
       }
 
-      if (report)
+      if (report) {
         memMgmt.updateMemoryUsageStats(tablet, size, lastReportedCommitTime, mincSize);
+      }
     }
 
     // END methods that Tablets call to manage memory
@@ -833,13 +823,15 @@
     // to one map file
     public boolean needsMajorCompaction(SortedMap<FileRef,DataFileValue> tabletFiles,
         MajorCompactionReason reason) {
-      if (closed)
+      if (closed) {
         return false;// throw new IOException("closed");
+      }
 
       // int threshold;
 
-      if (reason == MajorCompactionReason.USER)
+      if (reason == MajorCompactionReason.USER) {
         return true;
+      }
 
       if (reason == MajorCompactionReason.IDLE) {
         // threshold = 1;
@@ -883,10 +875,12 @@
       // always obtain locks in same order to avoid deadlock
       synchronized (TabletServerResourceManager.this) {
         synchronized (this) {
-          if (closed)
+          if (closed) {
             throw new IOException("closed");
-          if (openFilesReserved)
+          }
+          if (openFilesReserved) {
             throw new IOException("tired to close files while open files reserved");
+          }
 
           memMgmt.tabletClosed(extent);
           memoryManager.tabletClosed(extent);
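
The inlined resizer above preserves a `ThreadPoolExecutor` ordering rule: the core size must never exceed the maximum size, so growing and shrinking require opposite call orders. A self-contained sketch of that rule (`PoolResizeDemo` is a made-up name):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolResizeDemo {
  // Keep the invariant core <= max at every intermediate step.
  static void resize(ThreadPoolExecutor tp, int max) {
    if (max > tp.getMaximumPoolSize()) {
      tp.setMaximumPoolSize(max); // growing: raise the ceiling first
      tp.setCorePoolSize(max);
    } else {
      tp.setCorePoolSize(max); // shrinking: lower the floor first
      tp.setMaximumPoolSize(max); // throws IllegalArgumentException if max would drop below core
    }
  }

  public static void main(String[] args) {
    ThreadPoolExecutor tp =
        new ThreadPoolExecutor(4, 4, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
    resize(tp, 8);
    resize(tp, 2);
    tp.shutdown();
  }
}
```
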
diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/constraints/ConstraintChecker.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/constraints/ConstraintChecker.java
index 400f691..c7762b4 100644
--- a/server/tserver/src/main/java/org/apache/accumulo/tserver/constraints/ConstraintChecker.java
+++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/constraints/ConstraintChecker.java
@@ -20,8 +20,8 @@
 import java.util.ArrayList;
 import java.util.List;
 import java.util.Map.Entry;
-import java.util.concurrent.atomic.AtomicLong;
 
+import org.apache.accumulo.core.conf.AccumuloConfiguration;
 import org.apache.accumulo.core.conf.Property;
 import org.apache.accumulo.core.constraints.Constraint;
 import org.apache.accumulo.core.constraints.Constraint.Environment;
@@ -41,43 +41,39 @@
   private ArrayList<Constraint> constrains;
   private static final Logger log = LoggerFactory.getLogger(ConstraintChecker.class);
 
-  private ClassLoader loader;
-  private TableConfiguration conf;
-
-  private AtomicLong lastCheck = new AtomicLong(0);
-
-  public ConstraintChecker(TableConfiguration conf) {
+  public ConstraintChecker(AccumuloConfiguration conf) {
     constrains = new ArrayList<>();
 
-    this.conf = conf;
-
     try {
       String context = conf.get(Property.TABLE_CLASSPATH);
 
+      ClassLoader loader;
+
       if (context != null && !context.equals("")) {
         loader = AccumuloVFSClassLoader.getContextManager().getClassLoader(context);
       } else {
         loader = AccumuloVFSClassLoader.getClassLoader();
       }
 
-      for (Entry<String,String> entry : conf) {
+      for (Entry<String,String> entry : conf
+          .getAllPropertiesWithPrefix(Property.TABLE_CONSTRAINT_PREFIX).entrySet()) {
         if (entry.getKey().startsWith(Property.TABLE_CONSTRAINT_PREFIX.getKey())) {
           String className = entry.getValue();
           Class<? extends Constraint> clazz =
               loader.loadClass(className).asSubclass(Constraint.class);
-          log.debug("Loaded constraint {} for {}", clazz.getName(), conf.getTableId());
-          constrains.add(clazz.newInstance());
+
+          log.debug("Loaded constraint {} for {}", clazz.getName(),
+              ((TableConfiguration) conf).getTableId());
+          constrains.add(clazz.getDeclaredConstructor().newInstance());
         }
       }
 
-      lastCheck.set(System.currentTimeMillis());
-
     } catch (Throwable e) {
       constrains.clear();
-      loader = null;
       constrains.add(new UnsatisfiableConstraint((short) -1,
           "Failed to load constraints, not accepting mutations."));
-      log.error("Failed to load constraints " + conf.getTableId() + " " + e, e);
+      log.error("Failed to load constraints " + ((TableConfiguration) conf).getTableId() + " " + e,
+          e);
     }
   }
 
@@ -86,29 +82,6 @@
     return constrains;
   }
 
-  public boolean classLoaderChanged() {
-
-    if (constrains.size() == 0)
-      return false;
-
-    try {
-      String context = conf.get(Property.TABLE_CLASSPATH);
-
-      ClassLoader currentLoader;
-
-      if (context != null && !context.equals("")) {
-        currentLoader = AccumuloVFSClassLoader.getContextManager().getClassLoader(context);
-      } else {
-        currentLoader = AccumuloVFSClassLoader.getClassLoader();
-      }
-
-      return currentLoader != loader;
-    } catch (Exception e) {
-      log.debug("Failed to check {}", e.getMessage());
-      return true;
-    }
-  }
-
   private static Violations addViolation(Violations violations, ConstraintViolationSummary cvs) {
     if (violations == null) {
       violations = new Violations();
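
Two changes land here: constraint properties are now read via `getAllPropertiesWithPrefix(Property.TABLE_CONSTRAINT_PREFIX)` instead of iterating the entire configuration, and instantiation moves from the deprecated `Class.newInstance()` to `getDeclaredConstructor().newInstance()` (the same swap appears in `ReplicationServicerHandler` below). The reflection change matters because `newInstance()` propagates any checked exception the constructor throws without declaring it, whereas the constructor-based form wraps it in `InvocationTargetException`; all of the reflective failure modes share the `ReflectiveOperationException` supertype caught in these hunks. A self-contained sketch (`ReflectiveCreateDemo` and `Fragile` are made-up names):

```java
import java.lang.reflect.InvocationTargetException;

public class ReflectiveCreateDemo {
  public static class Fragile {
    public Fragile() throws Exception { // constructor declares a checked exception
      throw new Exception("boom");
    }
  }

  public static void main(String[] args) {
    try {
      Fragile.class.getDeclaredConstructor().newInstance();
    } catch (InvocationTargetException e) {
      // Constructor failures arrive wrapped, never as an undeclared checked throw,
      // which is the sneaky behavior that got Class.newInstance() deprecated.
      System.out.println("constructor failed: " + e.getCause().getMessage());
    } catch (ReflectiveOperationException e) {
      // NoSuchMethodException, InstantiationException, IllegalAccessException, ...
      System.out.println("reflection failed: " + e);
    }
  }
}
```
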
diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/logger/LogReader.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/logger/LogReader.java
index 2f38fd9..2149d85 100644
--- a/server/tserver/src/main/java/org/apache/accumulo/tserver/logger/LogReader.java
+++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/logger/LogReader.java
@@ -76,7 +76,7 @@
   public static void main(String[] args) throws IOException {
     Opts opts = new Opts();
     opts.parseArgs(LogReader.class.getName(), args);
-    SiteConfiguration siteConfig = new SiteConfiguration();
+    var siteConfig = SiteConfiguration.auto();
     VolumeManager fs = VolumeManagerImpl.get(siteConfig, new Configuration());
 
     Matcher rowMatcher = null;
@@ -86,8 +86,9 @@
       new JCommander(opts).usage();
       return;
     }
-    if (opts.row != null)
+    if (opts.row != null) {
       row = new Text(opts.row);
+    }
     if (opts.extent != null) {
       String[] sa = opts.extent.split(";");
       ke = new KeyExtent(TableId.of(sa[0]), new Text(sa[1]), new Text(sa[2]));
@@ -174,8 +175,9 @@
           }
         }
 
-        if (!found)
+        if (!found) {
           return;
+        }
       } else {
         return;
       }
diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/replication/AccumuloReplicaSystem.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/replication/AccumuloReplicaSystem.java
index 6ea2bcf..2e95291 100644
--- a/server/tserver/src/main/java/org/apache/accumulo/tserver/replication/AccumuloReplicaSystem.java
+++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/replication/AccumuloReplicaSystem.java
@@ -71,7 +71,6 @@
 import org.apache.accumulo.tserver.log.DfsLogger.LogHeaderIncompleteException;
 import org.apache.accumulo.tserver.logger.LogFileKey;
 import org.apache.accumulo.tserver.logger.LogFileValue;
-import org.apache.commons.lang3.StringUtils;
 import org.apache.hadoop.fs.FSDataInputStream;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.security.UserGroupInformation;
@@ -720,7 +719,7 @@
 
     // Add our name, and send it
     final String name = conf.get(Property.REPLICATION_NAME);
-    if (StringUtils.isBlank(name)) {
+    if (name.isBlank()) {
       throw new IllegalArgumentException("Local system has no replication name configured");
     }
 
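Replacing Commons Lang's `StringUtils.isBlank(name)` with Java 11's `name.isBlank()` drops an implicit null check: the Commons version treats null as blank, while `String.isBlank()` would throw a `NullPointerException`. That is presumably safe here because `conf.get` resolves a configured or default value, but the behavioral difference is worth noting:

```java
public class BlankDemo {
  public static void main(String[] args) {
    System.out.println("  ".isBlank()); // true: whitespace only (String.isBlank is Java 11+)
    System.out.println("".isBlank()); // true: empty
    String name = null;
    // name.isBlank();                            -> NullPointerException
    // StringUtils.isBlank(name) (commons-lang3)  -> true; null counts as blank
  }
}
```
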
diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/replication/ReplicationServicerHandler.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/replication/ReplicationServicerHandler.java
index cb1cfb3..3b7d13c 100644
--- a/server/tserver/src/main/java/org/apache/accumulo/tserver/replication/ReplicationServicerHandler.java
+++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/replication/ReplicationServicerHandler.java
@@ -94,8 +94,8 @@
     // Create an instance
     AccumuloReplicationReplayer replayer;
     try {
-      replayer = clz.newInstance();
-    } catch (InstantiationException | IllegalAccessException e1) {
+      replayer = clz.getDeclaredConstructor().newInstance();
+    } catch (ReflectiveOperationException e1) {
       log.error("Could not instantiate replayer class {}", clz.getName());
       throw new RemoteReplicationException(RemoteReplicationErrorCode.CANNOT_INSTANTIATE_REPLAYER,
           "Could not instantiate replayer class" + clz.getName());
diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/BulkImportCacheCleaner.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/BulkImportCacheCleaner.java
index c908840..03f33e2 100644
--- a/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/BulkImportCacheCleaner.java
+++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/BulkImportCacheCleaner.java
@@ -40,7 +40,7 @@
     // gather the list of transactions the tablets have cached
     final Set<Long> tids = new HashSet<>();
     for (Tablet tablet : server.getOnlineTablets().values()) {
-      tids.addAll(tablet.getBulkIngestedFiles().keySet());
+      tids.addAll(tablet.getBulkIngestedTxIds());
     }
     try {
       // get the current transactions from ZooKeeper
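
The cleaner now collects transaction IDs through the new `getBulkIngestedTxIds()` accessor. Per the comment added to `Tablet` later in this diff, those IDs live in a `ConcurrentHashMap`, whose key-set iteration is weakly consistent: this periodic task can walk it without taking the tablet lock while writers add or remove entries. A tiny self-contained illustration of that property:

```java
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class WeaklyConsistentDemo {
  public static void main(String[] args) {
    ConcurrentHashMap<Long,String> bulkImported = new ConcurrentHashMap<>();
    bulkImported.put(1L, "a");
    bulkImported.put(2L, "b");

    Set<Long> tids = new HashSet<>();
    for (Long tid : bulkImported.keySet()) {
      // Mutation mid-iteration is allowed: no ConcurrentModificationException is thrown.
      bulkImported.put(3L, "c");
      tids.add(tid);
    }
    System.out.println(tids); // the key added during iteration may or may not be observed
  }
}
```
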
diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/DatafileManager.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/DatafileManager.java
index 058817d..1ab4c20 100644
--- a/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/DatafileManager.java
+++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/DatafileManager.java
@@ -38,7 +38,6 @@
 import org.apache.accumulo.core.replication.ReplicationConfigurationUtil;
 import org.apache.accumulo.core.util.MapCounter;
 import org.apache.accumulo.core.util.Pair;
-import org.apache.accumulo.fate.zookeeper.IZooReaderWriter;
 import org.apache.accumulo.server.ServerConstants;
 import org.apache.accumulo.server.fs.FileRef;
 import org.apache.accumulo.server.fs.VolumeManager;
@@ -342,17 +341,6 @@
   void bringMinorCompactionOnline(FileRef tmpDatafile, FileRef newDatafile, FileRef absMergeFile,
       DataFileValue dfv, CommitSession commitSession, long flushId) {
 
-    IZooReaderWriter zoo = tablet.getContext().getZooReaderWriter();
-    if (tablet.getExtent().isRootTablet()) {
-      try {
-        if (!zoo.isLockHeld(tablet.getTabletServer().getLock().getLockID())) {
-          throw new IllegalStateException();
-        }
-      } catch (Exception e) {
-        throw new IllegalStateException("Can not bring major compaction online, lock not held", e);
-      }
-    }
-
     // rename before putting in metadata table, so files in metadata table should
     // always exist
     do {
@@ -521,20 +509,17 @@
     final KeyExtent extent = tablet.getExtent();
     long t1, t2;
 
-    if (!extent.isRootTablet()) {
+    if (tablet.getTabletServer().getFileSystem().exists(newDatafile.path())) {
+      log.error("Target map file already exists " + newDatafile, new Exception());
+      throw new IllegalStateException("Target map file already exists " + newDatafile);
+    }
 
-      if (tablet.getTabletServer().getFileSystem().exists(newDatafile.path())) {
-        log.error("Target map file already exist " + newDatafile, new Exception());
-        throw new IllegalStateException("Target map file already exist " + newDatafile);
-      }
+    // rename before putting in metadata table, so files in metadata table should
+    // always exist
+    rename(tablet.getTabletServer().getFileSystem(), tmpDatafile.path(), newDatafile.path());
 
-      // rename before putting in metadata table, so files in metadata table should
-      // always exist
-      rename(tablet.getTabletServer().getFileSystem(), tmpDatafile.path(), newDatafile.path());
-
-      if (dfv.getNumEntries() == 0) {
-        tablet.getTabletServer().getFileSystem().deleteRecursively(newDatafile.path());
-      }
+    if (dfv.getNumEntries() == 0) {
+      tablet.getTabletServer().getFileSystem().deleteRecursively(newDatafile.path());
     }
 
     TServerInstance lastLocation = null;
@@ -542,33 +527,8 @@
 
       t1 = System.currentTimeMillis();
 
-      IZooReaderWriter zoo = tablet.getContext().getZooReaderWriter();
-
       tablet.incrementDataSourceDeletions();
 
-      if (extent.isRootTablet()) {
-
-        waitForScansToFinish(oldDatafiles, true, Long.MAX_VALUE);
-
-        try {
-          if (!zoo.isLockHeld(tablet.getTabletServer().getLock().getLockID())) {
-            throw new IllegalStateException();
-          }
-        } catch (Exception e) {
-          throw new IllegalStateException("Can not bring major compaction online, lock not held",
-              e);
-        }
-
-        // mark files as ready for deletion, but
-        // do not delete them until we successfully
-        // rename the compacted map file, in case
-        // the system goes down
-
-        RootFiles.replaceFiles(tablet.getTableConfiguration(),
-            tablet.getTabletServer().getFileSystem(), tablet.getLocation(), oldDatafiles,
-            tmpDatafile, newDatafile);
-      }
-
       // atomically remove old files and add new file
       for (FileRef oldDatafile : oldDatafiles) {
         if (!datafileSizes.containsKey(oldDatafile)) {
@@ -597,16 +557,14 @@
       t2 = System.currentTimeMillis();
     }
 
-    if (!extent.isRootTablet()) {
-      Set<FileRef> filesInUseByScans = waitForScansToFinish(oldDatafiles, false, 10000);
-      if (filesInUseByScans.size() > 0)
-        log.debug("Adding scan refs to metadata {} {}", extent, filesInUseByScans);
-      MasterMetadataUtil.replaceDatafiles(tablet.getContext(), extent, oldDatafiles,
-          filesInUseByScans, newDatafile, compactionId, dfv,
-          tablet.getTabletServer().getClientAddressString(), lastLocation,
-          tablet.getTabletServer().getLock());
-      removeFilesAfterScan(filesInUseByScans);
-    }
+    Set<FileRef> filesInUseByScans = waitForScansToFinish(oldDatafiles, false, 10000);
+    if (filesInUseByScans.size() > 0)
+      log.debug("Adding scan refs to metadata {} {}", extent, filesInUseByScans);
+    MasterMetadataUtil.replaceDatafiles(tablet.getContext(), extent, oldDatafiles,
+        filesInUseByScans, newDatafile, compactionId, dfv,
+        tablet.getTabletServer().getClientAddressString(), lastLocation,
+        tablet.getTabletServer().getLock());
+    removeFilesAfterScan(filesInUseByScans);
 
     log.debug(String.format("MajC finish lock %.2f secs", (t2 - t1) / 1000.0));
     log.debug("TABLET_HIST {} MajC  --> {}", oldDatafiles, newDatafile);
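
With the root-tablet special cases gone, both compaction paths rely on the single ordering invariant the remaining comments describe: rename the finished file into place before the metadata table references it, and delete old files only afterward, so metadata never points at a missing file. A schematic of that ordering, with hypothetical stand-in methods (not the `VolumeManager`/`MasterMetadataUtil` calls used above):

```java
import java.io.IOException;

public class PublishOrderSketch {
  void publishCompactedFile(String tmpFile, String finalFile) throws IOException {
    rename(tmpFile, finalFile); // 1. data becomes durable under its final name
    updateMetadata(finalFile); // 2. metadata only ever references files that already exist
    deleteOldFiles(); // 3. old files are dropped only once metadata no longer needs them
    // A crash between any two steps leaves metadata pointing at files that still exist.
  }

  void rename(String from, String to) throws IOException {}

  void updateMetadata(String file) {}

  void deleteOldFiles() {}
}
```
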
diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/PreparedMutations.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/PreparedMutations.java
new file mode 100644
index 0000000..49453f7
--- /dev/null
+++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/PreparedMutations.java
@@ -0,0 +1,97 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.tserver.tablet;
+
+import java.util.Collection;
+import java.util.List;
+import java.util.Objects;
+import java.util.Set;
+
+import org.apache.accumulo.core.constraints.Violations;
+import org.apache.accumulo.core.data.Mutation;
+
+import com.google.common.base.Preconditions;
+
+public class PreparedMutations {
+
+  private final Violations violations;
+  private final Collection<Mutation> violators;
+  private final List<Mutation> nonViolators;
+  private final CommitSession commitSession;
+  private final boolean tabletClosed;
+
+  /**
+   * This constructor is used to communicate that the tablet was closed.
+   */
+  public PreparedMutations() {
+    this.tabletClosed = true;
+    this.violations = null;
+    this.violators = null;
+    this.nonViolators = null;
+    this.commitSession = null;
+  }
+
+  public PreparedMutations(CommitSession cs, List<Mutation> nonViolators, Violations violations,
+      Set<Mutation> violators) {
+    this.tabletClosed = false;
+    this.nonViolators = Objects.requireNonNull(nonViolators);
+    this.violators = Objects.requireNonNull(violators);
+    this.violations = Objects.requireNonNull(violations);
+    if (cs == null) {
+      Preconditions.checkArgument(nonViolators.isEmpty());
+    }
+    this.commitSession = cs;
+  }
+
+  /**
+   * Return true if the tablet was closed. When this is the case, no other methods may be called.
+   */
+  public boolean tabletClosed() {
+    return tabletClosed;
+  }
+
+  /**
+   * Retrieve the commit session. May be null, but only when there are no non-violating mutations.
+   */
+  public CommitSession getCommitSession() {
+    Preconditions.checkState(!tabletClosed);
+    return commitSession;
+  }
+
+  /**
+   * Retrieve the constraint violations found across the mutations.
+   */
+  public Violations getViolations() {
+    Preconditions.checkState(!tabletClosed);
+    return violations;
+  }
+
+  /**
+   * Return the collection of mutations that violated a constraint.
+   */
+  public Collection<Mutation> getViolators() {
+    Preconditions.checkState(!tabletClosed);
+    return violators;
+  }
+
+  /**
+   * Return the list of mutations that did not violate any constraints.
+   */
+  public List<Mutation> getNonViolators() {
+    Preconditions.checkState(!tabletClosed);
+    return nonViolators;
+  }
+}
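
The class encodes the three possible outcomes of preparing a batch: tablet closed, some mutations violated constraints, or all clear. A minimal sketch of how a caller is expected to branch; `env`, `mutations`, and the handler methods are hypothetical, and the real call sites are in `TabletServer` above:

```java
PreparedMutations prepared = tablet.prepareMutationsForCommit(env, mutations);

if (prepared.tabletClosed()) {
  // Closed tablet: per the javadoc, none of the other accessors may be used now
  // (they would fail their Preconditions.checkState).
  reportClosed();
} else {
  if (!prepared.getViolations().isEmpty()) {
    recordViolations(prepared.getViolations());
  }
  if (!prepared.getNonViolators().isEmpty()) {
    // The constructor guarantees a non-null commit session whenever non-violators exist.
    logAndCommit(prepared.getCommitSession(), prepared.getNonViolators());
  }
}
```
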
diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/RootFiles.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/RootFiles.java
deleted file mode 100644
index 367ce21..0000000
--- a/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/RootFiles.java
+++ /dev/null
@@ -1,139 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.tserver.tablet;
-
-import java.io.IOException;
-import java.util.ArrayList;
-import java.util.Collection;
-import java.util.Set;
-
-import org.apache.accumulo.core.Constants;
-import org.apache.accumulo.core.conf.AccumuloConfiguration;
-import org.apache.accumulo.core.conf.Property;
-import org.apache.accumulo.core.file.FileOperations;
-import org.apache.accumulo.server.fs.FileRef;
-import org.apache.accumulo.server.fs.VolumeManager;
-import org.apache.hadoop.fs.FileStatus;
-import org.apache.hadoop.fs.Path;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-public class RootFiles {
-
-  private static final Logger log = LoggerFactory.getLogger(RootFiles.class);
-
-  public static void prepareReplacement(VolumeManager fs, Path location, Set<FileRef> oldDatafiles,
-      String compactName) throws IOException {
-    for (FileRef ref : oldDatafiles) {
-      Path path = ref.path();
-      DatafileManager.rename(fs, path,
-          new Path(location + "/delete+" + compactName + "+" + path.getName()));
-    }
-  }
-
-  public static void renameReplacement(VolumeManager fs, FileRef tmpDatafile, FileRef newDatafile)
-      throws IOException {
-    if (fs.exists(newDatafile.path())) {
-      log.error("Target map file already exist " + newDatafile, new Exception());
-      throw new IllegalStateException("Target map file already exist " + newDatafile);
-    }
-
-    DatafileManager.rename(fs, tmpDatafile.path(), newDatafile.path());
-  }
-
-  public static void finishReplacement(AccumuloConfiguration acuTableConf, VolumeManager fs,
-      Path location, Set<FileRef> oldDatafiles, String compactName) throws IOException {
-    // start deleting files, if we do not finish they will be cleaned
-    // up later
-    for (FileRef ref : oldDatafiles) {
-      Path path = ref.path();
-      Path deleteFile = new Path(location + "/delete+" + compactName + "+" + path.getName());
-      if (acuTableConf.getBoolean(Property.GC_TRASH_IGNORE) || !fs.moveToTrash(deleteFile))
-        fs.deleteRecursively(deleteFile);
-    }
-  }
-
-  public static void replaceFiles(AccumuloConfiguration acuTableConf, VolumeManager fs,
-      Path location, Set<FileRef> oldDatafiles, FileRef tmpDatafile, FileRef newDatafile)
-      throws IOException {
-    String compactName = newDatafile.path().getName();
-
-    prepareReplacement(fs, location, oldDatafiles, compactName);
-    renameReplacement(fs, tmpDatafile, newDatafile);
-    finishReplacement(acuTableConf, fs, location, oldDatafiles, compactName);
-  }
-
-  public static Collection<String> cleanupReplacement(VolumeManager fs, FileStatus[] files,
-      boolean deleteTmp) throws IOException {
-    /*
-     * called in constructor and before major compactions
-     */
-    Collection<String> goodFiles = new ArrayList<>(files.length);
-
-    for (FileStatus file : files) {
-
-      String path = file.getPath().toString();
-      if (file.getPath().toUri().getScheme() == null) {
-        // depending on the behavior of HDFS, if list status does not return fully qualified volumes
-        // then could switch to the default volume
-        throw new IllegalArgumentException("Require fully qualified paths " + file.getPath());
-      }
-
-      String filename = file.getPath().getName();
-
-      // check for incomplete major compaction, this should only occur
-      // for root tablet
-      if (filename.startsWith("delete+")) {
-        String expectedCompactedFile =
-            path.substring(0, path.lastIndexOf("/delete+")) + "/" + filename.split("\\+")[1];
-        if (fs.exists(new Path(expectedCompactedFile))) {
-          // compaction finished, but did not finish deleting compacted files.. so delete it
-          if (!fs.deleteRecursively(file.getPath()))
-            log.warn("Delete of file: {} return false", file.getPath());
-          continue;
-        }
-        // compaction did not finish, so put files back
-
-        // reset path and filename for rest of loop
-        filename = filename.split("\\+", 3)[2];
-        path = path.substring(0, path.lastIndexOf("/delete+")) + "/" + filename;
-
-        DatafileManager.rename(fs, file.getPath(), new Path(path));
-      }
-
-      if (filename.endsWith("_tmp")) {
-        if (deleteTmp) {
-          log.warn("cleaning up old tmp file: {}", path);
-          if (!fs.deleteRecursively(file.getPath()))
-            log.warn("Delete of tmp file: {} return false", file.getPath());
-
-        }
-        continue;
-      }
-
-      if (!filename.startsWith(Constants.MAPFILE_EXTENSION + "_")
-          && !FileOperations.getValidExtensions().contains(filename.split("\\.")[1])) {
-        log.error("unknown file in tablet: {}", path);
-        continue;
-      }
-
-      goodFiles.add(path);
-    }
-
-    return goodFiles;
-  }
-}
diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/Tablet.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/Tablet.java
index 981d82b..1e89624 100644
--- a/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/Tablet.java
+++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/Tablet.java
@@ -38,11 +38,10 @@
 import java.util.Set;
 import java.util.SortedMap;
 import java.util.TreeMap;
-import java.util.concurrent.CopyOnWriteArrayList;
+import java.util.concurrent.ConcurrentHashMap;
 import java.util.concurrent.TimeUnit;
 import java.util.concurrent.atomic.AtomicBoolean;
 import java.util.concurrent.atomic.AtomicLong;
-import java.util.concurrent.atomic.AtomicReference;
 import java.util.concurrent.locks.ReentrantLock;
 
 import org.apache.accumulo.core.Constants;
@@ -53,8 +52,8 @@
 import org.apache.accumulo.core.clientImpl.DurabilityImpl;
 import org.apache.accumulo.core.clientImpl.Tables;
 import org.apache.accumulo.core.conf.AccumuloConfiguration;
+import org.apache.accumulo.core.conf.AccumuloConfiguration.Deriver;
 import org.apache.accumulo.core.conf.ConfigurationCopy;
-import org.apache.accumulo.core.conf.ConfigurationObserver;
 import org.apache.accumulo.core.conf.Property;
 import org.apache.accumulo.core.constraints.Violations;
 import org.apache.accumulo.core.data.ByteSequence;
@@ -78,8 +77,8 @@
 import org.apache.accumulo.core.master.thrift.BulkImportState;
 import org.apache.accumulo.core.master.thrift.TabletLoadState;
 import org.apache.accumulo.core.metadata.MetadataTable;
-import org.apache.accumulo.core.metadata.RootTable;
 import org.apache.accumulo.core.metadata.schema.DataFileValue;
+import org.apache.accumulo.core.metadata.schema.MetadataTime;
 import org.apache.accumulo.core.protobuf.ProtobufUtil;
 import org.apache.accumulo.core.replication.ReplicationConfigurationUtil;
 import org.apache.accumulo.core.security.Authorizations;
@@ -119,7 +118,6 @@
 import org.apache.accumulo.tserver.ConditionCheckerContext.ConditionChecker;
 import org.apache.accumulo.tserver.InMemoryMap;
 import org.apache.accumulo.tserver.MinorCompactionReason;
-import org.apache.accumulo.tserver.TConstraintViolationException;
 import org.apache.accumulo.tserver.TabletServer;
 import org.apache.accumulo.tserver.TabletServerResourceManager.TabletResourceManager;
 import org.apache.accumulo.tserver.TabletStatsKeeper;
@@ -152,12 +150,9 @@
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-import com.google.common.annotations.VisibleForTesting;
 import com.google.common.base.Preconditions;
 import com.google.common.cache.Cache;
-import com.google.common.cache.CacheBuilder;
 import com.google.common.collect.ImmutableSet;
-import com.google.common.collect.ImmutableSet.Builder;
 
 import edu.umd.cs.findbugs.annotations.SuppressFBWarnings;
 
@@ -167,6 +162,8 @@
 public class Tablet {
   private static final Logger log = LoggerFactory.getLogger(Tablet.class);
 
+  private static final byte[] EMPTY_BYTES = new byte[0];
+
   private final TabletServer tabletServer;
   private final ServerContext context;
   private final KeyExtent extent;
@@ -223,7 +220,7 @@
   private final Set<MajorCompactionReason> majorCompactionQueued =
       Collections.synchronizedSet(EnumSet.noneOf(MajorCompactionReason.class));
 
-  private final AtomicReference<ConstraintChecker> constraintChecker = new AtomicReference<>();
+  private final Deriver<ConstraintChecker> constraintChecker;
 
   private int writesInProgress = 0;
 
@@ -241,7 +238,7 @@
   private final Rate ingestByteRate = new Rate(0.95);
   private long ingestBytes = 0;
 
-  private byte[] defaultSecurityLabel = new byte[0];
+  private final Deriver<byte[]> defaultSecurityLabel;
 
   private long lastMinorCompactionFinishTime = 0;
   private long lastMapFileImportTime = 0;
@@ -252,14 +249,17 @@
   private final Rate scannedRate = new Rate(0.95);
   private final AtomicLong scannedCount = new AtomicLong(0);
 
-  private final ConfigurationObserver configObserver;
-
   // Files that are currently in the process of bulk importing. Access to this is protected by the
   // tablet lock.
   private final Set<FileRef> bulkImporting = new HashSet<>();
 
-  // Files that were successfully bulk imported.
-  private final Cache<Long,List<FileRef>> bulkImported = CacheBuilder.newBuilder().build();
+  // Files that were successfully bulk imported. Using a concurrent map supports non-locking
+  // operations on the key set, which is useful for the periodic task that cleans up completed
+  // bulk imports across all tablets. However, the values of this map are ArrayLists, which do
+  // not support concurrent access. That is safe because all operations on the values happen
+  // while the tablet lock is held.
+  private final ConcurrentHashMap<Long,List<FileRef>> bulkImported = new ConcurrentHashMap<>();
 
   private final int logId;
 
@@ -303,32 +303,9 @@
     }
   }
 
-  /**
-   * Only visible for testing
-   */
-  @VisibleForTesting
-  protected Tablet(TabletTime tabletTime, String tabletDirectory, int logId, Path location,
-      DatafileManager datafileManager, TabletServer tabletServer,
-      TabletResourceManager tabletResources, TabletMemory tabletMemory,
-      TableConfiguration tableConfiguration, KeyExtent extent,
-      ConfigurationObserver configObserver) {
-    this.tabletTime = tabletTime;
-    this.tabletDirectory = tabletDirectory;
-    this.logId = logId;
-    this.location = location;
-    this.datafileManager = datafileManager;
-    this.tabletServer = tabletServer;
-    this.context = tabletServer.getContext();
-    this.tabletResources = tabletResources;
-    this.tabletMemory = tabletMemory;
-    this.tableConfiguration = tableConfiguration;
-    this.extent = extent;
-    this.configObserver = configObserver;
-    this.splitCreationTime = 0;
-  }
-
   public Tablet(final TabletServer tabletServer, final KeyExtent extent,
-      final TabletResourceManager trm, TabletData data) throws IOException {
+      final TabletResourceManager trm, TabletData data)
+      throws IOException, IllegalArgumentException {
 
     this.tabletServer = tabletServer;
     this.context = tabletServer.getContext();
@@ -371,58 +348,25 @@
     this.location = locationPath;
     this.tabletDirectory = tabletPaths.dir;
     for (Entry<Long,List<FileRef>> entry : data.getBulkImported().entrySet()) {
-      this.bulkImported.put(entry.getKey(), new CopyOnWriteArrayList<>(entry.getValue()));
+      this.bulkImported.put(entry.getKey(), new ArrayList<>(entry.getValue()));
     }
-    setupDefaultSecurityLabels(extent);
 
     final List<LogEntry> logEntries = tabletPaths.logEntries;
     final SortedMap<FileRef,DataFileValue> datafiles = tabletPaths.datafiles;
 
-    tableConfiguration.addObserver(configObserver = new ConfigurationObserver() {
+    constraintChecker = tableConfiguration.newDeriver(ConstraintChecker::new);
 
-      private void reloadConstraints() {
-        log.debug("Reloading constraints for extent: " + extent);
-        constraintChecker.set(new ConstraintChecker(tableConfiguration));
-      }
+    if (extent.isMeta()) {
+      defaultSecurityLabel = () -> EMPTY_BYTES;
+    } else {
+      defaultSecurityLabel = tableConfiguration.newDeriver(conf -> {
+        return new ColumnVisibility(conf.get(Property.TABLE_DEFAULT_SCANTIME_VISIBILITY))
+            .getExpression();
+      });
+    }
 
-      @Override
-      public void propertiesChanged() {
-        reloadConstraints();
-
-        try {
-          setupDefaultSecurityLabels(extent);
-        } catch (Exception e) {
-          log.error("Failed to reload default security labels for extent: {}", extent);
-        }
-      }
-
-      @Override
-      public void propertyChanged(String prop) {
-        if (prop.startsWith(Property.TABLE_CONSTRAINT_PREFIX.getKey())) {
-          reloadConstraints();
-        } else if (prop.equals(Property.TABLE_DEFAULT_SCANTIME_VISIBILITY.getKey())) {
-          try {
-            log.info("Default security labels changed for extent: {}", extent);
-            setupDefaultSecurityLabels(extent);
-          } catch (Exception e) {
-            log.error("Failed to reload default security labels for extent: {}", extent);
-          }
-        }
-
-      }
-
-      @Override
-      public void sessionExpired() {
-        log.trace("Session expired, no longer updating per table props...");
-      }
-
-    });
-
-    tableConfiguration.getNamespaceConfiguration().addObserver(configObserver);
     tabletMemory = new TabletMemory(this);
 
-    // Force a load of any per-table properties
-    configObserver.propertiesChanged();
     if (!logEntries.isEmpty()) {
       log.info("Starting Write-Ahead Log recovery for {}", this.extent);
       final AtomicLong entriesUsedOnTablet = new AtomicLong(0);
@@ -557,21 +501,6 @@
     }
   }
 
-  private void setupDefaultSecurityLabels(KeyExtent extent) {
-    if (extent.isMeta()) {
-      defaultSecurityLabel = new byte[0];
-    } else {
-      try {
-        ColumnVisibility cv = new ColumnVisibility(
-            tableConfiguration.get(Property.TABLE_DEFAULT_SCANTIME_VISIBILITY));
-        this.defaultSecurityLabel = cv.getExpression();
-      } catch (Exception e) {
-        log.error("Error setting up default security label {}", e.getMessage(), e);
-        this.defaultSecurityLabel = new byte[0];
-      }
-    }
-  }
-
   private LookupResult lookup(SortedKeyValueIterator<Key,Value> mmfi, List<Range> ranges,
       HashSet<Column> columnSet, List<KVEntry> results, long maxResultsSize, long batchTimeOut)
       throws IOException {
@@ -723,7 +652,7 @@
       AtomicBoolean iFlag) throws IOException {
 
     ScanDataSource dataSource =
-        new ScanDataSource(this, authorizations, this.defaultSecurityLabel, iFlag);
+        new ScanDataSource(this, authorizations, this.defaultSecurityLabel.derive(), iFlag);
 
     try {
       SortedKeyValueIterator<Key,Value> iter = new SourceSwitchingIterator(dataSource);
@@ -760,8 +689,9 @@
       tabletRange.clip(range);
     }
 
-    ScanDataSource dataSource = new ScanDataSource(this, authorizations, this.defaultSecurityLabel,
-        columns, ssiList, ssio, interruptFlag, samplerConfig, batchTimeOut, classLoaderContext);
+    ScanDataSource dataSource =
+        new ScanDataSource(this, authorizations, this.defaultSecurityLabel.derive(), columns,
+            ssiList, ssio, interruptFlag, samplerConfig, batchTimeOut, classLoaderContext);
 
     LookupResult result = null;
 
@@ -881,8 +811,9 @@
     // then clip will throw an exception
     extent.toDataRange().clip(range);
 
-    ScanOptions opts = new ScanOptions(num, authorizations, this.defaultSecurityLabel, columns,
-        ssiList, ssio, interruptFlag, isolated, samplerConfig, batchTimeOut, classLoaderContext);
+    ScanOptions opts =
+        new ScanOptions(num, authorizations, this.defaultSecurityLabel.derive(), columns, ssiList,
+            ssio, interruptFlag, isolated, samplerConfig, batchTimeOut, classLoaderContext);
     return new Scanner(this, range, opts);
   }
 
@@ -1188,62 +1119,60 @@
     return commitSession;
   }
 
-  public void checkConstraints() {
-    ConstraintChecker cc = constraintChecker.get();
-
-    if (cc.classLoaderChanged()) {
-      ConstraintChecker ncc = new ConstraintChecker(tableConfiguration);
-      constraintChecker.compareAndSet(cc, ncc);
-    }
-  }
-
-  public CommitSession prepareMutationsForCommit(TservConstraintEnv cenv, List<Mutation> mutations)
-      throws TConstraintViolationException {
-
-    ConstraintChecker cc = constraintChecker.get();
-
-    List<Mutation> violators = null;
-    Violations violations = new Violations();
+  public PreparedMutations prepareMutationsForCommit(final TservConstraintEnv cenv,
+      final List<Mutation> mutations) {
     cenv.setExtent(extent);
+    final ConstraintChecker constraints = constraintChecker.derive();
+
+    // Check each mutation for any constraint violations.
+    Violations violations = null;
+    Set<Mutation> violators = null;
+    List<Mutation> nonViolators = null;
+
     for (Mutation mutation : mutations) {
-      Violations more = cc.check(cenv, mutation);
-      if (more != null) {
-        violations.add(more);
-        if (violators == null) {
-          violators = new ArrayList<>();
+      Violations mutationViolations = constraints.check(cenv, mutation);
+      if (mutationViolations != null) {
+        if (violations == null) {
+          violations = new Violations();
+          violators = new HashSet<>();
         }
+
+        violations.add(mutationViolations);
         violators.add(mutation);
       }
     }
 
-    long time = tabletTime.setUpdateTimes(mutations);
-
-    if (!violations.isEmpty()) {
-
-      HashSet<Mutation> violatorsSet = new HashSet<>(violators);
-      ArrayList<Mutation> nonViolators = new ArrayList<>();
-
+    if (violations == null) {
+      // If there are no violations, use the original list for non-violators.
+      nonViolators = mutations;
+      violators = Collections.emptySet();
+      violations = Violations.EMPTY;
+    } else if (violators.size() != mutations.size()) {
+      // Otherwise, find all non-violators.
+      nonViolators = new ArrayList<>(mutations.size() - violators.size());
       for (Mutation mutation : mutations) {
-        if (!violatorsSet.contains(mutation)) {
+        if (!violators.contains(mutation)) {
           nonViolators.add(mutation);
         }
       }
-
-      CommitSession commitSession = null;
-
-      if (nonViolators.size() > 0) {
-        // if everything is a violation, then it is expected that
-        // code calling this will not log or commit
-        commitSession = finishPreparingMutations(time);
-        if (commitSession == null) {
-          return null;
-        }
-      }
-
-      throw new TConstraintViolationException(violations, violators, nonViolators, commitSession);
+    } else {
+      // all mutations violated a constraint
+      nonViolators = Collections.emptyList();
     }
 
-    return finishPreparingMutations(time);
+    // If there are any mutations that do not violate the constraints, attempt to prepare the tablet
+    // and retrieve the commit session.
+    CommitSession cs = null;
+    if (!nonViolators.isEmpty()) {
+      long time = tabletTime.setUpdateTimes(nonViolators);
+      cs = finishPreparingMutations(time);
+      if (cs == null) {
+        // tablet is closed
+        return new PreparedMutations();
+      }
+    }
+
+    return new PreparedMutations(cs, nonViolators, violations, violators);
   }
 
   private synchronized void incrementWritesInProgress(CommitSession cs) {
@@ -1470,9 +1399,6 @@
 
     log.debug("TABLET_HIST {} closed", extent);
 
-    tableConfiguration.getNamespaceConfiguration().removeObserver(configObserver);
-    tableConfiguration.removeObserver(configObserver);
-
     if (completeClose) {
       closeState = CloseState.COMPLETE;
     }
@@ -1504,25 +1430,12 @@
         throw new RuntimeException(msg);
       }
 
-      if (extent.isRootTablet()) {
-        if (!fileLog.getSecond().keySet()
-            .equals(getDatafileManager().getDatafileSizes().keySet())) {
-          String msg = "Data file in " + RootTable.NAME + " differ from in memory data " + extent
-              + "  " + fileLog.getSecond().keySet() + "  "
-              + getDatafileManager().getDatafileSizes().keySet();
-          log.error(msg);
-          throw new RuntimeException(msg);
-        }
-      } else {
-        if (!fileLog.getSecond().equals(getDatafileManager().getDatafileSizes())) {
-          String msg =
-              "Data file in " + MetadataTable.NAME + " differ from in memory data " + extent + "  "
-                  + fileLog.getSecond() + "  " + getDatafileManager().getDatafileSizes();
-          log.error(msg);
-          throw new RuntimeException(msg);
-        }
+      if (!fileLog.getSecond().equals(getDatafileManager().getDatafileSizes())) {
+        String msg = "Data files in differ from in memory data " + extent + "  "
+            + fileLog.getSecond() + "  " + getDatafileManager().getDatafileSizes();
+        log.error(msg);
+        throw new RuntimeException(msg);
       }
-
     } catch (Exception e) {
       String msg = "Failed to do close consistency check for tablet " + extent;
       log.error(msg, e);
@@ -1874,14 +1787,6 @@
       majorCompactionState = CompactionState.IN_PROGRESS;
       notifyAll();
 
-      VolumeManager fs = getTabletServer().getFileSystem();
-      if (extent.isRootTablet()) {
-        // very important that we call this before doing major compaction,
-        // otherwise deleted compacted files could possible be brought back
-        // at some point if the file they were compacted to was legitimately
-        // removed by a major compaction
-        RootFiles.cleanupReplacement(fs, fs.listStatus(this.location), false);
-      }
       SortedMap<FileRef,DataFileValue> allFiles = getDatafileManager().getDatafileSizes();
       List<FileRef> inputFiles = new ArrayList<>();
       if (reason == MajorCompactionReason.CHOP) {
@@ -1992,7 +1897,7 @@
             getNextMapFilename((filesToCompact.size() == 0 && !propogateDeletes) ? "A" : "C");
         FileRef compactTmpName = new FileRef(fileName.path() + "_tmp");
 
-        AccumuloConfiguration tableConf = createTableConfiguration(tableConfiguration, plan);
+        AccumuloConfiguration tableConf = createCompactionConfiguration(tableConfiguration, plan);
 
         try (TraceScope span = Trace.startSpan("compactFiles")) {
           CompactionEnv cenv = new CompactionEnv() {
@@ -2069,7 +1974,7 @@
     }
   }
 
-  protected AccumuloConfiguration createTableConfiguration(TableConfiguration base,
+  protected static AccumuloConfiguration createCompactionConfiguration(TableConfiguration base,
       CompactionPlan plan) {
     if (plan == null || plan.writeParameters == null) {
       return base;
@@ -2340,22 +2245,22 @@
       log.debug("Files for low split {} {}", low, lowDatafileSizes.keySet());
       log.debug("Files for high split {} {}", high, highDatafileSizes.keySet());
 
-      String time = tabletTime.getMetadataValue();
+      MetadataTime time = tabletTime.getMetadataTime();
 
       MetadataTableUtil.splitTablet(high, extent.getPrevEndRow(), splitRatio,
           getTabletServer().getContext(), getTabletServer().getLock());
       MasterMetadataUtil.addNewTablet(getTabletServer().getContext(), low, lowDirectory,
-          getTabletServer().getTabletSession(), lowDatafileSizes, getBulkIngestedFiles(), time,
-          lastFlushID, lastCompactID, getTabletServer().getLock());
+          getTabletServer().getTabletSession(), lowDatafileSizes, bulkImported, time, lastFlushID,
+          lastCompactID, getTabletServer().getLock());
       MetadataTableUtil.finishSplit(high, highDatafileSizes, highDatafilesToRemove,
           getTabletServer().getContext(), getTabletServer().getLock());
 
       log.debug("TABLET_HIST {} split {} {}", extent, low, high);
 
       newTablets.put(high, new TabletData(tabletDirectory, highDatafileSizes, time, lastFlushID,
-          lastCompactID, lastLocation, getBulkIngestedFiles()));
+          lastCompactID, lastLocation, bulkImported));
       newTablets.put(low, new TabletData(lowDirectory, lowDatafileSizes, time, lastFlushID,
-          lastCompactID, lastLocation, getBulkIngestedFiles()));
+          lastCompactID, lastLocation, bulkImported));
 
       long t2 = System.currentTimeMillis();
 
@@ -2437,7 +2342,7 @@
             "Timeout waiting " + (lockWait / 1000.) + " seconds to get tablet lock for " + extent);
       }
 
-      List<FileRef> alreadyImported = bulkImported.getIfPresent(tid);
+      List<FileRef> alreadyImported = bulkImported.get(tid);
       if (alreadyImported != null) {
         for (FileRef entry : alreadyImported) {
           if (fileMap.remove(entry) != null) {
@@ -2484,7 +2389,7 @@
         }
 
         try {
-          bulkImported.get(tid, ArrayList::new).addAll(fileMap.keySet());
+          bulkImported.computeIfAbsent(tid, k -> new ArrayList<>()).addAll(fileMap.keySet());
         } catch (Exception ex) {
           log.info(ex.toString(), ex);
         }
@@ -2517,7 +2422,7 @@
      * Ensuring referencedLogs accurately tracks these sets ensures in use walogs are not GCed.
      */
 
-    Builder<DfsLogger> builder = ImmutableSet.builder();
+    var builder = ImmutableSet.<DfsLogger>builder();
     builder.addAll(currentLogs);
     builder.addAll(otherLogs);
     referencedLogs = builder.build();
@@ -2714,7 +2619,7 @@
       } else {
         clazz = AccumuloVFSClassLoader.loadClass(clazzName, CompactionStrategy.class);
       }
-      CompactionStrategy strategy = clazz.newInstance();
+      CompactionStrategy strategy = clazz.getDeclaredConstructor().newInstance();
       strategy.init(strategyConfig.getOptions());
       return strategy;
     } catch (Exception e) {
@@ -2815,7 +2720,7 @@
       }
 
       MetadataTableUtil.updateTabletDataFile(tid, extent, paths,
-          tabletTime.getMetadataValue(persistedTime), getTabletServer().getContext(),
+          tabletTime.getMetadataTime(persistedTime), getTabletServer().getContext(),
           getTabletServer().getLock());
     }
 
@@ -2828,10 +2733,10 @@
         persistedTime = maxCommittedTime;
       }
 
-      String time = tabletTime.getMetadataValue(persistedTime);
       MasterMetadataUtil.updateTabletDataFile(getTabletServer().getContext(), extent, newDatafile,
-          absMergeFile, dfv, time, filesInUseByScans, tabletServer.getClientAddressString(),
-          tabletServer.getLock(), unusedWalLogs, lastLocation, flushId);
+          absMergeFile, dfv, tabletTime.getMetadataTime(persistedTime), filesInUseByScans,
+          tabletServer.getClientAddressString(), tabletServer.getLock(), unusedWalLogs,
+          lastLocation, flushId);
     }
 
   }
@@ -2947,14 +2852,12 @@
     }
   }
 
-  public Map<Long,List<FileRef>> getBulkIngestedFiles() {
-    return new HashMap<>(bulkImported.asMap());
+  public Set<Long> getBulkIngestedTxIds() {
+    return bulkImported.keySet();
   }
 
   public void cleanupBulkLoadedFiles(Set<Long> tids) {
-    for (Long tid : tids) {
-      bulkImported.invalidate(tid);
-    }
+    bulkImported.keySet().removeAll(tids);
   }
 
 }
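
The observer-based reload logic removed above is replaced by config-derived values: each `Deriver` lazily recomputes its value from the latest table configuration when `derive()` is called, so scans and constraint checks always see current settings without registering observers. Below is a minimal, self-contained sketch of the idea; `Config`, its update counter, and all names here are hypothetical stand-ins, not Accumulo's actual `TableConfiguration.newDeriver`.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.Function;

// Sketch only: Config and its update counter approximate how a deriver can
// track a changing configuration without observer callbacks.
public class DeriverSketch {
  interface Deriver<T> {
    T derive();
  }

  static class Config {
    private final Map<String,String> props = new ConcurrentHashMap<>();
    private final AtomicLong updateCount = new AtomicLong();

    void set(String key, String value) {
      props.put(key, value);
      updateCount.incrementAndGet();
    }

    String get(String key) {
      return props.getOrDefault(key, "");
    }

    <T> Deriver<T> newDeriver(Function<Config,T> converter) {
      return new Deriver<T>() {
        private long derivedAt = -1;
        private T value;

        @Override
        public synchronized T derive() {
          // Recompute only when the config has changed since the last call.
          long count = updateCount.get();
          if (derivedAt != count) {
            value = converter.apply(Config.this);
            derivedAt = count;
          }
          return value;
        }
      };
    }
  }

  public static void main(String[] args) {
    Config conf = new Config();
    Deriver<byte[]> label = conf.newDeriver(c -> c.get("default.visibility").getBytes());
    System.out.println(label.derive().length); // 0
    conf.set("default.visibility", "public");
    System.out.println(label.derive().length); // 6
  }
}
```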
diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/TabletData.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/TabletData.java
index 917b0b6..4f501a7 100644
--- a/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/TabletData.java
+++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/TabletData.java
@@ -16,59 +16,28 @@
  */
 package org.apache.accumulo.tserver.tablet;
 
-import static org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.ServerColumnFamily.COMPACT_COLUMN;
-import static org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.ServerColumnFamily.DIRECTORY_COLUMN;
-import static org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.ServerColumnFamily.FLUSH_COLUMN;
-import static org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.TabletColumnFamily.PREV_ROW_COLUMN;
-
-import java.io.IOException;
 import java.util.ArrayList;
-import java.util.Collection;
 import java.util.HashMap;
 import java.util.HashSet;
-import java.util.Iterator;
 import java.util.List;
 import java.util.Map;
-import java.util.Map.Entry;
 import java.util.SortedMap;
 import java.util.TreeMap;
 
-import org.apache.accumulo.core.conf.AccumuloConfiguration;
-import org.apache.accumulo.core.data.Key;
-import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.dataImpl.KeyExtent;
-import org.apache.accumulo.core.file.FileOperations;
-import org.apache.accumulo.core.file.FileSKVIterator;
-import org.apache.accumulo.core.metadata.RootTable;
 import org.apache.accumulo.core.metadata.schema.DataFileValue;
-import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.BulkFileColumnFamily;
-import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.DataFileColumnFamily;
-import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.LastLocationColumnFamily;
-import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.LogColumnFamily;
-import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.ScanFileColumnFamily;
-import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.ServerColumnFamily;
+import org.apache.accumulo.core.metadata.schema.MetadataTime;
+import org.apache.accumulo.core.metadata.schema.TabletMetadata;
 import org.apache.accumulo.core.tabletserver.log.LogEntry;
-import org.apache.accumulo.server.ServerContext;
 import org.apache.accumulo.server.fs.FileRef;
 import org.apache.accumulo.server.fs.VolumeManager;
-import org.apache.accumulo.server.fs.VolumeUtil;
 import org.apache.accumulo.server.master.state.TServerInstance;
-import org.apache.accumulo.server.tablets.TabletTime;
-import org.apache.accumulo.server.util.MetadataTableUtil;
-import org.apache.hadoop.fs.FileStatus;
-import org.apache.hadoop.fs.FileSystem;
-import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.io.Text;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
 
 /*
  * Basic information needed to create a tablet.
  */
 public class TabletData {
-  private static Logger log = LoggerFactory.getLogger(TabletData.class);
-
-  private String time = null;
+  private MetadataTime time = null;
   private SortedMap<FileRef,DataFileValue> dataFiles = new TreeMap<>();
   private List<LogEntry> logEntries = new ArrayList<>();
   private HashSet<FileRef> scanFiles = new HashSet<>();
@@ -80,98 +49,31 @@
   private String directory = null;
 
   // Read tablet data from metadata tables
-  public TabletData(KeyExtent extent, VolumeManager fs, Iterator<Entry<Key,Value>> entries) {
-    final Text family = new Text();
-    Text rowName = extent.getMetadataEntry();
-    while (entries.hasNext()) {
-      Entry<Key,Value> entry = entries.next();
-      Key key = entry.getKey();
-      Value value = entry.getValue();
-      key.getColumnFamily(family);
-      if (key.compareRow(rowName) != 0) {
-        log.info("Unexpected metadata table entry for {}: {}", extent, key.getRow());
-        continue;
-      }
-      if (ServerColumnFamily.TIME_COLUMN.hasColumns(entry.getKey())) {
-        if (time == null) {
-          time = value.toString();
-        }
-      } else if (DataFileColumnFamily.NAME.equals(family)) {
-        FileRef ref = new FileRef(fs, key);
-        dataFiles.put(ref, new DataFileValue(entry.getValue().get()));
-      } else if (DIRECTORY_COLUMN.hasColumns(key)) {
-        directory = value.toString();
-      } else if (family.equals(LogColumnFamily.NAME)) {
-        logEntries.add(LogEntry.fromKeyValue(key, entry.getValue()));
-      } else if (family.equals(ScanFileColumnFamily.NAME)) {
-        scanFiles.add(new FileRef(fs, key));
-      } else if (FLUSH_COLUMN.hasColumns(key)) {
-        flushID = Long.parseLong(value.toString());
-      } else if (COMPACT_COLUMN.hasColumns(key)) {
-        compactID = Long.parseLong(entry.getValue().toString());
-      } else if (family.equals(LastLocationColumnFamily.NAME)) {
-        lastLocation = new TServerInstance(value, key.getColumnQualifier());
-      } else if (family.equals(BulkFileColumnFamily.NAME)) {
-        Long id = MetadataTableUtil.getBulkLoadTid(value);
-        bulkImported.computeIfAbsent(id, l -> new ArrayList<FileRef>()).add(new FileRef(fs, key));
-      } else if (PREV_ROW_COLUMN.hasColumns(key)) {
-        KeyExtent check = new KeyExtent(key.getRow(), value);
-        if (!check.equals(extent)) {
-          throw new RuntimeException("Found bad entry for " + extent + ": " + check);
-        }
-      }
-    }
-    if (time == null && dataFiles.isEmpty() && extent.equals(RootTable.OLD_EXTENT)) {
-      // recovery... old root tablet has no data, so time doesn't matter:
-      time = TabletTime.LOGICAL_TIME_ID + "" + Long.MIN_VALUE;
-    }
-  }
+  public TabletData(KeyExtent extent, VolumeManager fs, TabletMetadata meta) {
 
-  // Read basic root table metadata from zookeeper
-  public TabletData(ServerContext context, VolumeManager fs, AccumuloConfiguration conf)
-      throws IOException {
-    directory =
-        VolumeUtil.switchRootTableVolume(context, MetadataTableUtil.getRootTabletDir(context));
+    this.time = meta.getTime();
+    this.compactID = meta.getCompactId().orElse(-1);
+    this.flushID = meta.getFlushId().orElse(-1);
+    this.directory = meta.getDir();
+    this.logEntries.addAll(meta.getLogs());
+    meta.getScans().forEach(path -> scanFiles.add(new FileRef(fs, path, meta.getTableId())));
 
-    Path location = new Path(directory);
+    if (meta.getLast() != null) {
+      this.lastLocation = new TServerInstance(meta.getLast());
+    }
 
-    // cleanReplacement() has special handling for deleting files
-    FileStatus[] files = fs.listStatus(location);
-    Collection<String> goodPaths = RootFiles.cleanupReplacement(fs, files, true);
-    long rtime = Long.MIN_VALUE;
-    for (String good : goodPaths) {
-      Path path = new Path(good);
-      String filename = path.getName();
-      FileRef ref = new FileRef(location + "/" + filename, path);
-      DataFileValue dfv = new DataFileValue(0, 0);
-      dataFiles.put(ref, dfv);
+    meta.getFilesMap().forEach((path, dfv) -> {
+      dataFiles.put(new FileRef(fs, path, meta.getTableId()), dfv);
+    });
 
-      FileSystem ns = fs.getVolumeByPath(path).getFileSystem();
-      long maxTime = -1;
-      try (FileSKVIterator reader = FileOperations.getInstance().newReaderBuilder()
-          .forFile(path.toString(), ns, ns.getConf(), context.getCryptoService())
-          .withTableConfiguration(conf).seekToBeginning().build()) {
-        while (reader.hasTop()) {
-          maxTime = Math.max(maxTime, reader.getTopKey().getTimestamp());
-          reader.next();
-        }
-      }
-      if (maxTime > rtime) {
-        time = TabletTime.LOGICAL_TIME_ID + "" + maxTime;
-        rtime = maxTime;
-      }
-    }
-
-    try {
-      logEntries = MetadataTableUtil.getLogEntries(context, RootTable.EXTENT);
-    } catch (Exception ex) {
-      throw new RuntimeException("Unable to read tablet log entries", ex);
-    }
+    meta.getLoaded().forEach((path, txid) -> {
+      bulkImported.computeIfAbsent(txid, k -> new ArrayList<>())
+          .add(new FileRef(fs, path, meta.getTableId()));
+    });
   }
 
   // Data pulled from an existing tablet to make a split
   public TabletData(String tabletDirectory, SortedMap<FileRef,DataFileValue> highDatafileSizes,
-      String time, long lastFlushID, long lastCompactID, TServerInstance lastLocation,
+      MetadataTime time, long lastFlushID, long lastCompactID, TServerInstance lastLocation,
       Map<Long,List<FileRef>> bulkIngestedFiles) {
     this.directory = tabletDirectory;
     this.dataFiles = highDatafileSizes;
@@ -183,7 +85,7 @@
     this.splitTime = System.currentTimeMillis();
   }
 
-  public String getTime() {
+  public MetadataTime getTime() {
     return time;
   }
 
diff --git a/server/tserver/src/test/java/org/apache/accumulo/tserver/CheckTabletMetadataTest.java b/server/tserver/src/test/java/org/apache/accumulo/tserver/CheckTabletMetadataTest.java
index 15d552f..bdfa737 100644
--- a/server/tserver/src/test/java/org/apache/accumulo/tserver/CheckTabletMetadataTest.java
+++ b/server/tserver/src/test/java/org/apache/accumulo/tserver/CheckTabletMetadataTest.java
@@ -17,9 +17,11 @@
 
 package org.apache.accumulo.tserver;
 
+import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertNotNull;
-import static org.junit.Assert.assertNull;
+import static org.junit.Assert.assertTrue;
 
+import java.util.EnumSet;
 import java.util.TreeMap;
 
 import org.apache.accumulo.core.data.Key;
@@ -27,6 +29,8 @@
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.dataImpl.KeyExtent;
 import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection;
+import org.apache.accumulo.core.metadata.schema.TabletMetadata;
+import org.apache.accumulo.core.metadata.schema.TabletMetadata.ColumnType;
 import org.apache.accumulo.core.util.ColumnFQ;
 import org.apache.accumulo.server.master.state.TServerInstance;
 import org.apache.hadoop.io.Text;
@@ -55,9 +59,11 @@
 
   private static void assertFail(TreeMap<Key,Value> tabletMeta, KeyExtent ke, TServerInstance tsi) {
     try {
-      assertNull(TabletServer.checkTabletMetadata(ke, tsi, tabletMeta, ke.getMetadataEntry()));
+      TabletMetadata tm = TabletMetadata.convertRow(tabletMeta.entrySet().iterator(),
+          EnumSet.allOf(ColumnType.class), true);
+      assertFalse(TabletServer.checkTabletMetadata(ke, tsi, tm));
     } catch (Exception e) {
-
+      e.printStackTrace();
     }
   }
 
@@ -66,9 +72,11 @@
     TreeMap<Key,Value> copy = new TreeMap<>(tabletMeta);
     assertNotNull(copy.remove(keyToDelete));
     try {
-      assertNull(TabletServer.checkTabletMetadata(ke, tsi, copy, ke.getMetadataEntry()));
+      TabletMetadata tm = TabletMetadata.convertRow(copy.entrySet().iterator(),
+          EnumSet.allOf(ColumnType.class), true);
+      assertFalse(TabletServer.checkTabletMetadata(ke, tsi, tm));
     } catch (Exception e) {
-
+      e.printStackTrace();
     }
   }
 
@@ -87,7 +95,9 @@
 
     TServerInstance tsi = new TServerInstance("127.0.0.1:9997", 4);
 
-    assertNotNull(TabletServer.checkTabletMetadata(ke, tsi, tabletMeta, ke.getMetadataEntry()));
+    TabletMetadata tm = TabletMetadata.convertRow(tabletMeta.entrySet().iterator(),
+        EnumSet.allOf(ColumnType.class), true);
+    assertTrue(TabletServer.checkTabletMetadata(ke, tsi, tm));
 
     assertFail(tabletMeta, ke, new TServerInstance("127.0.0.1:9998", 4));
     assertFail(tabletMeta, ke, new TServerInstance("127.0.0.1:9998", 5));
diff --git a/server/tserver/src/test/java/org/apache/accumulo/tserver/InMemoryMapTest.java b/server/tserver/src/test/java/org/apache/accumulo/tserver/InMemoryMapTest.java
index a7b72db..412185b 100644
--- a/server/tserver/src/test/java/org/apache/accumulo/tserver/InMemoryMapTest.java
+++ b/server/tserver/src/test/java/org/apache/accumulo/tserver/InMemoryMapTest.java
@@ -27,6 +27,7 @@
 import java.util.Arrays;
 import java.util.Collections;
 import java.util.HashSet;
+import java.util.Map;
 import java.util.Map.Entry;
 import java.util.Set;
 import java.util.TreeMap;
@@ -65,8 +66,6 @@
 import org.junit.rules.ExpectedException;
 import org.junit.rules.TemporaryFolder;
 
-import com.google.common.collect.ImmutableMap;
-
 import edu.umd.cs.findbugs.annotations.SuppressFBWarnings;
 
 @SuppressFBWarnings(value = "PATH_TRAVERSAL_IN", justification = "paths not set by user input")
@@ -561,7 +560,7 @@
   public void testSample() throws Exception {
 
     SamplerConfigurationImpl sampleConfig = new SamplerConfigurationImpl(RowSampler.class.getName(),
-        ImmutableMap.of("hasher", "murmur3_32", "modulus", "7"));
+        Map.of("hasher", "murmur3_32", "modulus", "7"));
     Sampler sampler = SamplerFactory.newSampler(sampleConfig, DefaultConfiguration.getInstance());
 
     ConfigurationCopy config1 = newConfig(tempFolder.newFolder().getAbsolutePath());
@@ -656,7 +655,7 @@
   private void runInterruptSampleTest(boolean deepCopy, boolean delete, boolean dcAfterDelete)
       throws Exception {
     SamplerConfigurationImpl sampleConfig1 = new SamplerConfigurationImpl(
-        RowSampler.class.getName(), ImmutableMap.of("hasher", "murmur3_32", "modulus", "2"));
+        RowSampler.class.getName(), Map.of("hasher", "murmur3_32", "modulus", "2"));
     Sampler sampler = SamplerFactory.newSampler(sampleConfig1, DefaultConfiguration.getInstance());
 
     ConfigurationCopy config1 = newConfig(tempFolder.newFolder().getAbsolutePath());
@@ -717,7 +716,7 @@
   @Test(expected = SampleNotPresentException.class)
   public void testDifferentSampleConfig() throws Exception {
     SamplerConfigurationImpl sampleConfig = new SamplerConfigurationImpl(RowSampler.class.getName(),
-        ImmutableMap.of("hasher", "murmur3_32", "modulus", "7"));
+        Map.of("hasher", "murmur3_32", "modulus", "7"));
 
     ConfigurationCopy config1 = newConfig(tempFolder.newFolder().getAbsolutePath());
     for (Entry<String,String> entry : sampleConfig.toTablePropertiesMap().entrySet()) {
@@ -729,7 +728,7 @@
     mutate(imm, "r", "cf:cq", 5, "b");
 
     SamplerConfigurationImpl sampleConfig2 = new SamplerConfigurationImpl(
-        RowSampler.class.getName(), ImmutableMap.of("hasher", "murmur3_32", "modulus", "9"));
+        RowSampler.class.getName(), Map.of("hasher", "murmur3_32", "modulus", "9"));
     MemoryIterator iter = imm.skvIterator(sampleConfig2);
     iter.seek(new Range(), LocalityGroupUtil.EMPTY_CF_SET, false);
   }
@@ -741,7 +740,7 @@
     mutate(imm, "r", "cf:cq", 5, "b");
 
     SamplerConfigurationImpl sampleConfig2 = new SamplerConfigurationImpl(
-        RowSampler.class.getName(), ImmutableMap.of("hasher", "murmur3_32", "modulus", "9"));
+        RowSampler.class.getName(), Map.of("hasher", "murmur3_32", "modulus", "9"));
     MemoryIterator iter = imm.skvIterator(sampleConfig2);
     iter.seek(new Range(), LocalityGroupUtil.EMPTY_CF_SET, false);
   }
@@ -751,7 +750,7 @@
     InMemoryMap imm = newInMemoryMap(false, tempFolder.newFolder().getAbsolutePath());
 
     SamplerConfigurationImpl sampleConfig2 = new SamplerConfigurationImpl(
-        RowSampler.class.getName(), ImmutableMap.of("hasher", "murmur3_32", "modulus", "9"));
+        RowSampler.class.getName(), Map.of("hasher", "murmur3_32", "modulus", "9"));
 
     // when in mem map is empty should be able to get sample iterator with any sample config
     MemoryIterator iter = imm.skvIterator(sampleConfig2);
@@ -762,7 +761,7 @@
   @Test
   public void testDeferredSamplerCreation() throws Exception {
     SamplerConfigurationImpl sampleConfig1 = new SamplerConfigurationImpl(
-        RowSampler.class.getName(), ImmutableMap.of("hasher", "murmur3_32", "modulus", "9"));
+        RowSampler.class.getName(), Map.of("hasher", "murmur3_32", "modulus", "9"));
 
     ConfigurationCopy config1 = newConfig(tempFolder.newFolder().getAbsolutePath());
     for (Entry<String,String> entry : sampleConfig1.toTablePropertiesMap().entrySet()) {
@@ -773,7 +772,7 @@
 
     // change sampler config after creating in mem map.
     SamplerConfigurationImpl sampleConfig2 = new SamplerConfigurationImpl(
-        RowSampler.class.getName(), ImmutableMap.of("hasher", "murmur3_32", "modulus", "7"));
+        RowSampler.class.getName(), Map.of("hasher", "murmur3_32", "modulus", "7"));
     for (Entry<String,String> entry : sampleConfig2.toTablePropertiesMap().entrySet()) {
       config1.set(entry.getKey(), entry.getValue());
     }
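
These test changes replace Guava's immutable collection factories with the JDK 9+ `Map.of`/`Set.of`/`List.of` equivalents, which likewise return immutable collections and reject null keys and values. A small standalone demonstration:

```java
import java.util.List;
import java.util.Map;
import java.util.Set;

public class CollectionFactoryDemo {
  public static void main(String[] args) {
    // Immutable, null-hostile collections, matching the Guava factories they replace
    Map<String,String> samplerOpts = Map.of("hasher", "murmur3_32", "modulus", "7");
    Set<String> codes = Set.of("ViolationCode1", "ViolationCode2");
    List<Short> vCodes = List.of((short) 2, (short) 4);
    System.out.println(samplerOpts + " " + codes + " " + vCodes);
  }
}
```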
diff --git a/server/tserver/src/test/java/org/apache/accumulo/tserver/TabletServerSyncCheckTest.java b/server/tserver/src/test/java/org/apache/accumulo/tserver/TabletServerSyncCheckTest.java
index 90c495c..5fcecac 100644
--- a/server/tserver/src/test/java/org/apache/accumulo/tserver/TabletServerSyncCheckTest.java
+++ b/server/tserver/src/test/java/org/apache/accumulo/tserver/TabletServerSyncCheckTest.java
@@ -36,8 +36,6 @@
 import org.apache.hadoop.hdfs.DistributedFileSystem;
 import org.junit.Test;
 
-import com.google.common.collect.ImmutableMap;
-
 public class TabletServerSyncCheckTest {
   private static final String DFS_SUPPORT_APPEND = "dfs.support.append";
 
@@ -47,8 +45,7 @@
     conf.set(DFS_SUPPORT_APPEND, "false");
 
     FileSystem fs = new TestFileSystem(conf);
-    TestVolumeManagerImpl vm =
-        new TestVolumeManagerImpl(ImmutableMap.of("foo", new VolumeImpl(fs, "/")));
+    TestVolumeManagerImpl vm = new TestVolumeManagerImpl(Map.of("foo", new VolumeImpl(fs, "/")));
 
     vm.ensureSyncIsEnabled();
   }
diff --git a/server/tserver/src/test/java/org/apache/accumulo/tserver/compaction/CompactionPlanTest.java b/server/tserver/src/test/java/org/apache/accumulo/tserver/compaction/CompactionPlanTest.java
index 1b7d00d..f6bacf7 100644
--- a/server/tserver/src/test/java/org/apache/accumulo/tserver/compaction/CompactionPlanTest.java
+++ b/server/tserver/src/test/java/org/apache/accumulo/tserver/compaction/CompactionPlanTest.java
@@ -24,8 +24,6 @@
 import org.junit.Test;
 import org.junit.rules.ExpectedException;
 
-import com.google.common.collect.ImmutableSet;
-
 public class CompactionPlanTest {
 
   @Rule
@@ -43,7 +41,7 @@
     cp1.deleteFiles.add(fr1);
     cp1.deleteFiles.add(fr2);
 
-    Set<FileRef> allFiles = ImmutableSet.of(fr1, fr2);
+    Set<FileRef> allFiles = Set.of(fr1, fr2);
 
     exception.expect(IllegalStateException.class);
     cp1.validate(allFiles);
@@ -61,7 +59,7 @@
     cp1.inputFiles.add(fr2);
     cp1.inputFiles.add(fr3);
 
-    Set<FileRef> allFiles = ImmutableSet.of(fr1, fr2);
+    Set<FileRef> allFiles = Set.of(fr1, fr2);
 
     exception.expect(IllegalStateException.class);
     cp1.validate(allFiles);
@@ -79,7 +77,7 @@
     cp1.deleteFiles.add(fr2);
     cp1.deleteFiles.add(fr3);
 
-    Set<FileRef> allFiles = ImmutableSet.of(fr1, fr2);
+    Set<FileRef> allFiles = Set.of(fr1, fr2);
 
     exception.expect(IllegalStateException.class);
     cp1.validate(allFiles);
diff --git a/server/tserver/src/test/java/org/apache/accumulo/tserver/constraints/ConstraintCheckerTest.java b/server/tserver/src/test/java/org/apache/accumulo/tserver/constraints/ConstraintCheckerTest.java
index 6e34722..d71d5a5 100644
--- a/server/tserver/src/test/java/org/apache/accumulo/tserver/constraints/ConstraintCheckerTest.java
+++ b/server/tserver/src/test/java/org/apache/accumulo/tserver/constraints/ConstraintCheckerTest.java
@@ -38,8 +38,6 @@
 import org.junit.Before;
 import org.junit.Test;
 
-import com.google.common.collect.ImmutableList;
-import com.google.common.collect.ImmutableSet;
 import com.google.common.collect.Iterables;
 
 public class ConstraintCheckerTest {
@@ -74,7 +72,7 @@
     Constraint c = createMock(Constraint.class);
     short code1 = 2;
     short code2 = 4;
-    List<Short> vCodes = ImmutableList.of(code1, code2);
+    List<Short> vCodes = List.of(code1, code2);
     expect(c.getViolationDescription(code1)).andReturn("ViolationCode1");
     expect(c.getViolationDescription(code2)).andReturn("ViolationCode2");
     expect(c.check(env, m)).andReturn(vCodes);
@@ -122,7 +120,7 @@
     for (ConstraintViolationSummary cvs : cvsList) {
       violationDescs.add(cvs.getViolationDescription());
     }
-    assertEquals(ImmutableSet.of("ViolationCode1", "ViolationCode2"), violationDescs);
+    assertEquals(Set.of("ViolationCode1", "ViolationCode2"), violationDescs);
   }
 
   @Test
diff --git a/server/tserver/src/test/java/org/apache/accumulo/tserver/tablet/TabletMutationPrepAttemptTest.java b/server/tserver/src/test/java/org/apache/accumulo/tserver/tablet/TabletMutationPrepAttemptTest.java
new file mode 100644
index 0000000..2a799e7
--- /dev/null
+++ b/server/tserver/src/test/java/org/apache/accumulo/tserver/tablet/TabletMutationPrepAttemptTest.java
@@ -0,0 +1,78 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.tserver.tablet;
+
+import static org.easymock.EasyMock.mock;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertSame;
+import static org.junit.Assert.assertTrue;
+
+import java.util.ArrayList;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+
+import org.apache.accumulo.core.constraints.Violations;
+import org.apache.accumulo.core.data.Mutation;
+import org.junit.Test;
+
+public class TabletMutationPrepAttemptTest {
+
+  @Test
+  public void ensureTabletClosed() {
+    PreparedMutations prepared = new PreparedMutations();
+    assertTrue(prepared.tabletClosed());
+  }
+
+  @Test(expected = IllegalStateException.class)
+  public void callGetSessionWhenClosed() {
+    PreparedMutations prepared = new PreparedMutations();
+    prepared.getCommitSession();
+  }
+
+  @Test(expected = IllegalStateException.class)
+  public void callGetNonViolatorsWhenClosed() {
+    PreparedMutations prepared = new PreparedMutations();
+    prepared.getNonViolators();
+  }
+
+  @Test(expected = IllegalStateException.class)
+  public void callGetViolatorsWhenClosed() {
+    PreparedMutations prepared = new PreparedMutations();
+    prepared.getViolators();
+  }
+
+  @Test(expected = IllegalStateException.class)
+  public void callGetViolationsWhenClosed() {
+    PreparedMutations prepared = new PreparedMutations();
+    prepared.getViolations();
+  }
+
+  @Test
+  public void testTabletOpen() {
+    CommitSession cs = mock(CommitSession.class);
+    List<Mutation> nonViolators = new ArrayList<>();
+    Violations violations = new Violations();
+    Set<Mutation> violators = new HashSet<>();
+
+    PreparedMutations prepared = new PreparedMutations(cs, nonViolators, violations, violators);
+
+    assertFalse(prepared.tabletClosed());
+    assertSame(cs, prepared.getCommitSession());
+    assertSame(nonViolators, prepared.getNonViolators());
+    assertSame(violations, prepared.getViolations());
+    assertSame(violators, prepared.getViolators());
+  }
+}
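
For context, here is a caller-side sketch of how the new `PreparedMutations` result is consumed in place of catching `TConstraintViolationException`. The import of `TservConstraintEnv` and all arguments are assumptions about the surrounding tserver wiring, not the real update path.

```java
import java.util.List;

import org.apache.accumulo.core.data.Mutation;
import org.apache.accumulo.tserver.TservConstraintEnv;

// Hypothetical caller sketch; compiles against the tserver module, but
// tablet, cenv, and mutations stand in for real session state.
class PreparedMutationsUsageSketch {
  void apply(Tablet tablet, TservConstraintEnv cenv, List<Mutation> mutations) {
    PreparedMutations prepared = tablet.prepareMutationsForCommit(cenv, mutations);

    if (prepared.tabletClosed()) {
      return; // tablet closed mid-prepare; the caller reports the failure
    }

    // Per the code above, the session is null when every mutation violated a
    // constraint, since no commit was prepared in that case.
    CommitSession session = prepared.getCommitSession();
    List<Mutation> commit = prepared.getNonViolators();
    // ... write `commit` to the WAL, commit via `session`, and report
    // prepared.getViolations() / prepared.getViolators() back to the client ...
  }
}
```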
diff --git a/server/tserver/src/test/java/org/apache/accumulo/tserver/tablet/TabletTest.java b/server/tserver/src/test/java/org/apache/accumulo/tserver/tablet/TabletTest.java
index ea9c9ff..2adc9a1 100644
--- a/server/tserver/src/test/java/org/apache/accumulo/tserver/tablet/TabletTest.java
+++ b/server/tserver/src/test/java/org/apache/accumulo/tserver/tablet/TabletTest.java
@@ -21,16 +21,10 @@
 import java.util.Collections;
 
 import org.apache.accumulo.core.conf.AccumuloConfiguration;
-import org.apache.accumulo.core.conf.ConfigurationObserver;
 import org.apache.accumulo.core.conf.Property;
-import org.apache.accumulo.core.dataImpl.KeyExtent;
 import org.apache.accumulo.server.conf.TableConfiguration;
-import org.apache.accumulo.server.tablets.TabletTime;
-import org.apache.accumulo.tserver.TabletServer;
-import org.apache.accumulo.tserver.TabletServerResourceManager.TabletResourceManager;
 import org.apache.accumulo.tserver.compaction.CompactionPlan;
 import org.apache.accumulo.tserver.compaction.WriteParameters;
-import org.apache.hadoop.fs.Path;
 import org.easymock.EasyMock;
 import org.junit.Test;
 
@@ -42,16 +36,6 @@
     CompactionPlan plan = EasyMock.createMock(CompactionPlan.class);
     WriteParameters writeParams = EasyMock.createMock(WriteParameters.class);
     plan.writeParameters = writeParams;
-    DatafileManager dfm = EasyMock.createMock(DatafileManager.class);
-    TabletTime time = EasyMock.createMock(TabletTime.class);
-    TabletServer tserver = EasyMock.createMock(TabletServer.class);
-    TabletResourceManager tserverResourceManager = EasyMock.createMock(TabletResourceManager.class);
-    TabletMemory tabletMemory = EasyMock.createMock(TabletMemory.class);
-    KeyExtent extent = EasyMock.createMock(KeyExtent.class);
-    ConfigurationObserver obs = EasyMock.createMock(ConfigurationObserver.class);
-
-    Tablet tablet = new Tablet(time, "", 0, new Path("/foo"), dfm, tserver, tserverResourceManager,
-        tabletMemory, tableConf, extent, obs);
 
     long hdfsBlockSize = 10000L, blockSize = 5000L, indexBlockSize = 500L;
     int replication = 5;
@@ -66,7 +50,7 @@
 
     EasyMock.replay(tableConf, plan, writeParams);
 
-    AccumuloConfiguration aConf = tablet.createTableConfiguration(tableConf, plan);
+    AccumuloConfiguration aConf = Tablet.createCompactionConfiguration(tableConf, plan);
 
     EasyMock.verify(tableConf, plan, writeParams);
 
@@ -77,5 +61,4 @@
     assertEquals(compressType, aConf.get(Property.TABLE_FILE_COMPRESSION_TYPE));
     assertEquals(replication, Integer.parseInt(aConf.get(Property.TABLE_FILE_REPLICATION)));
   }
-
 }
diff --git a/shell/pom.xml b/shell/pom.xml
index cf94610..b46e455 100644
--- a/shell/pom.xml
+++ b/shell/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.accumulo</groupId>
     <artifactId>accumulo-project</artifactId>
-    <version>2.0.1-SNAPSHOT</version>
+    <version>2.1.0-SNAPSHOT</version>
   </parent>
   <artifactId>accumulo-shell</artifactId>
   <name>Apache Accumulo Shell</name>
@@ -73,10 +73,6 @@
     </dependency>
     <dependency>
       <groupId>org.apache.commons</groupId>
-      <artifactId>commons-lang3</artifactId>
-    </dependency>
-    <dependency>
-      <groupId>org.apache.commons</groupId>
       <artifactId>commons-vfs2</artifactId>
     </dependency>
     <dependency>
diff --git a/shell/src/main/java/org/apache/accumulo/shell/Shell.java b/shell/src/main/java/org/apache/accumulo/shell/Shell.java
index eb6173a..a4176cb 100644
--- a/shell/src/main/java/org/apache/accumulo/shell/Shell.java
+++ b/shell/src/main/java/org/apache/accumulo/shell/Shell.java
@@ -44,6 +44,7 @@
 import java.util.concurrent.TimeUnit;
 
 import org.apache.accumulo.core.Constants;
+import org.apache.accumulo.core.cli.ClientOpts.PasswordConverter;
 import org.apache.accumulo.core.client.Accumulo;
 import org.apache.accumulo.core.client.AccumuloClient;
 import org.apache.accumulo.core.client.AccumuloException;
@@ -329,7 +330,7 @@
         Runtime.getRuntime()
             .addShutdownHook(new Thread(() -> reader.getTerminal().setEchoEnabled(true)));
         // Read password if the user explicitly asked for it, or didn't specify anything at all
-        if ("stdin".equals(password) || password == null) {
+        if (PasswordConverter.STDIN.equals(password) || password == null) {
           password = reader.readLine("Password: ", '*');
         }
         if (password == null) {
@@ -564,7 +565,7 @@
     ShellCompletor userCompletor = null;
 
     if (execFile != null) {
-      try (java.util.Scanner scanner = new java.util.Scanner(execFile, UTF_8.name())) {
+      try (java.util.Scanner scanner = new java.util.Scanner(execFile, UTF_8)) {
         while (scanner.hasNextLine() && !hasExited()) {
           execCommand(scanner.nextLine(), true, isVerbose());
         }
diff --git a/shell/src/main/java/org/apache/accumulo/shell/ShellOptionsJC.java b/shell/src/main/java/org/apache/accumulo/shell/ShellOptionsJC.java
index 674c900..c7c0ddf 100644
--- a/shell/src/main/java/org/apache/accumulo/shell/ShellOptionsJC.java
+++ b/shell/src/main/java/org/apache/accumulo/shell/ShellOptionsJC.java
@@ -16,26 +16,21 @@
  */
 package org.apache.accumulo.shell;
 
-import static java.nio.charset.StandardCharsets.UTF_8;
-
 import java.io.File;
-import java.io.FileNotFoundException;
 import java.util.ArrayList;
 import java.util.LinkedList;
 import java.util.List;
 import java.util.Map;
 import java.util.Properties;
-import java.util.Scanner;
 import java.util.TreeMap;
 
+import org.apache.accumulo.core.cli.ClientOpts;
 import org.apache.accumulo.core.clientImpl.ClientInfoImpl;
 import org.apache.accumulo.core.conf.ClientProperty;
 import org.apache.hadoop.security.UserGroupInformation;
 
 import com.beust.jcommander.DynamicParameter;
-import com.beust.jcommander.IStringConverter;
 import com.beust.jcommander.Parameter;
-import com.beust.jcommander.ParameterException;
 import com.beust.jcommander.converters.FileConverter;
 
 import edu.umd.cs.findbugs.annotations.SuppressFBWarnings;
@@ -45,84 +40,12 @@
   @Parameter(names = {"-u", "--user"}, description = "username")
   private String username = null;
 
-  public static class PasswordConverter implements IStringConverter<String> {
-    public static final String STDIN = "stdin";
-
-    private enum KeyType {
-      PASS("pass:"), ENV("env:") {
-        @Override
-        String process(String value) {
-          return System.getenv(value);
-        }
-      },
-      FILE("file:") {
-        @SuppressFBWarnings(value = "PATH_TRAVERSAL_IN",
-            justification = "app is run in same security context as user providing the filename")
-        @Override
-        String process(String value) {
-          Scanner scanner = null;
-          try {
-            scanner = new Scanner(new File(value), UTF_8.name());
-            return scanner.nextLine();
-          } catch (FileNotFoundException e) {
-            throw new ParameterException(e);
-          } finally {
-            if (scanner != null) {
-              scanner.close();
-            }
-          }
-        }
-      },
-      STDIN(PasswordConverter.STDIN) {
-        @Override
-        public boolean matches(String value) {
-          return prefix.equals(value);
-        }
-
-        @Override
-        public String convert(String value) {
-          // Will check for this later
-          return prefix;
-        }
-      };
-
-      String prefix;
-
-      private KeyType(String prefix) {
-        this.prefix = prefix;
-      }
-
-      public boolean matches(String value) {
-        return value.startsWith(prefix);
-      }
-
-      public String convert(String value) {
-        return process(value.substring(prefix.length()));
-      }
-
-      String process(String value) {
-        return value;
-      }
-    }
-
-    @Override
-    public String convert(String value) {
-      for (KeyType keyType : KeyType.values()) {
-        if (keyType.matches(value)) {
-          return keyType.convert(value);
-        }
-      }
-
-      return value;
-    }
-  }
-
   // Note: Don't use "password = true" because then it will prompt even if we have a token
   @Parameter(names = {"-p", "--password"},
-      description = "password (can be specified as 'pass:<password>',"
+      description = "password (can be specified as '<password>', 'pass:<password>',"
           + " 'file:<local file containing the password>', 'env:<variable containing"
           + " the pass>', or stdin)",
-      converter = PasswordConverter.class)
+      converter = ClientOpts.PasswordConverter.class)
   private String password;
 
   @DynamicParameter(names = {"-l"},
diff --git a/shell/src/main/java/org/apache/accumulo/shell/ShellUtil.java b/shell/src/main/java/org/apache/accumulo/shell/ShellUtil.java
index 6a1e0ee..cdc6681 100644
--- a/shell/src/main/java/org/apache/accumulo/shell/ShellUtil.java
+++ b/shell/src/main/java/org/apache/accumulo/shell/ShellUtil.java
@@ -19,7 +19,7 @@
 import static java.nio.charset.StandardCharsets.UTF_8;
 
 import java.io.File;
-import java.io.FileNotFoundException;
+import java.io.IOException;
 import java.util.ArrayList;
 import java.util.Base64;
 import java.util.Collections;
@@ -32,7 +32,6 @@
 import org.apache.hadoop.io.Text;
 
 import com.google.common.collect.ImmutableMap;
-import com.google.common.collect.ImmutableMap.Builder;
 
 import edu.umd.cs.findbugs.annotations.SuppressFBWarnings;
 
@@ -48,15 +47,13 @@
    * @param decode
    *          Whether to decode lines in the file
    * @return List of {@link Text} objects containing data in the given file
-   * @throws FileNotFoundException
-   *           if the given file doesn't exist
    */
   @SuppressFBWarnings(value = "PATH_TRAVERSAL_IN",
       justification = "app is run in same security context as user providing the filename")
-  public static List<Text> scanFile(String filename, boolean decode) throws FileNotFoundException {
+  public static List<Text> scanFile(String filename, boolean decode) throws IOException {
     String line;
     List<Text> result = new ArrayList<>();
-    try (Scanner file = new Scanner(new File(filename), UTF_8.name())) {
+    try (Scanner file = new Scanner(new File(filename), UTF_8)) {
       while (file.hasNextLine()) {
         line = file.nextLine();
         if (!line.isEmpty()) {
@@ -69,7 +66,7 @@
 
   public static Map<String,String> parseMapOpt(CommandLine cl, Option opt) {
     if (cl.hasOption(opt.getLongOpt())) {
-      Builder<String,String> builder = ImmutableMap.builder();
+      var builder = ImmutableMap.<String,String>builder();
       String[] keyVals = cl.getOptionValue(opt.getLongOpt()).split(",");
       for (String keyVal : keyVals) {
         String[] sa = keyVal.split("=");
diff --git a/shell/src/main/java/org/apache/accumulo/shell/commands/ExecfileCommand.java b/shell/src/main/java/org/apache/accumulo/shell/commands/ExecfileCommand.java
index d1df7e6..052ea247 100644
--- a/shell/src/main/java/org/apache/accumulo/shell/commands/ExecfileCommand.java
+++ b/shell/src/main/java/org/apache/accumulo/shell/commands/ExecfileCommand.java
@@ -42,7 +42,7 @@
   @Override
   public int execute(final String fullCommand, final CommandLine cl, final Shell shellState)
       throws Exception {
-    try (Scanner scanner = new Scanner(new File(cl.getArgs()[0]), UTF_8.name())) {
+    try (Scanner scanner = new Scanner(new File(cl.getArgs()[0]), UTF_8)) {
       while (scanner.hasNextLine()) {
         shellState.execCommand(scanner.nextLine(), true, cl.hasOption(verboseOption.getOpt()));
       }
diff --git a/shell/src/main/java/org/apache/accumulo/shell/commands/FateCommand.java b/shell/src/main/java/org/apache/accumulo/shell/commands/FateCommand.java
index 35eddab..200e7d1 100644
--- a/shell/src/main/java/org/apache/accumulo/shell/commands/FateCommand.java
+++ b/shell/src/main/java/org/apache/accumulo/shell/commands/FateCommand.java
@@ -122,7 +122,7 @@
   public int execute(final String fullCommand, final CommandLine cl, final Shell shellState)
       throws ParseException, KeeperException, InterruptedException, IOException {
     ClientContext context = shellState.getContext();
-    SiteConfiguration siteConfig = new SiteConfiguration();
+    var siteConfig = SiteConfiguration.auto();
     String[] args = cl.getArgs();
     if (args.length <= 0) {
       throw new ParseException("Must provide a command to execute");
@@ -183,11 +183,11 @@
       if (cl.hasOption(statusOption.getOpt())) {
         filterStatus = EnumSet.noneOf(TStatus.class);
         String[] tstat = cl.getOptionValues(statusOption.getOpt());
-        for (int i = 0; i < tstat.length; i++) {
+        for (String element : tstat) {
           try {
-            filterStatus.add(TStatus.valueOf(tstat[i]));
+            filterStatus.add(TStatus.valueOf(element));
           } catch (IllegalArgumentException iae) {
-            System.out.printf("Invalid transaction status name: %s%n", tstat[i]);
+            System.out.printf("Invalid transaction status name: %s%n", element);
             return 1;
           }
         }
diff --git a/shell/src/main/java/org/apache/accumulo/shell/commands/GetAuthsCommand.java b/shell/src/main/java/org/apache/accumulo/shell/commands/GetAuthsCommand.java
index 6b64a06..6755006 100644
--- a/shell/src/main/java/org/apache/accumulo/shell/commands/GetAuthsCommand.java
+++ b/shell/src/main/java/org/apache/accumulo/shell/commands/GetAuthsCommand.java
@@ -30,7 +30,6 @@
 import org.apache.commons.cli.CommandLine;
 import org.apache.commons.cli.Option;
 import org.apache.commons.cli.Options;
-import org.apache.commons.lang3.StringUtils;
 
 public class GetAuthsCommand extends Command {
   private Option userOpt;
@@ -44,7 +43,7 @@
     Authorizations auths =
         shellState.getAccumuloClient().securityOperations().getUserAuthorizations(user);
     List<String> set = sortAuthorizations(auths);
-    shellState.getReader().println(StringUtils.join(set, ','));
+    shellState.getReader().println(String.join(",", set));
     return 0;
   }
 
diff --git a/shell/src/main/java/org/apache/accumulo/shell/commands/ScanCommand.java b/shell/src/main/java/org/apache/accumulo/shell/commands/ScanCommand.java
index 899a888..273b188 100644
--- a/shell/src/main/java/org/apache/accumulo/shell/commands/ScanCommand.java
+++ b/shell/src/main/java/org/apache/accumulo/shell/commands/ScanCommand.java
@@ -240,7 +240,7 @@
     if (clazz == null)
       clazz = DefaultScanInterpreter.class;
 
-    return clazz.newInstance();
+    return clazz.getDeclaredConstructor().newInstance();
   }
 
   protected Class<? extends Formatter> getFormatter(final CommandLine cl, final String tableName,
diff --git a/shell/src/main/java/org/apache/accumulo/shell/commands/SetIterCommand.java b/shell/src/main/java/org/apache/accumulo/shell/commands/SetIterCommand.java
index 0509b25..ba3f1a1 100644
--- a/shell/src/main/java/org/apache/accumulo/shell/commands/SetIterCommand.java
+++ b/shell/src/main/java/org/apache/accumulo/shell/commands/SetIterCommand.java
@@ -45,7 +45,6 @@
 import org.apache.commons.cli.Option;
 import org.apache.commons.cli.OptionGroup;
 import org.apache.commons.cli.Options;
-import org.apache.commons.lang3.StringUtils;
 
 import jline.console.ConsoleReader;
 
@@ -85,15 +84,14 @@
     // we are setting a shell iterator). If so, temporarily set the table state to an
     // existing table such as accumulo.metadata. This allows the command to complete successfully.
     // After completion reassign the table to its original value and continue.
-    String currentTableName = null;
+    String currentTableName = shellState.getTableName();
     String tmpTable = null;
     String configuredName;
     try {
-      if (profileOpt != null && StringUtils.isBlank(shellState.getTableName())) {
-        currentTableName = shellState.getTableName();
+      if (profileOpt != null && (currentTableName == null || currentTableName.isBlank())) {
         tmpTable = "accumulo.metadata";
         shellState.setTableName(tmpTable);
-        tables = cl.hasOption(OptUtil.tableOpt().getOpt()) || !shellState.getTableName().isEmpty();
+        tables = cl.hasOption(OptUtil.tableOpt().getOpt()) || !currentTableName.isEmpty();
       }
       ClassLoader classloader = shellState.getClassLoader(cl, shellState);
       // Get the iterator options, with potentially a name provided by the OptionDescriber impl or
@@ -205,8 +203,8 @@
     Class<? extends SortedKeyValueIterator> clazz;
     try {
       clazz = classloader.loadClass(className).asSubclass(SortedKeyValueIterator.class);
-      untypedInstance = clazz.newInstance();
-    } catch (ClassNotFoundException e) {
+      untypedInstance = clazz.getDeclaredConstructor().newInstance();
+    } catch (ReflectiveOperationException e) {
       StringBuilder msg = new StringBuilder("Unable to load ").append(className);
       if (className.indexOf('.') < 0) {
         msg.append("; did you use a fully qualified package name?");
@@ -214,8 +212,6 @@
         msg.append("; class not found.");
       }
       throw new ShellCommandException(ErrorCode.INITIALIZATION_FAILURE, msg.toString());
-    } catch (InstantiationException | IllegalAccessException e) {
-      throw new IllegalArgumentException(e.getMessage());
     } catch (ClassCastException e) {
       String msg = className + " loaded successfully but does not implement SortedKeyValueIterator."
           + " This class cannot be used with this command.";
@@ -309,7 +305,7 @@
       if (iteratorName == null) {
         reader.println();
         throw new IOException("Input stream closed");
-      } else if (StringUtils.isWhitespace(iteratorName)) {
+      } else if (iteratorName.isBlank()) {
         // Treat whitespace or empty string as no name provided
         iteratorName = null;
       }
@@ -325,7 +321,7 @@
         if (input == null) {
           reader.println();
           throw new IOException("Input stream closed");
-        } else if (StringUtils.isWhitespace(input)) {
+        } else if (input.isBlank()) {
           break;
         }
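
The `isBlank()` changes here trade commons-lang3 `StringUtils.isBlank`/`isWhitespace` for Java 11's `String.isBlank()`. The migration is not a pure find-and-replace: `StringUtils.isBlank(null)` returns true, while invoking `isBlank()` on a null reference throws `NullPointerException`, which is why the hunk above adds an explicit `currentTableName == null ||` guard. A minimal sketch of a null-tolerant wrapper, assuming Java 11+:

```java
public class BlankDemo {
    // Null-tolerant replacement for StringUtils.isBlank on JDK 11+
    static boolean isBlank(String s) {
        return s == null || s.isBlank();
    }

    public static void main(String[] args) {
        System.out.println(isBlank(null));  // true, matching StringUtils.isBlank(null)
        System.out.println(isBlank(" \t")); // true
        System.out.println(isBlank("foo")); // false
    }
}
```
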
 
diff --git a/shell/src/test/java/org/apache/accumulo/shell/ShellUtilTest.java b/shell/src/test/java/org/apache/accumulo/shell/ShellUtilTest.java
index fc5d089..9594d6b 100644
--- a/shell/src/test/java/org/apache/accumulo/shell/ShellUtilTest.java
+++ b/shell/src/test/java/org/apache/accumulo/shell/ShellUtilTest.java
@@ -31,8 +31,6 @@
 import org.junit.Test;
 import org.junit.rules.TemporaryFolder;
 
-import com.google.common.collect.ImmutableList;
-
 import edu.umd.cs.findbugs.annotations.SuppressFBWarnings;
 
 @SuppressFBWarnings(value = "PATH_TRAVERSAL_IN", justification = "paths not set by user input")
@@ -53,7 +51,7 @@
     File testFile = new File(folder.getRoot(), "testFileNoDecode.txt");
     FileUtils.writeStringToFile(testFile, FILEDATA, UTF_8);
     List<Text> output = ShellUtil.scanFile(testFile.getAbsolutePath(), false);
-    assertEquals(ImmutableList.of(new Text("line1"), new Text("line2")), output);
+    assertEquals(List.of(new Text("line1"), new Text("line2")), output);
   }
 
   @Test
@@ -61,11 +59,11 @@
     File testFile = new File(folder.getRoot(), "testFileWithDecode.txt");
     FileUtils.writeStringToFile(testFile, B64_FILEDATA, UTF_8);
     List<Text> output = ShellUtil.scanFile(testFile.getAbsolutePath(), true);
-    assertEquals(ImmutableList.of(new Text("line1"), new Text("line2")), output);
+    assertEquals(List.of(new Text("line1"), new Text("line2")), output);
   }
 
   @Test(expected = FileNotFoundException.class)
-  public void testWithMissingFile() throws FileNotFoundException {
+  public void testWithMissingFile() throws IOException {
     ShellUtil.scanFile("missingFile.txt", false);
   }
 }
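
`ImmutableList.of` gives way to `List.of` here, part of the commit-wide move from Guava to the Java 9 collection factories (`List.of`, `Set.of`, `Map.of`, and `Set.copyOf` elsewhere in this diff). They are mostly drop-in, with two caveats worth flagging: the factories reject null elements, and `Set.of`/`Map.of` throw `IllegalArgumentException` on duplicate elements or keys, where Guava's `ImmutableSet.of` silently de-duplicates. A minimal sketch:

```java
import java.util.List;
import java.util.Map;
import java.util.Set;

public class FactoryDemo {
    public static void main(String[] args) {
        // Immutable, like ImmutableList.of / ImmutableMap.of
        List<String> lines = List.of("line1", "line2");
        Map<String, String> opts = Map.of("hasher", "murmur3_32", "modulus", "1009");
        System.out.println(lines + " " + opts);
        // Set.of("a", "a") would throw IllegalArgumentException;
        // ImmutableSet.of("a", "a") silently returns a one-element set.
        System.out.println(Set.of("a", "b"));
    }
}
```
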
diff --git a/shell/src/test/java/org/apache/accumulo/shell/commands/SetIterCommandTest.java b/shell/src/test/java/org/apache/accumulo/shell/commands/SetIterCommandTest.java
index f120af2..f3debec 100644
--- a/shell/src/test/java/org/apache/accumulo/shell/commands/SetIterCommandTest.java
+++ b/shell/src/test/java/org/apache/accumulo/shell/commands/SetIterCommandTest.java
@@ -117,6 +117,8 @@
         EasyMock.eq(EnumSet.allOf(IteratorScope.class)));
     EasyMock.expectLastCall().once();
 
+    EasyMock.expect(shellState.getTableName()).andReturn("foo").anyTimes();
+
     EasyMock.replay(client, cli, shellState, reader, tableOperations);
 
     cmd.execute(
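
The extra expectation is needed because `SetIterCommand.execute` now reads `shellState.getTableName()` unconditionally (see the `currentTableName` change earlier in this diff); an unstubbed call on an EasyMock mock fails the test. A minimal sketch of the record-replay-verify cycle, using a hypothetical `TableSource` interface rather than the real shell mocks:

```java
import org.easymock.EasyMock;

public class MockDemo {
    interface TableSource {
        String getTableName();
    }

    public static void main(String[] args) {
        TableSource state = EasyMock.createMock(TableSource.class);
        // Record phase: allow getTableName() any number of times, returning "foo".
        EasyMock.expect(state.getTableName()).andReturn("foo").anyTimes();
        EasyMock.replay(state);
        System.out.println(state.getTableName()); // foo
        EasyMock.verify(state);
    }
}
```
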
diff --git a/start/pom.xml b/start/pom.xml
index cc574ff..922e66c 100644
--- a/start/pom.xml
+++ b/start/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.accumulo</groupId>
     <artifactId>accumulo-project</artifactId>
-    <version>2.0.1-SNAPSHOT</version>
+    <version>2.1.0-SNAPSHOT</version>
   </parent>
   <artifactId>accumulo-start</artifactId>
   <name>Apache Accumulo Start</name>
diff --git a/start/src/main/java/org/apache/accumulo/start/Main.java b/start/src/main/java/org/apache/accumulo/start/Main.java
index 373e5a3..a006fe7 100644
--- a/start/src/main/java/org/apache/accumulo/start/Main.java
+++ b/start/src/main/java/org/apache/accumulo/start/Main.java
@@ -57,7 +57,7 @@
       }
       Object conf = null;
       try {
-        conf = confClass.newInstance();
+        conf = confClass.getDeclaredConstructor().newInstance();
       } catch (Exception e) {
         log.error("Error creating new instance of Hadoop Configuration", e);
         System.exit(1);
@@ -103,8 +103,7 @@
       try {
         classLoader = (ClassLoader) getVFSClassLoader().getMethod("getClassLoader").invoke(null);
         Thread.currentThread().setContextClassLoader(classLoader);
-      } catch (ClassNotFoundException | IOException | IllegalAccessException
-          | IllegalArgumentException | InvocationTargetException | NoSuchMethodException
+      } catch (IOException | IllegalArgumentException | ReflectiveOperationException
           | SecurityException e) {
         log.error("Problem initializing the class loader", e);
         System.exit(1);
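
The slimmer catch clause works because `ReflectiveOperationException` (Java 7) is the shared parent of `ClassNotFoundException`, `NoSuchMethodException`, `IllegalAccessException`, `InstantiationException`, and `InvocationTargetException`, so the former five-way multi-catch collapses to one type; `IllegalArgumentException` and `SecurityException` are unchecked and sit outside that hierarchy, so they stay listed. A minimal sketch:

```java
public class CatchDemo {
    public static void main(String[] args) {
        try {
            Object o = Class.forName("java.lang.StringBuilder")
                .getDeclaredConstructor().newInstance();
            System.out.println(o.getClass().getSimpleName());
        } catch (ReflectiveOperationException | SecurityException e) {
            // One handler now covers class-not-found, no-such-method,
            // instantiation, illegal-access, and invocation-target failures.
            e.printStackTrace();
        }
    }
}
```
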
diff --git a/start/src/main/java/org/apache/accumulo/test/categories/PerformanceTests.java b/start/src/main/java/org/apache/accumulo/test/categories/PerformanceTests.java
deleted file mode 100644
index 5a0bd82..0000000
--- a/start/src/main/java/org/apache/accumulo/test/categories/PerformanceTests.java
+++ /dev/null
@@ -1,27 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.test.categories;
-
-/**
- * Annotate integration tests which test performance-related aspects of Accumulo or are sensitive to
- * timings and hardware capabilities.
- * <p>
- * Intended to be used with the JUnit Category annotation on integration test classes. The Category
- * annotation should be placed at the class-level. Test class names should still be suffixed with
- * 'IT' as the rest of the integration tests.
- */
-public interface PerformanceTests {}
diff --git a/start/src/test/java/org/apache/accumulo/start/classloader/vfs/AccumuloReloadingVFSClassLoaderTest.java b/start/src/test/java/org/apache/accumulo/start/classloader/vfs/AccumuloReloadingVFSClassLoaderTest.java
index d0c9f33..6757f11 100644
--- a/start/src/test/java/org/apache/accumulo/start/classloader/vfs/AccumuloReloadingVFSClassLoaderTest.java
+++ b/start/src/test/java/org/apache/accumulo/start/classloader/vfs/AccumuloReloadingVFSClassLoaderTest.java
@@ -99,7 +99,7 @@
     arvcl.setMaxRetries(1);
 
     Class<?> clazz1 = arvcl.getClassLoader().loadClass("test.HelloWorld");
-    Object o1 = clazz1.newInstance();
+    Object o1 = clazz1.getDeclaredConstructor().newInstance();
     assertEquals("Hello World!", o1.toString());
 
     // Check that the class is the same before the update
@@ -120,7 +120,7 @@
     Thread.sleep(7000);
 
     Class<?> clazz2 = arvcl.getClassLoader().loadClass("test.HelloWorld");
-    Object o2 = clazz2.newInstance();
+    Object o2 = clazz2.getDeclaredConstructor().newInstance();
     assertEquals("Hello World!", o2.toString());
 
     // This is false because they are loaded by a different classloader
@@ -145,7 +145,7 @@
     arvcl.setMaxRetries(3);
 
     Class<?> clazz1 = arvcl.getClassLoader().loadClass("test.HelloWorld");
-    Object o1 = clazz1.newInstance();
+    Object o1 = clazz1.getDeclaredConstructor().newInstance();
     assertEquals("Hello World!", o1.toString());
 
     // Check that the class is the same before the update
@@ -166,7 +166,7 @@
     Thread.sleep(7000);
 
     Class<?> clazz2 = arvcl.getClassLoader().loadClass("test.HelloWorld");
-    Object o2 = clazz2.newInstance();
+    Object o2 = clazz2.getDeclaredConstructor().newInstance();
     assertEquals("Hello World!", o2.toString());
 
     // This is true because they are loaded by the same classloader due to the new retry
diff --git a/start/src/test/java/org/apache/accumulo/start/classloader/vfs/AccumuloVFSClassLoaderTest.java b/start/src/test/java/org/apache/accumulo/start/classloader/vfs/AccumuloVFSClassLoaderTest.java
index 384605e..6f07eec 100644
--- a/start/src/test/java/org/apache/accumulo/start/classloader/vfs/AccumuloVFSClassLoaderTest.java
+++ b/start/src/test/java/org/apache/accumulo/start/classloader/vfs/AccumuloVFSClassLoaderTest.java
@@ -142,7 +142,7 @@
     // We can't be sure what the authority/host will be due to FQDN mappings, so just check the path
     assertTrue(arvcl.getFileObjects()[0].getURL().toString().contains("HelloWorld.jar"));
     Class<?> clazz1 = arvcl.loadClass("test.HelloWorld");
-    Object o1 = clazz1.newInstance();
+    Object o1 = clazz1.getDeclaredConstructor().newInstance();
     assertEquals("Hello World!", o1.toString());
     Whitebox.setInternalState(AccumuloVFSClassLoader.class, "loader",
         (AccumuloReloadingVFSClassLoader) null);
diff --git a/start/src/test/java/org/apache/accumulo/start/classloader/vfs/ContextManagerTest.java b/start/src/test/java/org/apache/accumulo/start/classloader/vfs/ContextManagerTest.java
index 913ab43..2e9a209 100644
--- a/start/src/test/java/org/apache/accumulo/start/classloader/vfs/ContextManagerTest.java
+++ b/start/src/test/java/org/apache/accumulo/start/classloader/vfs/ContextManagerTest.java
@@ -114,11 +114,11 @@
     assertArrayEquals(createFileSystems(dirContents2), files2);
 
     Class<?> defaultContextClass = cl1.loadClass("test.HelloWorld");
-    Object o1 = defaultContextClass.newInstance();
+    Object o1 = defaultContextClass.getDeclaredConstructor().newInstance();
     assertEquals("Hello World!", o1.toString());
 
     Class<?> myContextClass = cl2.loadClass("test.HelloWorld");
-    Object o2 = myContextClass.newInstance();
+    Object o2 = myContextClass.getDeclaredConstructor().newInstance();
     assertEquals("Hello World!", o2.toString());
 
     assertNotEquals(defaultContextClass, myContextClass);
diff --git a/start/src/test/java/org/apache/accumulo/start/classloader/vfs/providers/VfsClassLoaderTest.java b/start/src/test/java/org/apache/accumulo/start/classloader/vfs/providers/VfsClassLoaderTest.java
index 2a66c49..2429ef5 100644
--- a/start/src/test/java/org/apache/accumulo/start/classloader/vfs/providers/VfsClassLoaderTest.java
+++ b/start/src/test/java/org/apache/accumulo/start/classloader/vfs/providers/VfsClassLoaderTest.java
@@ -62,7 +62,7 @@
   @Test
   public void testGetClass() throws Exception {
     Class<?> helloWorldClass = this.cl.loadClass("test.HelloWorld");
-    Object o = helloWorldClass.newInstance();
+    Object o = helloWorldClass.getDeclaredConstructor().newInstance();
     assertEquals("Hello World!", o.toString());
   }
 
diff --git a/test/pom.xml b/test/pom.xml
index a82a4df..4877a4c 100644
--- a/test/pom.xml
+++ b/test/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.accumulo</groupId>
     <artifactId>accumulo-project</artifactId>
-    <version>2.0.1-SNAPSHOT</version>
+    <version>2.1.0-SNAPSHOT</version>
   </parent>
   <artifactId>accumulo-test</artifactId>
   <name>Apache Accumulo Testing</name>
diff --git a/test/src/main/java/org/apache/accumulo/test/AuditMessageIT.java b/test/src/main/java/org/apache/accumulo/test/AuditMessageIT.java
index c2a9b08..6612e06 100644
--- a/test/src/main/java/org/apache/accumulo/test/AuditMessageIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/AuditMessageIT.java
@@ -120,7 +120,7 @@
     for (File file : files) {
       // We want to grab the files called .out
       if (file.getName().contains(".out") && file.isFile() && file.canRead()) {
-        try (java.util.Scanner it = new java.util.Scanner(file, UTF_8.name())) {
+        try (java.util.Scanner it = new java.util.Scanner(file, UTF_8)) {
           while (it.hasNext()) {
             String line = it.nextLine();
             // strip off prefix, because log4j.properties does
@@ -326,7 +326,7 @@
     // Just grab the first rf file, it will do for now.
     String filePrefix = "file:";
 
-    try (java.util.Scanner it = new java.util.Scanner(distCpTxt, UTF_8.name())) {
+    try (java.util.Scanner it = new java.util.Scanner(distCpTxt, UTF_8)) {
       while (it.hasNext() && importFile == null) {
         String line = it.nextLine();
         if (line.matches(".*\\.rf")) {
diff --git a/test/src/main/java/org/apache/accumulo/test/BalanceFasterIT.java b/test/src/main/java/org/apache/accumulo/test/BalanceFasterIT.java
deleted file mode 100644
index 339b7d2..0000000
--- a/test/src/main/java/org/apache/accumulo/test/BalanceFasterIT.java
+++ /dev/null
@@ -1,113 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.test;
-
-import static org.apache.accumulo.fate.util.UtilWaitThread.sleepUninterruptibly;
-import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertTrue;
-import static org.junit.Assume.assumeFalse;
-
-import java.util.HashMap;
-import java.util.Iterator;
-import java.util.Map;
-import java.util.Map.Entry;
-import java.util.SortedSet;
-import java.util.TreeSet;
-import java.util.concurrent.TimeUnit;
-
-import org.apache.accumulo.core.client.Accumulo;
-import org.apache.accumulo.core.client.AccumuloClient;
-import org.apache.accumulo.core.client.Scanner;
-import org.apache.accumulo.core.data.Key;
-import org.apache.accumulo.core.data.Value;
-import org.apache.accumulo.core.metadata.MetadataTable;
-import org.apache.accumulo.core.metadata.schema.MetadataSchema;
-import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.miniclusterImpl.MiniAccumuloConfigImpl;
-import org.apache.accumulo.test.categories.MiniClusterOnlyTests;
-import org.apache.accumulo.test.categories.PerformanceTests;
-import org.apache.accumulo.test.functional.ConfigurableMacBase;
-import org.apache.accumulo.test.mrit.IntegrationTestMapReduce;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.io.Text;
-import org.junit.BeforeClass;
-import org.junit.Test;
-import org.junit.experimental.categories.Category;
-
-// ACCUMULO-2952
-@Category({MiniClusterOnlyTests.class, PerformanceTests.class})
-public class BalanceFasterIT extends ConfigurableMacBase {
-
-  @Override
-  public void configure(MiniAccumuloConfigImpl cfg, Configuration hadoopCoreSite) {
-    cfg.setNumTservers(3);
-  }
-
-  @BeforeClass
-  public static void checkMR() {
-    assumeFalse(IntegrationTestMapReduce.isMapReduce());
-  }
-
-  @Test(timeout = 90 * 1000)
-  public void test() throws Exception {
-    // create a table, add a bunch of splits
-    String tableName = getUniqueNames(1)[0];
-    try (AccumuloClient client = Accumulo.newClient().from(getClientProperties()).build()) {
-      client.tableOperations().create(tableName);
-      SortedSet<Text> splits = new TreeSet<>();
-      for (int i = 0; i < 1000; i++) {
-        splits.add(new Text("" + i));
-      }
-      client.tableOperations().addSplits(tableName, splits);
-      // give a short wait for balancing
-      sleepUninterruptibly(10, TimeUnit.SECONDS);
-      // find out where the tablets are
-      Iterator<Integer> i;
-      try (Scanner s = client.createScanner(MetadataTable.NAME, Authorizations.EMPTY)) {
-        s.fetchColumnFamily(MetadataSchema.TabletsSection.CurrentLocationColumnFamily.NAME);
-        s.setRange(MetadataSchema.TabletsSection.getRange());
-        Map<String,Integer> counts = new HashMap<>();
-        while (true) {
-          int total = 0;
-          counts.clear();
-          for (Entry<Key,Value> kv : s) {
-            String host = kv.getValue().toString();
-            if (!counts.containsKey(host))
-              counts.put(host, 0);
-            counts.put(host, counts.get(host) + 1);
-            total++;
-          }
-          // are enough tablets online?
-          if (total > 1000)
-            break;
-        }
-
-        // should be on all three servers
-        assertEquals(3, counts.size());
-        // and distributed evenly
-        i = counts.values().iterator();
-      }
-
-      int a = i.next();
-      int b = i.next();
-      int c = i.next();
-      assertTrue(Math.abs(a - b) < 3);
-      assertTrue(Math.abs(a - c) < 3);
-      assertTrue(a > 330);
-    }
-  }
-}
diff --git a/test/src/main/java/org/apache/accumulo/test/BatchWriterIterator.java b/test/src/main/java/org/apache/accumulo/test/BatchWriterIterator.java
index bf5ab31..53916d5 100644
--- a/test/src/main/java/org/apache/accumulo/test/BatchWriterIterator.java
+++ b/test/src/main/java/org/apache/accumulo/test/BatchWriterIterator.java
@@ -29,7 +29,6 @@
 import org.apache.accumulo.core.client.BatchWriter;
 import org.apache.accumulo.core.client.BatchWriterConfig;
 import org.apache.accumulo.core.client.IteratorSetting;
-import org.apache.accumulo.core.client.MutationsRejectedException;
 import org.apache.accumulo.core.client.TableNotFoundException;
 import org.apache.accumulo.core.client.security.tokens.AuthenticationToken;
 import org.apache.accumulo.core.clientImpl.ClientInfo;
@@ -42,6 +41,7 @@
 import org.apache.accumulo.core.iterators.IteratorEnvironment;
 import org.apache.accumulo.core.iterators.SortedKeyValueIterator;
 import org.apache.accumulo.core.iterators.WrappingIterator;
+import org.apache.accumulo.core.util.cleaner.CleanerUtil;
 import org.apache.accumulo.test.util.SerializationUtil;
 import org.apache.hadoop.io.Text;
 import org.slf4j.Logger;
@@ -61,6 +61,26 @@
 public class BatchWriterIterator extends WrappingIterator {
   private static final Logger log = LoggerFactory.getLogger(BatchWriterIterator.class);
 
+  private static final String OPT_sleepAfterFirstWrite = "sleepAfterFirstWrite";
+  private static final String OPT_numEntriesToWritePerEntry = "numEntriesToWritePerEntry";
+  private static final String OPT_batchWriterTimeout = "batchWriterTimeout";
+  private static final String OPT_batchWriterMaxMemory = "batchWriterMaxMemory";
+  private static final String OPT_clearCacheAfterFirstWrite = "clearCacheAfterFirstWrite";
+  private static final String OPT_splitAfterFirstWrite = "splitAfterFirstWrite";
+
+  private static final String ZOOKEEPERHOST = "zookeeperHost";
+  private static final String INSTANCENAME = "instanceName";
+  private static final String TABLENAME = "tableName";
+  private static final String USERNAME = "username";
+  private static final String ZOOKEEPERTIMEOUT = "zookeeperTimeout";
+  // base64 encoding of token
+  private static final String AUTHENTICATION_TOKEN = "authenticationToken";
+  // class of token
+  private static final String AUTHENTICATION_TOKEN_CLASS = "authenticationTokenClass";
+  private static final String SUCCESS_STRING = "success";
+
+  public static final Value SUCCESS_VALUE = new Value(SUCCESS_STRING.getBytes());
+
   private Map<String,String> originalOptions; // remembered for deepCopy
 
   private int sleepAfterFirstWrite = 0;
@@ -69,34 +89,17 @@
   private long batchWriterMaxMemory = 0;
   private boolean clearCacheAfterFirstWrite = false;
   private boolean splitAfterFirstWrite = false;
-
-  public static final String OPT_sleepAfterFirstWrite = "sleepAfterFirstWrite",
-      OPT_numEntriesToWritePerEntry = "numEntriesToWritePerEntry",
-      OPT_batchWriterTimeout = "batchWriterTimeout",
-      OPT_batchWriterMaxMemory = "batchWriterMaxMemory",
-      OPT_clearCacheAfterFirstWrite = "clearCacheAfterFirstWrite",
-      OPT_splitAfterFirstWrite = "splitAfterFirstWrite";
-
   private String instanceName;
   private String tableName;
   private String zookeeperHost;
   private int zookeeperTimeout = -1;
   private String username;
   private AuthenticationToken auth = null;
-
-  public static final String ZOOKEEPERHOST = "zookeeperHost", INSTANCENAME = "instanceName",
-      TABLENAME = "tableName", USERNAME = "username", ZOOKEEPERTIMEOUT = "zookeeperTimeout",
-      AUTHENTICATION_TOKEN = "authenticationToken", // base64 encoding of token
-      AUTHENTICATION_TOKEN_CLASS = "authenticationTokenClass"; // class of token
-
   private BatchWriter batchWriter;
   private boolean firstWrite = true;
   private Value topValue = null;
   private AccumuloClient accumuloClient;
 
-  public static final String SUCCESS_STRING = "success";
-  public static final Value SUCCESS_VALUE = new Value(SUCCESS_STRING.getBytes());
-
   public static IteratorSetting iteratorSetting(int priority, int sleepAfterFirstWrite,
       long batchWriterTimeout, long batchWriterMaxMemory, int numEntriesToWrite, String tableName,
       AccumuloClient accumuloClient, AuthenticationToken token, boolean clearCacheAfterFirstWrite,
@@ -167,13 +170,8 @@
   }
 
   private void initBatchWriter() {
-    try {
-      accumuloClient = Accumulo.newClient().to(instanceName, zookeeperHost).as(username, auth)
-          .zkTimeout(zookeeperTimeout).build();
-    } catch (Exception e) {
-      log.error("failed to connect to Accumulo instance " + instanceName, e);
-      throw new RuntimeException(e);
-    }
+    accumuloClient = Accumulo.newClient().to(instanceName, zookeeperHost).as(username, auth)
+        .zkTimeout(zookeeperTimeout).build();
 
     BatchWriterConfig bwc = new BatchWriterConfig();
     bwc.setMaxMemory(batchWriterMaxMemory);
@@ -183,8 +181,14 @@
       batchWriter = accumuloClient.createBatchWriter(tableName, bwc);
     } catch (TableNotFoundException e) {
       log.error(tableName + " does not exist in instance " + instanceName, e);
+      accumuloClient.close();
       throw new RuntimeException(e);
+    } catch (RuntimeException e) {
+      accumuloClient.close();
+      throw e;
     }
+    // this is dubious, but necessary since iterators aren't closeable
+    CleanerUtil.batchWriterAndClientCloser(this, log, batchWriter, accumuloClient);
   }
 
   /**
@@ -229,18 +233,6 @@
   }
 
   @Override
-  protected void finalize() throws Throwable {
-    super.finalize();
-    try {
-      batchWriter.close();
-    } catch (MutationsRejectedException e) {
-      log.error("Failed to close BatchWriter; some mutations may not be applied", e);
-    } finally {
-      accumuloClient.close();
-    }
-  }
-
-  @Override
   public void next() throws IOException {
     super.next();
     if (hasTop())
@@ -264,7 +256,7 @@
   public SortedKeyValueIterator<Key,Value> deepCopy(IteratorEnvironment env) {
     BatchWriterIterator newInstance;
     try {
-      newInstance = this.getClass().newInstance();
+      newInstance = this.getClass().getDeclaredConstructor().newInstance();
       newInstance.init(getSource().deepCopy(env), originalOptions, env);
       return newInstance;
     } catch (Exception e) {
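
The deleted `finalize()` override is replaced by a `CleanerUtil` registration, following the `java.lang.ref.Cleaner` pattern that supersedes finalization (deprecated since Java 9). The essential rule is that the cleaning action must not capture a reference to the tracked object, or the object can never become phantom-reachable. A minimal sketch of the pattern; the `CloseTask` holder is illustrative and not Accumulo's `CleanerUtil`:

```java
import java.lang.ref.Cleaner;

public class CleanerDemo {
    private static final Cleaner CLEANER = Cleaner.create();

    // State for the cleaning action: holds the resource, never the owner object.
    static final class CloseTask implements Runnable {
        private final AutoCloseable resource;

        CloseTask(AutoCloseable resource) {
            this.resource = resource;
        }

        @Override
        public void run() {
            try {
                resource.close();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }

    public static void main(String[] args) {
        Object owner = new Object();
        AutoCloseable resource = () -> System.out.println("closed");
        // Runs CloseTask when 'owner' becomes phantom-reachable, or on clean().
        Cleaner.Cleanable cleanable = CLEANER.register(owner, new CloseTask(resource));
        cleanable.clean(); // deterministic cleanup; prints "closed"
    }
}
```
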
diff --git a/test/src/main/java/org/apache/accumulo/test/GetMasterStats.java b/test/src/main/java/org/apache/accumulo/test/GetMasterStats.java
index fcdec02..56ca506 100644
--- a/test/src/main/java/org/apache/accumulo/test/GetMasterStats.java
+++ b/test/src/main/java/org/apache/accumulo/test/GetMasterStats.java
@@ -41,7 +41,7 @@
   public static void main(String[] args) throws Exception {
     MasterClientService.Iface client = null;
     MasterMonitorInfo stats = null;
-    ServerContext context = new ServerContext(new SiteConfiguration());
+    var context = new ServerContext(SiteConfiguration.auto());
     while (true) {
       try {
         client = MasterClient.getConnectionWithRetry(context);
@@ -51,8 +51,9 @@
         // Let it loop, fetching a new location
         sleepUninterruptibly(100, TimeUnit.MILLISECONDS);
       } finally {
-        if (client != null)
+        if (client != null) {
           MasterClient.close(client);
+        }
       }
     }
     out(0, "State: " + stats.state.name());
@@ -109,8 +110,9 @@
         out(2, "Time Difference: %.1f", ((now - server.lastContact) / 1000.));
         out(2, "Total Records: %d", summary.recs);
         out(2, "Lookups: %d", server.lookups);
-        if (server.holdTime > 0)
+        if (server.holdTime > 0) {
           out(2, "Hold Time: %d", server.holdTime);
+        }
         if (server.tableMap != null && server.tableMap.size() > 0) {
           out(2, "Tables");
           for (Entry<String,TableInfo> status : server.tableMap.entrySet()) {
diff --git a/test/src/main/java/org/apache/accumulo/test/HardListIterator.java b/test/src/main/java/org/apache/accumulo/test/HardListIterator.java
index ae0489e..f521165 100644
--- a/test/src/main/java/org/apache/accumulo/test/HardListIterator.java
+++ b/test/src/main/java/org/apache/accumulo/test/HardListIterator.java
@@ -74,7 +74,7 @@
   public SortedKeyValueIterator<Key,Value> deepCopy(IteratorEnvironment env) {
     HardListIterator newInstance;
     try {
-      newInstance = HardListIterator.class.newInstance();
+      newInstance = HardListIterator.class.getDeclaredConstructor().newInstance();
     } catch (Exception e) {
       throw new RuntimeException(e);
     }
diff --git a/test/src/main/java/org/apache/accumulo/test/InMemoryMapIT.java b/test/src/main/java/org/apache/accumulo/test/InMemoryMapIT.java
index bff1ffe..08b1d23 100644
--- a/test/src/main/java/org/apache/accumulo/test/InMemoryMapIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/InMemoryMapIT.java
@@ -59,8 +59,6 @@
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-import com.google.common.collect.ImmutableSet;
-
 /**
  * Integration Test for https://issues.apache.org/jira/browse/ACCUMULO-4148
  * <p>
@@ -272,8 +270,9 @@
         localityGroupMapWithNative.getMapType());
 
     int count = 0;
-    for (Mutation m : mutations)
+    for (Mutation m : mutations) {
       count += m.size();
+    }
     defaultMap.mutate(mutations, count);
     nativeMapWrapper.mutate(mutations, count);
     localityGroupMap.mutate(mutations, count);
@@ -368,7 +367,7 @@
     for (MemKey m : memKeys) {
       kvCounts.add(m.getKVCount());
     }
-    return ImmutableSet.copyOf(kvCounts).size();
+    return Set.copyOf(kvCounts).size();
   }
 
   private ConfigurationCopy updateConfigurationForLocalityGroups(ConfigurationCopy configuration) {
diff --git a/test/src/main/java/org/apache/accumulo/test/LocatorIT.java b/test/src/main/java/org/apache/accumulo/test/LocatorIT.java
index f721612..79c1655 100644
--- a/test/src/main/java/org/apache/accumulo/test/LocatorIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/LocatorIT.java
@@ -45,9 +45,6 @@
 import org.apache.hadoop.io.Text;
 import org.junit.Test;
 
-import com.google.common.collect.ImmutableMap;
-import com.google.common.collect.ImmutableSet;
-
 public class LocatorIT extends AccumuloClusterHarness {
 
   @Override
@@ -56,7 +53,7 @@
   }
 
   private void assertContains(Locations locations, HashSet<String> tservers,
-      Map<Range,ImmutableSet<TabletId>> expected1, Map<TabletId,ImmutableSet<Range>> expected2) {
+      Map<Range,Set<TabletId>> expected1, Map<TabletId,Set<Range>> expected2) {
 
     Map<Range,Set<TabletId>> gbr = new HashMap<>();
     for (Entry<Range,List<TabletId>> entry : locations.groupByRange().entrySet()) {
@@ -107,23 +104,20 @@
 
       ranges.add(r1);
       Locations ret = client.tableOperations().locate(tableName, ranges);
-      assertContains(ret, tservers, ImmutableMap.of(r1, ImmutableSet.of(t1)),
-          ImmutableMap.of(t1, ImmutableSet.of(r1)));
+      assertContains(ret, tservers, Map.of(r1, Set.of(t1)), Map.of(t1, Set.of(r1)));
 
       ranges.add(r2);
       ret = client.tableOperations().locate(tableName, ranges);
-      assertContains(ret, tservers,
-          ImmutableMap.of(r1, ImmutableSet.of(t1), r2, ImmutableSet.of(t1)),
-          ImmutableMap.of(t1, ImmutableSet.of(r1, r2)));
+      assertContains(ret, tservers, Map.of(r1, Set.of(t1), r2, Set.of(t1)),
+          Map.of(t1, Set.of(r1, r2)));
 
       TreeSet<Text> splits = new TreeSet<>();
       splits.add(new Text("r"));
       client.tableOperations().addSplits(tableName, splits);
 
       ret = client.tableOperations().locate(tableName, ranges);
-      assertContains(ret, tservers,
-          ImmutableMap.of(r1, ImmutableSet.of(t2), r2, ImmutableSet.of(t2, t3)),
-          ImmutableMap.of(t2, ImmutableSet.of(r1, r2), t3, ImmutableSet.of(r2)));
+      assertContains(ret, tservers, Map.of(r1, Set.of(t2), r2, Set.of(t2, t3)),
+          Map.of(t2, Set.of(r1, r2), t3, Set.of(r2)));
 
       client.tableOperations().offline(tableName, true);
 
diff --git a/test/src/main/java/org/apache/accumulo/test/ManySplitIT.java b/test/src/main/java/org/apache/accumulo/test/ManySplitIT.java
deleted file mode 100644
index 9b30c17..0000000
--- a/test/src/main/java/org/apache/accumulo/test/ManySplitIT.java
+++ /dev/null
@@ -1,113 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.test;
-
-import static java.nio.charset.StandardCharsets.UTF_8;
-import static org.junit.Assert.assertTrue;
-import static org.junit.Assume.assumeFalse;
-
-import java.util.SortedSet;
-import java.util.TreeSet;
-import java.util.concurrent.atomic.AtomicBoolean;
-
-import org.apache.accumulo.core.client.Accumulo;
-import org.apache.accumulo.core.client.AccumuloClient;
-import org.apache.accumulo.core.client.AccumuloException;
-import org.apache.accumulo.core.client.AccumuloSecurityException;
-import org.apache.accumulo.core.client.TableNotFoundException;
-import org.apache.accumulo.core.metadata.MetadataTable;
-import org.apache.accumulo.fate.util.UtilWaitThread;
-import org.apache.accumulo.minicluster.MemoryUnit;
-import org.apache.accumulo.minicluster.ServerType;
-import org.apache.accumulo.miniclusterImpl.MiniAccumuloConfigImpl;
-import org.apache.accumulo.test.categories.MiniClusterOnlyTests;
-import org.apache.accumulo.test.categories.PerformanceTests;
-import org.apache.accumulo.test.functional.ConfigurableMacBase;
-import org.apache.accumulo.test.mrit.IntegrationTestMapReduce;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.io.Text;
-import org.junit.BeforeClass;
-import org.junit.Test;
-import org.junit.experimental.categories.Category;
-
-@Category({MiniClusterOnlyTests.class, PerformanceTests.class})
-public class ManySplitIT extends ConfigurableMacBase {
-
-  final int SPLITS = 10_000;
-
-  @BeforeClass
-  public static void checkMR() {
-    assumeFalse(IntegrationTestMapReduce.isMapReduce());
-  }
-
-  @Test(timeout = 4 * 60 * 1000)
-  public void test() throws Exception {
-    assumeFalse(IntegrationTestMapReduce.isMapReduce());
-
-    final String tableName = getUniqueNames(1)[0];
-
-    try (AccumuloClient client = Accumulo.newClient().from(getClientProperties()).build()) {
-
-      log.info("Creating table");
-      log.info("splitting metadata table");
-      client.tableOperations().create(tableName);
-      SortedSet<Text> splits = new TreeSet<>();
-      for (byte b : "123456789abcde".getBytes(UTF_8)) {
-        splits.add(new Text(new byte[] {'1', ';', b}));
-      }
-      client.tableOperations().addSplits(MetadataTable.NAME, splits);
-      splits.clear();
-      for (int i = 0; i < SPLITS; i++) {
-        splits.add(new Text(Integer.toHexString(i)));
-      }
-      log.info("Adding splits");
-      // print out the number of splits so we have some idea of what's going on
-      final AtomicBoolean stop = new AtomicBoolean(false);
-      Thread t = new Thread() {
-        @Override
-        public void run() {
-          while (!stop.get()) {
-            UtilWaitThread.sleep(1000);
-            try {
-              log.info("splits: " + client.tableOperations().listSplits(tableName).size());
-            } catch (TableNotFoundException | AccumuloException | AccumuloSecurityException e) {
-              // TODO Auto-generated catch block
-              e.printStackTrace();
-            }
-          }
-        }
-      };
-      t.start();
-      long now = System.currentTimeMillis();
-      client.tableOperations().addSplits(tableName, splits);
-      long diff = System.currentTimeMillis() - now;
-      double splitsPerSec = SPLITS / (diff / 1000.);
-      log.info("Done: {} splits per second", splitsPerSec);
-      assertTrue("splits created too slowly", splitsPerSec > 100);
-      stop.set(true);
-      t.join();
-    }
-  }
-
-  @Override
-  protected void configure(MiniAccumuloConfigImpl cfg, Configuration hdfs) {
-    cfg.setNumTservers(1);
-    cfg.setMemory(ServerType.TABLET_SERVER, cfg.getMemory(ServerType.TABLET_SERVER) * 2,
-        MemoryUnit.BYTE);
-  }
-
-}
diff --git a/test/src/main/java/org/apache/accumulo/test/NewTableConfigurationIT.java b/test/src/main/java/org/apache/accumulo/test/NewTableConfigurationIT.java
index c2a9891..848bbbf 100644
--- a/test/src/main/java/org/apache/accumulo/test/NewTableConfigurationIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/NewTableConfigurationIT.java
@@ -43,8 +43,6 @@
 import org.junit.BeforeClass;
 import org.junit.Test;
 
-import com.google.common.collect.ImmutableSet;
-
 public class NewTableConfigurationIT extends SharedMiniClusterBase {
 
   @Override
@@ -102,8 +100,8 @@
   public void testOverlappingGroupsFail() {
     NewTableConfiguration ntc = new NewTableConfiguration();
     Map<String,Set<Text>> lgroups = new HashMap<>();
-    lgroups.put("lg1", ImmutableSet.of(new Text("colFamA"), new Text("colFamB")));
-    lgroups.put("lg2", ImmutableSet.of(new Text("colFamC"), new Text("colFamB")));
+    lgroups.put("lg1", Set.of(new Text("colFamA"), new Text("colFamB")));
+    lgroups.put("lg2", Set.of(new Text("colFamC"), new Text("colFamB")));
     ntc.setLocalityGroups(lgroups);
   }
 
@@ -118,8 +116,8 @@
       NewTableConfiguration ntc = new NewTableConfiguration();
       // set locality groups map
       Map<String,Set<Text>> lgroups = new HashMap<>();
-      lgroups.put("lg1", ImmutableSet.of(new Text("dog"), new Text("cat")));
-      lgroups.put("lg2", ImmutableSet.of(new Text("lion"), new Text("tiger")));
+      lgroups.put("lg1", Set.of(new Text("dog"), new Text("cat")));
+      lgroups.put("lg2", Set.of(new Text("lion"), new Text("tiger")));
       // set groups via NewTableConfiguration
       ntc.setLocalityGroups(lgroups);
       client.tableOperations().create(tableName, ntc);
@@ -127,10 +125,8 @@
       Map<String,Set<Text>> createdLocalityGroups =
           client.tableOperations().getLocalityGroups(tableName);
       assertEquals(2, createdLocalityGroups.size());
-      assertEquals(createdLocalityGroups.get("lg1"),
-          ImmutableSet.of(new Text("dog"), new Text("cat")));
-      assertEquals(createdLocalityGroups.get("lg2"),
-          ImmutableSet.of(new Text("lion"), new Text("tiger")));
+      assertEquals(createdLocalityGroups.get("lg1"), Set.of(new Text("dog"), new Text("cat")));
+      assertEquals(createdLocalityGroups.get("lg2"), Set.of(new Text("lion"), new Text("tiger")));
     }
   }
 
@@ -145,19 +141,18 @@
       NewTableConfiguration ntc = new NewTableConfiguration();
       // set first locality groups map
       Map<String,Set<Text>> initalGroup = new HashMap<>();
-      initalGroup.put("lg1", ImmutableSet.of(new Text("dog"), new Text("cat")));
+      initalGroup.put("lg1", Set.of(new Text("dog"), new Text("cat")));
       ntc.setLocalityGroups(initalGroup);
       // set a second locality groups map and set in method call
       Map<String,Set<Text>> secondGroup = new HashMap<>();
-      secondGroup.put("lg1", ImmutableSet.of(new Text("blue"), new Text("red")));
+      secondGroup.put("lg1", Set.of(new Text("blue"), new Text("red")));
       ntc.setLocalityGroups(secondGroup);
       client.tableOperations().create(tableName, ntc);
       // verify
       Map<String,Set<Text>> createdLocalityGroups =
           client.tableOperations().getLocalityGroups(tableName);
       assertEquals(1, createdLocalityGroups.size());
-      assertEquals(createdLocalityGroups.get("lg1"),
-          ImmutableSet.of(new Text("red"), new Text("blue")));
+      assertEquals(createdLocalityGroups.get("lg1"), Set.of(new Text("red"), new Text("blue")));
     }
   }
 
@@ -177,7 +172,7 @@
       ntc.setProperties(props);
 
       Map<String,Set<Text>> lgroups = new HashMap<>();
-      lgroups.put("lg1", ImmutableSet.of(new Text("dog")));
+      lgroups.put("lg1", Set.of(new Text("dog")));
       ntc.setLocalityGroups(lgroups);
       client.tableOperations().create(tableName, ntc);
       // verify
@@ -204,7 +199,7 @@
       Map<String,Set<Text>> createdLocalityGroups =
           client.tableOperations().getLocalityGroups(tableName);
       assertEquals(1, createdLocalityGroups.size());
-      assertEquals(createdLocalityGroups.get("lg1"), ImmutableSet.of(new Text("dog")));
+      assertEquals(createdLocalityGroups.get("lg1"), Set.of(new Text("dog")));
     }
   }
 
@@ -238,14 +233,14 @@
       NewTableConfiguration ntc = new NewTableConfiguration().withoutDefaultIterators();
 
       Map<String,Set<Text>> lgroups = new HashMap<>();
-      lgroups.put("lg1", ImmutableSet.of(new Text("colF")));
+      lgroups.put("lg1", Set.of(new Text("colF")));
       ntc.setLocalityGroups(lgroups);
       client.tableOperations().create(tableName, ntc);
       // verify groups and verify no iterators
       Map<String,Set<Text>> createdLocalityGroups =
           client.tableOperations().getLocalityGroups(tableName);
       assertEquals(1, createdLocalityGroups.size());
-      assertEquals(createdLocalityGroups.get("lg1"), ImmutableSet.of(new Text("colF")));
+      assertEquals(createdLocalityGroups.get("lg1"), Set.of(new Text("colF")));
       Map<String,EnumSet<IteratorScope>> iterators =
           client.tableOperations().listIterators(tableName);
       assertEquals(0, iterators.size());
@@ -526,7 +521,7 @@
       props.put(Property.TABLE_ARBITRARY_PROP_PREFIX.getKey() + "prop1", "val1");
       ntc.setProperties(props);
       Map<String,Set<Text>> lgroups = new HashMap<>();
-      lgroups.put("lg1", ImmutableSet.of(new Text("colF")));
+      lgroups.put("lg1", Set.of(new Text("colF")));
       ntc.setLocalityGroups(lgroups);
       client.tableOperations().create(tableName, ntc);
       // verify user table properties
@@ -542,7 +537,7 @@
       Map<String,Set<Text>> createdLocalityGroups =
           client.tableOperations().getLocalityGroups(tableName);
       assertEquals(1, createdLocalityGroups.size());
-      assertEquals(createdLocalityGroups.get("lg1"), ImmutableSet.of(new Text("colF")));
+      assertEquals(createdLocalityGroups.get("lg1"), Set.of(new Text("colF")));
       // verify iterators
       verifyIterators(client, tableName, new String[] {"table.iterator.scan.someName=10,foo.bar"},
           true);
@@ -564,7 +559,7 @@
       IteratorSetting setting =
           new IteratorSetting(10, "anIterator", "it.class", Collections.emptyMap());
       Map<String,Set<Text>> lgroups = new HashMap<>();
-      lgroups.put("lgp", ImmutableSet.of(new Text("col")));
+      lgroups.put("lgp", Set.of(new Text("col")));
 
       NewTableConfiguration ntc = new NewTableConfiguration().withoutDefaultIterators()
           .attachIterator(setting, EnumSet.of(IteratorScope.scan)).setLocalityGroups(lgroups);
@@ -597,7 +592,7 @@
       Map<String,Set<Text>> createdLocalityGroups =
           client.tableOperations().getLocalityGroups(tableName);
       assertEquals(1, createdLocalityGroups.size());
-      assertEquals(createdLocalityGroups.get("lgp"), ImmutableSet.of(new Text("col")));
+      assertEquals(createdLocalityGroups.get("lgp"), Set.of(new Text("col")));
     }
   }
 
@@ -609,7 +604,7 @@
     NewTableConfiguration ntc = new NewTableConfiguration();
 
     Map<String,Set<Text>> lgroups = new HashMap<>();
-    lgroups.put("lg1", ImmutableSet.of(new Text("dog")));
+    lgroups.put("lg1", Set.of(new Text("dog")));
     ntc.setLocalityGroups(lgroups);
 
     Map<String,String> props = new HashMap<>();
@@ -630,7 +625,7 @@
     ntc.setProperties(props);
 
     Map<String,Set<Text>> lgroups = new HashMap<>();
-    lgroups.put("lg1", ImmutableSet.of(new Text("dog")));
+    lgroups.put("lg1", Set.of(new Text("dog")));
     ntc.setLocalityGroups(lgroups);
   }
 
diff --git a/test/src/main/java/org/apache/accumulo/test/SampleIT.java b/test/src/main/java/org/apache/accumulo/test/SampleIT.java
index 7fc164e..10cf38d 100644
--- a/test/src/main/java/org/apache/accumulo/test/SampleIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/SampleIT.java
@@ -66,15 +66,14 @@
 import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.junit.Test;
 
-import com.google.common.collect.ImmutableMap;
 import com.google.common.collect.Iterables;
 
 public class SampleIT extends AccumuloClusterHarness {
 
   private static final Map<String,String> OPTIONS_1 =
-      ImmutableMap.of("hasher", "murmur3_32", "modulus", "1009");
+      Map.of("hasher", "murmur3_32", "modulus", "1009");
   private static final Map<String,String> OPTIONS_2 =
-      ImmutableMap.of("hasher", "murmur3_32", "modulus", "997");
+      Map.of("hasher", "murmur3_32", "modulus", "997");
 
   private static final SamplerConfiguration SC1 =
       new SamplerConfiguration(RowSampler.class.getName()).setOptions(OPTIONS_1);
diff --git a/test/src/main/java/org/apache/accumulo/test/ShellServerIT.java b/test/src/main/java/org/apache/accumulo/test/ShellServerIT.java
index c8e5de5..d3acf36 100644
--- a/test/src/main/java/org/apache/accumulo/test/ShellServerIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/ShellServerIT.java
@@ -17,6 +17,7 @@
 package org.apache.accumulo.test;
 
 import static java.nio.file.Files.newBufferedReader;
+import static java.util.Objects.requireNonNull;
 import static org.apache.accumulo.fate.util.UtilWaitThread.sleepUninterruptibly;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
@@ -94,7 +95,6 @@
 import org.apache.accumulo.test.functional.SlowIterator;
 import org.apache.accumulo.tracer.TraceServer;
 import org.apache.commons.io.FileUtils;
-import org.apache.commons.lang3.StringUtils;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
@@ -2037,7 +2037,7 @@
 
     log.debug("countFiles(): {}", ts.output.get());
 
-    String[] lines = StringUtils.split(ts.output.get(), "\n");
+    String[] lines = ts.output.get().split("\n");
     ts.output.clear();
 
     if (lines.length == 0) {
@@ -2816,13 +2816,13 @@
   }
 
   private static String encode(final Text text, final boolean encode) {
-    if (StringUtils.isBlank(text.toString()))
+    if (text.toString().isBlank())
       return null;
     return encode ? Base64.getEncoder().encodeToString(TextUtil.getBytes(text)) : text.toString();
   }
 
   private Text decode(final String text, final boolean decode) {
-    if (StringUtils.isBlank(text))
+    if (requireNonNull(text).isBlank())
       return null;
     return decode ? new Text(Base64.getDecoder().decode(text)) : new Text(text);
   }
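
One hedged caveat on the `split` changes in this file: commons-lang3 `StringUtils.split(str, "\n")` treats its argument as a set of separator characters and discards empty tokens, while `String.split("\n")` treats it as a regular expression and keeps interior empty strings (only trailing empties are trimmed). The inputs split here make the two equivalent, but they are not interchangeable in general:

```java
import java.util.Arrays;

public class SplitDemo {
    public static void main(String[] args) {
        // String.split keeps the interior empty token that StringUtils.split drops.
        System.out.println(Arrays.toString("a\n\nb\n".split("\n"))); // [a, , b]
        System.out.println(Arrays.toString("3,4,5".split(",")));     // [3, 4, 5]
    }
}
```
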
diff --git a/test/src/main/java/org/apache/accumulo/test/ThriftServerBindsBeforeZooKeeperLockIT.java b/test/src/main/java/org/apache/accumulo/test/ThriftServerBindsBeforeZooKeeperLockIT.java
index e5bf812..a8db630 100644
--- a/test/src/main/java/org/apache/accumulo/test/ThriftServerBindsBeforeZooKeeperLockIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/ThriftServerBindsBeforeZooKeeperLockIT.java
@@ -22,6 +22,7 @@
 import java.net.URL;
 import java.util.Collection;
 import java.util.List;
+import java.util.Map;
 
 import org.apache.accumulo.core.Constants;
 import org.apache.accumulo.core.client.Accumulo;
@@ -44,8 +45,6 @@
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-import com.google.common.collect.ImmutableMap;
-
 import edu.umd.cs.findbugs.annotations.SuppressFBWarnings;
 
 /**
@@ -286,8 +285,7 @@
         throw new IllegalArgumentException("Irrelevant server type for test");
     }
 
-    return cluster
-        ._exec(service, serverType, ImmutableMap.of(property.getKey(), Integer.toString(port)))
+    return cluster._exec(service, serverType, Map.of(property.getKey(), Integer.toString(port)))
         .getProcess();
   }
 }
diff --git a/test/src/main/java/org/apache/accumulo/test/TransportCachingIT.java b/test/src/main/java/org/apache/accumulo/test/TransportCachingIT.java
index e73792e..6b95d2a 100644
--- a/test/src/main/java/org/apache/accumulo/test/TransportCachingIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/TransportCachingIT.java
@@ -135,12 +135,35 @@
           // Get a non-cached transport
           third = pool.getAnyTransport(servers, false).getSecond();
         } catch (TTransportException e) {
-          log.warn("Failed obtain 2nd transport to {}", servers);
+          log.warn("Failed obtain 3rd transport to {}", servers);
         }
       }
 
       assertNotSame("Expected second and third transport to be different instances", second, third);
       pool.returnTransport(third);
+
+      // ensure the LIFO scheme with a fourth and fifth entry
+      TTransport fourth = null;
+      while (fourth == null) {
+        try {
+          // Get a non-cached transport
+          fourth = pool.getAnyTransport(servers, false).getSecond();
+        } catch (TTransportException e) {
+          log.warn("Failed obtain 4th transport to {}", servers);
+        }
+      }
+      pool.returnTransport(fourth);
+      TTransport fifth = null;
+      while (fifth == null) {
+        try {
+          // Get a cached transport
+          fifth = pool.getAnyTransport(servers, true).getSecond();
+        } catch (TTransportException e) {
+          log.warn("Failed obtain 5th transport to {}", servers);
+        }
+      }
+      assertSame("Expected fourth and fifth transport to be the same instance", fourth, fifth);
+      pool.returnTransport(fifth);
     }
   }
 }
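
The new assertions pin down last-in-first-out reuse: a transport returned to the pool and immediately requested again (with caching enabled) should come back as the very same instance, keeping recently used connections warm. A minimal sketch of that property with an `ArrayDeque`, purely as an illustration of LIFO pooling and not Accumulo's transport pool:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class LifoPoolDemo {
    static final class Pool<T> {
        private final Deque<T> idle = new ArrayDeque<>();

        void returnItem(T item) {
            idle.push(item); // most recently returned sits on top
        }

        T borrow() {
            return idle.pop(); // LIFO: hand back the freshest item first
        }
    }

    public static void main(String[] args) {
        Pool<Object> pool = new Pool<>();
        Object fourth = new Object();
        pool.returnItem(fourth);
        Object fifth = pool.borrow();
        System.out.println(fourth == fifth); // true: the same instance comes back
    }
}
```
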
diff --git a/test/src/main/java/org/apache/accumulo/test/UserCompactionStrategyIT.java b/test/src/main/java/org/apache/accumulo/test/UserCompactionStrategyIT.java
index 07d83b2..8285dee 100644
--- a/test/src/main/java/org/apache/accumulo/test/UserCompactionStrategyIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/UserCompactionStrategyIT.java
@@ -59,9 +59,6 @@
 import org.junit.Assume;
 import org.junit.Test;
 
-import com.google.common.collect.ImmutableMap;
-import com.google.common.collect.ImmutableSet;
-
 import edu.umd.cs.findbugs.annotations.SuppressFBWarnings;
 
 public class UserCompactionStrategyIT extends AccumuloClusterHarness {
@@ -97,17 +94,17 @@
       // drop files that start with A
       CompactionStrategyConfig csConfig =
           new CompactionStrategyConfig(TestCompactionStrategy.class.getName());
-      csConfig.setOptions(ImmutableMap.of("dropPrefix", "A", "inputPrefix", "F"));
+      csConfig.setOptions(Map.of("dropPrefix", "A", "inputPrefix", "F"));
       c.tableOperations().compact(tableName,
           new CompactionConfig().setWait(true).setCompactionStrategy(csConfig));
 
-      assertEquals(ImmutableSet.of("c", "d"), getRows(c, tableName));
+      assertEquals(Set.of("c", "d"), getRows(c, tableName));
 
       // this compaction should not drop files starting with A
       c.tableOperations().compact(tableName, new CompactionConfig().setWait(true));
       c.tableOperations().compact(tableName, new CompactionConfig().setWait(true));
 
-      assertEquals(ImmutableSet.of("c", "d"), getRows(c, tableName));
+      assertEquals(Set.of("c", "d"), getRows(c, tableName));
     }
   }
 
@@ -127,7 +124,7 @@
       c.tableOperations().compact(tableName,
           new CompactionConfig().setWait(true).setCompactionStrategy(csConfig));
 
-      assertEquals(ImmutableSet.of("a", "b"), getRows(c, tableName));
+      assertEquals(Set.of("a", "b"), getRows(c, tableName));
     }
   }
 
@@ -136,7 +133,7 @@
     // test a compaction strategy that selects no files. In this case there is no work to do; we
     // want to ensure it does not hang.
 
-    testDropNone(ImmutableMap.of("inputPrefix", "Z"));
+    testDropNone(Map.of("inputPrefix", "Z"));
   }
 
   @Test
@@ -145,7 +142,7 @@
     // shouldCompact() will return true and getCompactionPlan() will
     // return no work to do.
 
-    testDropNone(ImmutableMap.of("inputPrefix", "Z", "shouldCompact", "true"));
+    testDropNone(Map.of("inputPrefix", "Z", "shouldCompact", "true"));
   }
 
   @Test
@@ -220,7 +217,7 @@
       // drop files that start with A
       CompactionStrategyConfig csConfig =
           new CompactionStrategyConfig(TestCompactionStrategy.class.getName());
-      csConfig.setOptions(ImmutableMap.of("inputPrefix", "F"));
+      csConfig.setOptions(Map.of("inputPrefix", "F"));
 
       IteratorSetting iterConf = new IteratorSetting(21, "myregex", RegExFilter.class);
       RegExFilter.setRegexs(iterConf, "a|c", null, null, null, false);
@@ -231,14 +228,14 @@
       // compaction strategy should only be applied to one file. If it's applied to both, then row
       // 'b'
       // would be dropped by filter.
-      assertEquals(ImmutableSet.of("a", "b", "c"), getRows(c, tableName));
+      assertEquals(Set.of("a", "b", "c"), getRows(c, tableName));
 
       assertEquals(2, FunctionalTestUtils.countRFiles(c, tableName));
 
       c.tableOperations().compact(tableName, new CompactionConfig().setWait(true));
 
       // ensure that iterator is not applied
-      assertEquals(ImmutableSet.of("a", "b", "c"), getRows(c, tableName));
+      assertEquals(Set.of("a", "b", "c"), getRows(c, tableName));
 
       assertEquals(1, FunctionalTestUtils.countRFiles(c, tableName));
     }
@@ -263,14 +260,14 @@
 
       CompactionStrategyConfig csConfig =
           new CompactionStrategyConfig(SizeCompactionStrategy.class.getName());
-      csConfig.setOptions(ImmutableMap.of("size", "" + (1 << 15)));
+      csConfig.setOptions(Map.of("size", "" + (1 << 15)));
       c.tableOperations().compact(tableName,
           new CompactionConfig().setWait(true).setCompactionStrategy(csConfig));
 
       assertEquals(3, FunctionalTestUtils.countRFiles(c, tableName));
 
       csConfig = new CompactionStrategyConfig(SizeCompactionStrategy.class.getName());
-      csConfig.setOptions(ImmutableMap.of("size", "" + (1 << 17)));
+      csConfig.setOptions(Map.of("size", "" + (1 << 17)));
       c.tableOperations().compact(tableName,
           new CompactionConfig().setWait(true).setCompactionStrategy(csConfig));
 
@@ -308,8 +305,9 @@
       try {
         // this compaction should fail because previous one set iterators
         c.tableOperations().compact(tableName, new CompactionConfig().setWait(true));
-        if (System.currentTimeMillis() - t1 < 2000)
+        if (System.currentTimeMillis() - t1 < 2000) {
           fail("Expected compaction to fail because another concurrent compaction set iterators");
+        }
       } catch (AccumuloException e) {}
     }
   }
@@ -331,8 +329,9 @@
   private Set<String> getRows(AccumuloClient c, String tableName) throws TableNotFoundException {
     Set<String> rows = new HashSet<>();
     try (Scanner scanner = c.createScanner(tableName, Authorizations.EMPTY)) {
-      for (Entry<Key,Value> entry : scanner)
+      for (Entry<Key,Value> entry : scanner) {
         rows.add(entry.getKey().getRowData().toString());
+      }
     }
     return rows;
   }
diff --git a/test/src/main/java/org/apache/accumulo/test/VolumeIT.java b/test/src/main/java/org/apache/accumulo/test/VolumeIT.java
index de3b6e0..1e9e01e 100644
--- a/test/src/main/java/org/apache/accumulo/test/VolumeIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/VolumeIT.java
@@ -16,7 +16,6 @@
  */
 package org.apache.accumulo.test;
 
-import static java.nio.charset.StandardCharsets.UTF_8;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
@@ -45,6 +44,7 @@
 import org.apache.accumulo.core.client.admin.DiskUsage;
 import org.apache.accumulo.core.client.admin.NewTableConfiguration;
 import org.apache.accumulo.core.client.security.tokens.PasswordToken;
+import org.apache.accumulo.core.clientImpl.ClientContext;
 import org.apache.accumulo.core.conf.ClientProperty;
 import org.apache.accumulo.core.conf.Property;
 import org.apache.accumulo.core.data.Key;
@@ -59,8 +59,6 @@
 import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.DataFileColumnFamily;
 import org.apache.accumulo.core.security.Authorizations;
 import org.apache.accumulo.core.security.TablePermission;
-import org.apache.accumulo.fate.zookeeper.ZooReader;
-import org.apache.accumulo.fate.zookeeper.ZooUtil;
 import org.apache.accumulo.miniclusterImpl.MiniAccumuloConfigImpl;
 import org.apache.accumulo.server.ServerConstants;
 import org.apache.accumulo.server.init.Initialize;
@@ -479,10 +477,9 @@
       verifyVolumesUsed(client, tableNames[0], true, v2);
 
       // check that root tablet is not on volume 1
-      ZooReader zreader = new ZooReader(cluster.getZooKeepers(), 30000);
-      String zpath = ZooUtil.getRoot(client.instanceOperations().getInstanceID())
-          + RootTable.ZROOT_TABLET_PATH;
-      String rootTabletDir = new String(zreader.getData(zpath, false, null), UTF_8);
+      String rootTabletDir =
+          ((ClientContext) client).getAmple().readTablet(RootTable.EXTENT).getDir();
+
       assertTrue(rootTabletDir.startsWith(v2.toString()));
 
       client.tableOperations().clone(tableNames[0], tableNames[1], true, new HashMap<>(),
@@ -544,10 +541,8 @@
     verifyVolumesUsed(client, tableNames[1], true, v8, v9);
 
     // check that root tablet is not on volume 1 or 2
-    ZooReader zreader = new ZooReader(cluster.getZooKeepers(), 30000);
-    String zpath =
-        ZooUtil.getRoot(client.instanceOperations().getInstanceID()) + RootTable.ZROOT_TABLET_PATH;
-    String rootTabletDir = new String(zreader.getData(zpath, false, null), UTF_8);
+    String rootTabletDir =
+        ((ClientContext) client).getAmple().readTablet(RootTable.EXTENT).getDir();
     assertTrue(rootTabletDir.startsWith(v8.toString()) || rootTabletDir.startsWith(v9.toString()));
 
     client.tableOperations().clone(tableNames[1], tableNames[2], true, new HashMap<>(),
diff --git a/test/src/main/java/org/apache/accumulo/test/YieldScannersIT.java b/test/src/main/java/org/apache/accumulo/test/YieldScannersIT.java
index 1f98163..354556d 100644
--- a/test/src/main/java/org/apache/accumulo/test/YieldScannersIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/YieldScannersIT.java
@@ -37,7 +37,6 @@
 import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.accumulo.miniclusterImpl.MiniAccumuloConfigImpl;
 import org.apache.accumulo.test.functional.YieldingIterator;
-import org.apache.commons.lang3.StringUtils;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.io.Text;
 import org.junit.Test;
@@ -101,7 +100,7 @@
             yieldNextCount++;
             yieldSeekCount++;
           }
-          String[] value = StringUtils.split(next.getValue().toString(), ',');
+          String[] value = next.getValue().toString().split(",");
           assertEquals("Unexpected yield next count", Integer.toString(yieldNextCount), value[0]);
           assertEquals("Unexpected yield seek count", Integer.toString(yieldSeekCount), value[1]);
           assertEquals("Unexpected rebuild count",
@@ -157,7 +156,7 @@
             yieldNextCount++;
             yieldSeekCount++;
           }
-          String[] value = StringUtils.split(next.getValue().toString(), ',');
+          String[] value = next.getValue().toString().split(",");
           assertEquals("Unexpected yield next count", Integer.toString(yieldNextCount), value[0]);
           assertEquals("Unexpected yield seek count", Integer.toString(yieldSeekCount), value[1]);
           assertEquals("Unexpected rebuild count",
diff --git a/test/src/main/java/org/apache/accumulo/test/functional/BulkFailureIT.java b/test/src/main/java/org/apache/accumulo/test/functional/BulkFailureIT.java
index 18d8182..3f21352 100644
--- a/test/src/main/java/org/apache/accumulo/test/functional/BulkFailureIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/BulkFailureIT.java
@@ -76,12 +76,9 @@
 import org.apache.zookeeper.KeeperException;
 import org.junit.Test;
 
-import com.google.common.collect.ImmutableMap;
-import com.google.common.collect.ImmutableSet;
-
 public class BulkFailureIT extends AccumuloClusterHarness {
 
-  static interface Loader {
+  interface Loader {
     void load(long txid, ClientContext context, KeyExtent extent, Path path, long size,
         boolean expectFailure) throws Exception;
   }
@@ -136,8 +133,8 @@
       // Directly ask the tablet to load the file.
       loader.load(fateTxid, asCtx, extent, bulkLoadPath, status.getLen(), false);
 
-      assertEquals(ImmutableSet.of(bulkLoadPath), getFiles(c, extent));
-      assertEquals(ImmutableSet.of(bulkLoadPath), getLoaded(c, extent));
+      assertEquals(Set.of(bulkLoadPath), getFiles(c, extent));
+      assertEquals(Set.of(bulkLoadPath), getLoaded(c, extent));
       assertEquals(testData, readTable(table, c));
 
       // Compact the bulk imported file. Subsequent request to load the file should be ignored.
@@ -146,14 +143,14 @@
       Set<Path> tabletFiles = getFiles(c, extent);
       assertFalse(tabletFiles.contains(bulkLoadPath));
       assertEquals(1, tabletFiles.size());
-      assertEquals(ImmutableSet.of(bulkLoadPath), getLoaded(c, extent));
+      assertEquals(Set.of(bulkLoadPath), getLoaded(c, extent));
       assertEquals(testData, readTable(table, c));
 
       // this request should be ignored by the tablet
       loader.load(fateTxid, asCtx, extent, bulkLoadPath, status.getLen(), false);
 
       assertEquals(tabletFiles, getFiles(c, extent));
-      assertEquals(ImmutableSet.of(bulkLoadPath), getLoaded(c, extent));
+      assertEquals(Set.of(bulkLoadPath), getLoaded(c, extent));
       assertEquals(testData, readTable(table, c));
 
       // this is done to ensure the tablet reads the load flags from the metadata table when it
@@ -165,7 +162,7 @@
       loader.load(fateTxid, asCtx, extent, bulkLoadPath, status.getLen(), false);
 
       assertEquals(tabletFiles, getFiles(c, extent));
-      assertEquals(ImmutableSet.of(bulkLoadPath), getLoaded(c, extent));
+      assertEquals(Set.of(bulkLoadPath), getLoaded(c, extent));
       assertEquals(testData, readTable(table, c));
 
      // After this, all load requests should fail.
@@ -182,7 +179,7 @@
       loader.load(fateTxid, asCtx, extent, bulkLoadPath, status.getLen(), true);
 
       assertEquals(tabletFiles, getFiles(c, extent));
-      assertEquals(ImmutableSet.of(), getLoaded(c, extent));
+      assertEquals(Set.of(), getLoaded(c, extent));
       assertEquals(testData, readTable(table, c));
     }
   }
@@ -257,8 +254,8 @@
     TabletClientService.Iface client = getClient(context, extent);
     try {
 
-      Map<String,MapFileInfo> val = ImmutableMap.of(path.toString(), new MapFileInfo(size));
-      Map<KeyExtent,Map<String,MapFileInfo>> files = ImmutableMap.of(extent, val);
+      Map<String,MapFileInfo> val = Map.of(path.toString(), new MapFileInfo(size));
+      Map<KeyExtent,Map<String,MapFileInfo>> files = Map.of(extent, val);
 
       client.bulkImport(TraceUtil.traceInfo(), context.rpcCreds(), txid,
           Translator.translate(files, Translators.KET), false);
@@ -266,8 +263,9 @@
         fail("Expected RPC to fail");
       }
     } catch (TApplicationException tae) {
-      if (!expectFailure)
+      if (!expectFailure) {
         throw tae;
+      }
     } finally {
       ThriftUtil.returnClient((TServiceClient) client);
     }
@@ -279,8 +277,8 @@
     TabletClientService.Iface client = getClient(context, extent);
     try {
 
-      Map<String,MapFileInfo> val = ImmutableMap.of(path.getName(), new MapFileInfo(size));
-      Map<KeyExtent,Map<String,MapFileInfo>> files = ImmutableMap.of(extent, val);
+      Map<String,MapFileInfo> val = Map.of(path.getName(), new MapFileInfo(size));
+      Map<KeyExtent,Map<String,MapFileInfo>> files = Map.of(extent, val);
 
       client.loadFiles(TraceUtil.traceInfo(), context.rpcCreds(), txid, path.getParent().toString(),
           Translator.translate(files, Translators.KET), false);
diff --git a/test/src/main/java/org/apache/accumulo/test/functional/BulkNewIT.java b/test/src/main/java/org/apache/accumulo/test/functional/BulkNewIT.java
index 95e9b2f..1c65b88 100644
--- a/test/src/main/java/org/apache/accumulo/test/functional/BulkNewIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/BulkNewIT.java
@@ -17,6 +17,9 @@
 package org.apache.accumulo.test.functional;
 
 import static java.nio.charset.StandardCharsets.UTF_8;
+import static org.apache.accumulo.core.metadata.schema.TabletMetadata.ColumnType.FILES;
+import static org.apache.accumulo.core.metadata.schema.TabletMetadata.ColumnType.LOADED;
+import static org.apache.accumulo.core.metadata.schema.TabletMetadata.ColumnType.PREV_ROW;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertTrue;
 import static org.junit.Assert.fail;
@@ -75,9 +78,6 @@
 import org.junit.BeforeClass;
 import org.junit.Test;
 
-import com.google.common.collect.ImmutableMap;
-import com.google.common.collect.ImmutableSet;
-
 import edu.umd.cs.findbugs.annotations.SuppressFBWarnings;
 
 /**
@@ -138,8 +138,9 @@
       throws Exception {
     addSplits(c, tableName, "0333");
 
-    if (offline)
+    if (offline) {
       c.tableOperations().offline(tableName);
+    }
 
     String dir = getDir("/testSingleTabletSingleFileNoSplits-");
 
@@ -147,12 +148,12 @@
 
     c.tableOperations().importDirectory(dir).to(tableName).tableTime(setTime).load();
 
-    if (offline)
+    if (offline) {
       c.tableOperations().online(tableName);
+    }
 
     verifyData(c, tableName, 0, 332, setTime);
-    verifyMetadata(c, tableName,
-        ImmutableMap.of("0333", ImmutableSet.of(h1), "null", ImmutableSet.of()));
+    verifyMetadata(c, tableName, Map.of("0333", Set.of(h1), "null", Set.of()));
   }
 
   @Test
@@ -183,8 +184,9 @@
 
   private void testSingleTabletSingleFileNoSplits(AccumuloClient c, boolean offline)
       throws Exception {
-    if (offline)
+    if (offline) {
       c.tableOperations().offline(tableName);
+    }
 
     String dir = getDir("/testSingleTabletSingleFileNoSplits-");
 
@@ -192,11 +194,12 @@
 
     c.tableOperations().importDirectory(dir).to(tableName).load();
 
-    if (offline)
+    if (offline) {
       c.tableOperations().online(tableName);
+    }
 
     verifyData(c, tableName, 0, 333, false);
-    verifyMetadata(c, tableName, ImmutableMap.of("null", ImmutableSet.of(h1)));
+    verifyMetadata(c, tableName, Map.of("null", Set.of(h1)));
   }
 
   @Test
@@ -230,8 +233,9 @@
       } catch (Exception e) {
         Throwable cause = e.getCause();
         if (!(cause instanceof FileNotFoundException)
-            && !(cause.getCause() instanceof FileNotFoundException))
+            && !(cause.getCause() instanceof FileNotFoundException)) {
           fail("Expected FileNotFoundException but threw " + e.getCause());
+        }
       } finally {
         fs.setPermission(rFilePath, originalPerms);
       }
@@ -241,8 +245,9 @@
       try {
         c.tableOperations().importDirectory(dir).to(tableName).load();
       } catch (AccumuloException ae) {
-        if (!(ae.getCause() instanceof FileNotFoundException))
+        if (!(ae.getCause() instanceof FileNotFoundException)) {
           fail("Expected FileNotFoundException but threw " + ae.getCause());
+        }
       } finally {
         fs.setPermission(new Path(dir), originalPerms);
       }
@@ -253,8 +258,9 @@
     try (AccumuloClient c = Accumulo.newClient().from(getClientProps()).build()) {
       addSplits(c, tableName, "0333 0666 0999 1333 1666");
 
-      if (offline)
+      if (offline) {
         c.tableOperations().offline(tableName);
+      }
 
       String dir = getDir("/testBulkFile-");
 
@@ -297,8 +303,9 @@
         c.tableOperations().importDirectory(dir).to(tableName).load();
       }
 
-      if (offline)
+      if (offline) {
         c.tableOperations().online(tableName);
+      }
 
       verifyData(c, tableName, 0, 1999, false);
       verifyMetadata(c, tableName, hashes);
@@ -380,8 +387,9 @@
   private void addSplits(AccumuloClient client, String tableName, String splitString)
       throws Exception {
     SortedSet<Text> splits = new TreeSet<>();
-    for (String split : splitString.split(" "))
+    for (String split : splitString.split(" ")) {
       splits.add(new Text(split));
+    }
     client.tableOperations().addSplits(tableName, splits);
   }
 
@@ -392,25 +400,30 @@
       Iterator<Entry<Key,Value>> iter = scanner.iterator();
 
       for (int i = start; i <= end; i++) {
-        if (!iter.hasNext())
+        if (!iter.hasNext()) {
           throw new Exception("row " + i + " not found");
+        }
 
         Entry<Key,Value> entry = iter.next();
 
         String row = String.format("%04d", i);
 
-        if (!entry.getKey().getRow().equals(new Text(row)))
+        if (!entry.getKey().getRow().equals(new Text(row))) {
           throw new Exception("unexpected row " + entry.getKey() + " " + i);
+        }
 
-        if (Integer.parseInt(entry.getValue().toString()) != i)
+        if (Integer.parseInt(entry.getValue().toString()) != i) {
           throw new Exception("unexpected value " + entry + " " + i);
+        }
 
-        if (setTime)
+        if (setTime) {
           assertEquals(1L, entry.getKey().getTimestamp());
+        }
       }
 
-      if (iter.hasNext())
+      if (iter.hasNext()) {
         throw new Exception("found more than expected " + iter.next());
+      }
     }
   }
 
@@ -420,8 +433,8 @@
     Set<String> endRowsSeen = new HashSet<>();
 
     String id = client.tableOperations().tableIdMap().get(tableName);
-    try (TabletsMetadata tablets = TabletsMetadata.builder().forTable(TableId.of(id)).fetchFiles()
-        .fetchLoaded().fetchPrev().build(client)) {
+    try (TabletsMetadata tablets = TabletsMetadata.builder().forTable(TableId.of(id))
+        .fetch(FILES, LOADED, PREV_ROW).build(client)) {
       for (TabletMetadata tablet : tablets) {
         assertTrue(tablet.getLoaded().isEmpty());
 
diff --git a/test/src/main/java/org/apache/accumulo/test/functional/CacheTestClean.java b/test/src/main/java/org/apache/accumulo/test/functional/CacheTestClean.java
index 3ac49c1..99dd7c3 100644
--- a/test/src/main/java/org/apache/accumulo/test/functional/CacheTestClean.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/CacheTestClean.java
@@ -34,7 +34,7 @@
     String rootDir = args[0];
     File reportDir = new File(args[1]);
 
-    SiteConfiguration siteConfig = new SiteConfiguration();
+    var siteConfig = SiteConfiguration.auto();
     IZooReaderWriter zoo = new ZooReaderWriter(siteConfig);
 
     if (zoo.exists(rootDir)) {
diff --git a/test/src/main/java/org/apache/accumulo/test/functional/CacheTestWriter.java b/test/src/main/java/org/apache/accumulo/test/functional/CacheTestWriter.java
index f8e473d..25bcab3 100644
--- a/test/src/main/java/org/apache/accumulo/test/functional/CacheTestWriter.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/CacheTestWriter.java
@@ -32,7 +32,6 @@
 import java.util.concurrent.TimeUnit;
 
 import org.apache.accumulo.core.conf.SiteConfiguration;
-import org.apache.accumulo.fate.zookeeper.IZooReaderWriter;
 import org.apache.accumulo.fate.zookeeper.ZooReaderWriter;
 import org.apache.accumulo.fate.zookeeper.ZooUtil.NodeExistsPolicy;
 import org.apache.accumulo.fate.zookeeper.ZooUtil.NodeMissingPolicy;
@@ -46,7 +45,7 @@
   @SuppressFBWarnings(value = {"PATH_TRAVERSAL_IN", "OBJECT_DESERIALIZATION"},
       justification = "path provided by test; object deserialization is okay for test")
   public static void main(String[] args) throws Exception {
-    IZooReaderWriter zk = new ZooReaderWriter(new SiteConfiguration());
+    var zk = new ZooReaderWriter(SiteConfiguration.auto());
 
     String rootDir = args[0];
     File reportDir = new File(args[1]);
@@ -159,8 +158,9 @@
             }
           }
 
-          if (ok)
+          if (ok) {
             break;
+          }
         }
 
         sleepUninterruptibly(5, TimeUnit.MILLISECONDS);
diff --git a/test/src/main/java/org/apache/accumulo/test/functional/DeleteTableDuringSplitIT.java b/test/src/main/java/org/apache/accumulo/test/functional/DeleteTableDuringSplitIT.java
deleted file mode 100644
index 2cc58ee..0000000
--- a/test/src/main/java/org/apache/accumulo/test/functional/DeleteTableDuringSplitIT.java
+++ /dev/null
@@ -1,119 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.test.functional;
-
-import static org.junit.Assert.assertFalse;
-import static org.junit.Assert.assertTrue;
-
-import java.util.ArrayList;
-import java.util.Iterator;
-import java.util.List;
-import java.util.SortedSet;
-import java.util.TreeSet;
-import java.util.concurrent.Future;
-
-import org.apache.accumulo.core.client.Accumulo;
-import org.apache.accumulo.core.client.AccumuloClient;
-import org.apache.accumulo.core.client.TableNotFoundException;
-import org.apache.accumulo.core.util.SimpleThreadPool;
-import org.apache.accumulo.fate.util.UtilWaitThread;
-import org.apache.accumulo.harness.AccumuloClusterHarness;
-import org.apache.accumulo.test.categories.PerformanceTests;
-import org.apache.accumulo.test.categories.StandaloneCapableClusterTests;
-import org.apache.hadoop.io.Text;
-import org.junit.Test;
-import org.junit.experimental.categories.Category;
-
-// ACCUMULO-2361
-@Category({StandaloneCapableClusterTests.class, PerformanceTests.class})
-public class DeleteTableDuringSplitIT extends AccumuloClusterHarness {
-
-  @Override
-  protected int defaultTimeoutSeconds() {
-    return 15 * 60;
-  }
-
-  @Test
-  public void test() throws Exception {
-
-    try (AccumuloClient client = Accumulo.newClient().from(getClientProps()).build()) {
-
-      // 96 invocations, 8 at a time
-      int batches = 12, batchSize = 8;
-      String[] tableNames = getUniqueNames(batches * batchSize);
-      // make a bunch of tables
-      for (String tableName : tableNames) {
-        client.tableOperations().create(tableName);
-      }
-      final SortedSet<Text> splits = new TreeSet<>();
-      for (byte i = 0; i < 100; i++) {
-        splits.add(new Text(new byte[] {0, 0, i}));
-      }
-
-      List<Future<?>> results = new ArrayList<>();
-      List<Runnable> tasks = new ArrayList<>();
-      SimpleThreadPool es = new SimpleThreadPool(batchSize * 2, "concurrent-api-requests");
-      for (String tableName : tableNames) {
-        final String finalName = tableName;
-        tasks.add(new Runnable() {
-          @Override
-          public void run() {
-            try {
-              client.tableOperations().addSplits(finalName, splits);
-            } catch (TableNotFoundException ex) {
-              // expected, ignore
-            } catch (Exception ex) {
-              throw new RuntimeException(finalName, ex);
-            }
-          }
-        });
-        tasks.add(new Runnable() {
-          @Override
-          public void run() {
-            try {
-              UtilWaitThread.sleep(500);
-              client.tableOperations().delete(finalName);
-            } catch (Exception ex) {
-              throw new RuntimeException(ex);
-            }
-          }
-        });
-      }
-      Iterator<Runnable> itr = tasks.iterator();
-      for (int batch = 0; batch < batches; batch++) {
-        for (int i = 0; i < batchSize; i++) {
-          Future<?> f = es.submit(itr.next());
-          results.add(f);
-          f = es.submit(itr.next());
-          results.add(f);
-        }
-        for (Future<?> f : results) {
-          f.get();
-        }
-        results.clear();
-      }
-      // Shut down the ES
-      List<Runnable> queued = es.shutdownNow();
-      assertTrue("Had more tasks to run", queued.isEmpty());
-      assertFalse("Had more tasks that needed to be submitted", itr.hasNext());
-      for (String tableName : tableNames) {
-        assertFalse(client.tableOperations().exists(tableName));
-      }
-    }
-  }
-
-}
diff --git a/test/src/main/java/org/apache/accumulo/test/functional/DurabilityIT.java b/test/src/main/java/org/apache/accumulo/test/functional/DurabilityIT.java
index 496e2d1..83a5aac 100644
--- a/test/src/main/java/org/apache/accumulo/test/functional/DurabilityIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/DurabilityIT.java
@@ -36,7 +36,6 @@
 import org.apache.accumulo.miniclusterImpl.MiniAccumuloConfigImpl;
 import org.apache.accumulo.miniclusterImpl.ProcessReference;
 import org.apache.accumulo.test.categories.MiniClusterOnlyTests;
-import org.apache.accumulo.test.categories.PerformanceTests;
 import org.apache.accumulo.test.mrit.IntegrationTestMapReduce;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.RawLocalFileSystem;
@@ -46,7 +45,7 @@
 
 import com.google.common.collect.Iterators;
 
-@Category({MiniClusterOnlyTests.class, PerformanceTests.class})
+@Category({MiniClusterOnlyTests.class})
 public class DurabilityIT extends ConfigurableMacBase {
 
   @Override
diff --git a/test/src/main/java/org/apache/accumulo/test/functional/GarbageCollectorIT.java b/test/src/main/java/org/apache/accumulo/test/functional/GarbageCollectorIT.java
index 1856efb..ae9936f 100644
--- a/test/src/main/java/org/apache/accumulo/test/functional/GarbageCollectorIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/GarbageCollectorIT.java
@@ -57,6 +57,7 @@
 import org.apache.accumulo.miniclusterImpl.MiniAccumuloConfigImpl;
 import org.apache.accumulo.miniclusterImpl.ProcessNotFoundException;
 import org.apache.accumulo.miniclusterImpl.ProcessReference;
+import org.apache.accumulo.server.metadata.ServerAmpleImpl;
 import org.apache.accumulo.test.TestIngest;
 import org.apache.accumulo.test.VerifyIngest;
 import org.apache.accumulo.test.VerifyIngest.VerifyParams;
@@ -186,7 +187,7 @@
   }
 
   private Mutation createDelMutation(String path, String cf, String cq, String val) {
-    Text row = new Text(MetadataSchema.DeletesSection.getRowPrefix() + path);
+    Text row = new Text(MetadataSchema.DeletesSection.encodeRow(path));
     Mutation delFlag = new Mutation(row);
     delFlag.put(cf, cq, val);
     return delFlag;
@@ -213,7 +214,11 @@
       try (BatchWriter bw = c.createBatchWriter(MetadataTable.NAME)) {
         bw.addMutation(createDelMutation("", "", "", ""));
         bw.addMutation(createDelMutation("", "testDel", "test", "valueTest"));
-        bw.addMutation(createDelMutation("/", "", "", ""));
+        // The path is invalid, but the value is the expected deletion marker. This is the only
+        // way the invalid entry makes it through processing and shows up as an error in the
+        // output, which allows the while loop to end.
+        bw.addMutation(
+            createDelMutation("/", "", "", MetadataSchema.DeletesSection.SkewedKeyValue.STR_NAME));
       }
 
       ProcessInfo gc = cluster.exec(SimpleGarbageCollector.class);
@@ -297,18 +302,15 @@
     return Iterators.size(Arrays.asList(cluster.getFileSystem().globStatus(path)).iterator());
   }
 
-  public static void addEntries(AccumuloClient client) throws Exception {
+  private void addEntries(AccumuloClient client) throws Exception {
     client.securityOperations().grantTablePermission(client.whoami(), MetadataTable.NAME,
         TablePermission.WRITE);
     try (BatchWriter bw = client.createBatchWriter(MetadataTable.NAME)) {
       for (int i = 0; i < 100000; ++i) {
-        final Text emptyText = new Text("");
-        Text row =
-            new Text(String.format("%s/%020d/%s", MetadataSchema.DeletesSection.getRowPrefix(), i,
-                "aaaaaaaaaabbbbbbbbbbccccccccccddddddddddeeeeeeeeee"
-                    + "ffffffffffgggggggggghhhhhhhhhhiiiiiiiiiijjjjjjjjjj"));
-        Mutation delFlag = new Mutation(row);
-        delFlag.put(emptyText, emptyText, new Value(new byte[] {}));
+        String longpath = "aaaaaaaaaabbbbbbbbbbccccccccccddddddddddeeeeeeeeee"
+            + "ffffffffffgggggggggghhhhhhhhhhiiiiiiiiiijjjjjjjjjj";
+        Mutation delFlag = ServerAmpleImpl.createDeleteMutation(getServerContext(),
+            MetadataTable.ID, String.format("/%020d/%s", i, longpath));
         bw.addMutation(delFlag);
       }
     }
diff --git a/test/src/main/java/org/apache/accumulo/test/functional/GcMetricsIT.java b/test/src/main/java/org/apache/accumulo/test/functional/GcMetricsIT.java
new file mode 100644
index 0000000..eab1956
--- /dev/null
+++ b/test/src/main/java/org/apache/accumulo/test/functional/GcMetricsIT.java
@@ -0,0 +1,220 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.test.functional;
+
+import static org.junit.Assert.assertTrue;
+
+import java.util.Collections;
+import java.util.Map;
+import java.util.TreeMap;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.accumulo.core.client.Accumulo;
+import org.apache.accumulo.core.client.AccumuloClient;
+import org.apache.accumulo.gc.metrics.GcMetrics;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
+import org.apache.accumulo.test.metrics.MetricsFileTailer;
+import org.junit.Before;
+import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Functional test that uses a hadoop metrics 2 file sink to read published metrics for
+ * verification.
+ */
+public class GcMetricsIT extends AccumuloClusterHarness {
+
+  private static final Logger log = LoggerFactory.getLogger(GcMetricsIT.class);
+
+  private AccumuloClient accumuloClient;
+
+  private static final int NUM_TAIL_ATTEMPTS = 20;
+  private static final long TAIL_DELAY = 5_000;
+
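+  // Metric names the GC publishes for both the file and WAL collection cycles; every parsed
+  // record is expected to contain all of these keys.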
+  private static final String[] EXPECTED_METRIC_KEYS = new String[] {"AccGcCandidates",
+      "AccGcDeleted", "AccGcErrors", "AccGcFinished", "AccGcInUse", "AccGcPostOpDuration",
+      "AccGcRunCycleCount", "AccGcStarted", "AccGcWalCandidates", "AccGcWalDeleted",
+      "AccGcWalErrors", "AccGcWalFinished", "AccGcWalInUse", "AccGcWalStarted"};
+
+  @Before
+  public void setup() {
+    accumuloClient = Accumulo.newClient().from(getClientProps()).build();
+  }
+
+  @Override
+  protected int defaultTimeoutSeconds() {
+    return 4 * 60;
+  }
+
+  @Test
+  public void gcMetricsPublished() {
+
+    log.trace("Client started, properties:{}", accumuloClient.properties());
+
+    MetricsFileTailer gcTail = new MetricsFileTailer("accumulo.sink.file-gc");
+    Thread t1 = new Thread(gcTail);
+    t1.start();
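+    // The tailer polls the metrics sink file in the background; the test waits on it below for
+    // new metric records.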
+
+    try {
+
+      long testStart = System.currentTimeMillis();
+
+      LineUpdate firstUpdate = waitForUpdate(-1, gcTail);
+
+      Map<String,Long> firstSeenMap = parseLine(firstUpdate.getLine());
+
+      log.trace("L:{}", firstUpdate.getLine());
+      log.trace("M:{}", firstSeenMap);
+
+      assertTrue(lookForExpectedKeys(firstSeenMap));
+      sanity(testStart, firstSeenMap);
+
+      LineUpdate nextUpdate = waitForUpdate(firstUpdate.getLastUpdate(), gcTail);
+
+      Map<String,Long> updateSeenMap = parseLine(nextUpdate.getLine());
+
+      log.debug("Line received:{}", nextUpdate.getLine());
+      log.trace("Mapped values:{}", updateSeenMap);
+
+      assertTrue(lookForExpectedKeys(updateSeenMap));
+      sanity(testStart, updateSeenMap);
+
+      validate(firstSeenMap, updateSeenMap);
+
+    } catch (Exception ex) {
+      log.debug("reads", ex);
+    }
+  }
+
+  /**
+   * Validate metrics for consistency within a run cycle.
+   *
+   * @param testStart
+   *          time the test started; used as a lower bound for cycle start times.
+   * @param values
+   *          map of values from one run cycle.
+   */
+  private void sanity(final long testStart, final Map<String,Long> values) {
+
+    long start = values.get("AccGcStarted");
+    long finished = values.get("AccGcFinished");
+    assertTrue(start >= testStart);
+    assertTrue(finished >= start);
+
+    start = values.get("AccGcWalStarted");
+    finished = values.get("AccGcWalFinished");
+    assertTrue(start >= testStart);
+    assertTrue(finished >= start);
+
+  }
+
+  /**
+   * A series of sanity checks comparing metrics between different update cycles: some values must
+   * differ between cycles, and some checks also verify ordering.
+   *
+   * @param firstSeen
+   *          map of first metric update
+   * @param nextSeen
+   *          map of a later metric update.
+   */
+  private void validate(Map<String,Long> firstSeen, Map<String,Long> nextSeen) {
+    assertTrue(nextSeen.get("AccGcStarted") > firstSeen.get("AccGcStarted"));
+    assertTrue(nextSeen.get("AccGcFinished") > firstSeen.get("AccGcWalStarted"));
+    assertTrue(nextSeen.get("AccGcRunCycleCount") > firstSeen.get("AccGcRunCycleCount"));
+  }
+
+  /**
+   * The hadoop metrics file sink publishes records as a line of comma-separated key=value pairs.
+   * This method parses the line, extracts the key/value pairs for metrics that start with AccGc,
+   * and returns them in a sorted map.
+   *
+   * @param line
+   *          a line from the metrics system file sink.
+   * @return a map of the metrics that start with AccGc
+   */
+  private Map<String,Long> parseLine(final String line) {
+
+    if (line == null) {
+      return Collections.emptyMap();
+    }
+
+    Map<String,Long> m = new TreeMap<>();
+
+    String[] csvTokens = line.split(",");
+
+    for (String token : csvTokens) {
+      token = token.trim();
+      if (token.startsWith(GcMetrics.GC_METRIC_PREFIX)) {
+        String[] parts = token.split("=");
+        m.put(parts[0], Long.parseLong(parts[1]));
+      }
+    }
+    return m;
+  }
+
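+  /** Pairs a metrics line with the tailer's update marker from when the line was read. */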
+  private static class LineUpdate {
+    private final long lastUpdate;
+    private final String line;
+
+    LineUpdate(long lastUpdate, String line) {
+      this.lastUpdate = lastUpdate;
+      this.line = line;
+    }
+
+    long getLastUpdate() {
+      return lastUpdate;
+    }
+
+    String getLine() {
+      return line;
+    }
+  }
+
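+  /**
+   * Poll the tailer until it reports a line newer than {@code prevUpdate}, retrying up to
+   * NUM_TAIL_ATTEMPTS times with TAIL_DELAY milliseconds between attempts.
+   */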
+  private LineUpdate waitForUpdate(final long prevUpdate, final MetricsFileTailer tail) {
+
+    for (int count = 0; count < NUM_TAIL_ATTEMPTS; count++) {
+
+      String line = tail.getLast();
+      long currUpdate = tail.getLastUpdate();
+
+      if (line != null && (currUpdate != prevUpdate)) {
+        return new LineUpdate(tail.getLastUpdate(), line);
+      }
+
+      try {
+        Thread.sleep(TAIL_DELAY);
+      } catch (InterruptedException ex) {
+        Thread.currentThread().interrupt();
+        throw new IllegalStateException(ex);
+      }
+    }
+    // not found - throw exception.
+    throw new IllegalStateException(
+        String.format("File source update not received after %d tries in %d sec", NUM_TAIL_ATTEMPTS,
+            TimeUnit.MILLISECONDS.toSeconds(TAIL_DELAY * NUM_TAIL_ATTEMPTS)));
+  }
+
+  private boolean lookForExpectedKeys(final Map<String,Long> received) {
+
+    for (String e : EXPECTED_METRIC_KEYS) {
+      if (!received.containsKey(e)) {
+        return false;
+      }
+    }
+
+    return true;
+  }
+}
diff --git a/test/src/main/java/org/apache/accumulo/test/functional/MasterAssignmentIT.java b/test/src/main/java/org/apache/accumulo/test/functional/MasterAssignmentIT.java
index cd1af04..2414677 100644
--- a/test/src/main/java/org/apache/accumulo/test/functional/MasterAssignmentIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/MasterAssignmentIT.java
@@ -27,6 +27,7 @@
 import org.apache.accumulo.core.data.Mutation;
 import org.apache.accumulo.core.data.Range;
 import org.apache.accumulo.core.data.TableId;
+import org.apache.accumulo.core.metadata.MetadataTable;
 import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection;
 import org.apache.accumulo.fate.util.UtilWaitThread;
 import org.apache.accumulo.harness.AccumuloClusterHarness;
@@ -88,7 +89,7 @@
 
   private TabletLocationState getTabletLocationState(AccumuloClient c, String tableId) {
     try (MetaDataTableScanner s = new MetaDataTableScanner((ClientContext) c,
-        new Range(TabletsSection.getRow(TableId.of(tableId), null)))) {
+        new Range(TabletsSection.getRow(TableId.of(tableId), null)), MetadataTable.NAME)) {
       return s.next();
     }
   }
diff --git a/test/src/main/java/org/apache/accumulo/test/functional/SplitRecoveryIT.java b/test/src/main/java/org/apache/accumulo/test/functional/SplitRecoveryIT.java
index 704299b..aaadfdc 100644
--- a/test/src/main/java/org/apache/accumulo/test/functional/SplitRecoveryIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/SplitRecoveryIT.java
@@ -18,6 +18,7 @@
 
 import static java.nio.charset.StandardCharsets.UTF_8;
 import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
 
 import java.util.ArrayList;
 import java.util.Collection;
@@ -32,6 +33,7 @@
 
 import org.apache.accumulo.core.Constants;
 import org.apache.accumulo.core.client.Scanner;
+import org.apache.accumulo.core.client.admin.TimeType;
 import org.apache.accumulo.core.clientImpl.ScannerImpl;
 import org.apache.accumulo.core.clientImpl.Writer;
 import org.apache.accumulo.core.conf.SiteConfiguration;
@@ -45,6 +47,8 @@
 import org.apache.accumulo.core.metadata.schema.DataFileValue;
 import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection;
 import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.DataFileColumnFamily;
+import org.apache.accumulo.core.metadata.schema.MetadataTime;
+import org.apache.accumulo.core.metadata.schema.TabletMetadata;
 import org.apache.accumulo.core.security.Authorizations;
 import org.apache.accumulo.core.util.ColumnFQ;
 import org.apache.accumulo.fate.zookeeper.ZooLock;
@@ -57,11 +61,9 @@
 import org.apache.accumulo.server.fs.FileRef;
 import org.apache.accumulo.server.master.state.Assignment;
 import org.apache.accumulo.server.master.state.TServerInstance;
-import org.apache.accumulo.server.tablets.TabletTime;
 import org.apache.accumulo.server.util.MasterMetadataUtil;
 import org.apache.accumulo.server.util.MetadataTableUtil;
 import org.apache.accumulo.server.zookeeper.TransactionWatcher;
-import org.apache.accumulo.tserver.TabletServer;
 import org.apache.hadoop.io.Text;
 import org.junit.Test;
 
@@ -147,7 +149,7 @@
 
       String tdir =
           ServerConstants.getTablesDirs(context)[0] + "/" + extent.getTableId() + "/dir_" + i;
-      MetadataTableUtil.addTablet(extent, tdir, context, TabletTime.LOGICAL_TIME_ID, zl);
+      MetadataTableUtil.addTablet(extent, tdir, context, TimeType.LOGICAL, zl);
       SortedMap<FileRef,DataFileValue> mapFiles = new TreeMap<>();
       mapFiles.put(new FileRef(tdir + "/" + RFile.EXTENSION + "_000_000"),
           new DataFileValue(1000017 + i, 10000 + i));
@@ -157,7 +159,8 @@
       }
       int tid = 0;
       TransactionWatcher.ZooArbitrator.start(context, Constants.BULK_ARBITRATOR_TYPE, tid);
-      MetadataTableUtil.updateTabletDataFile(tid, extent, mapFiles, "L0", context, zl);
+      MetadataTableUtil.updateTabletDataFile(tid, extent, mapFiles,
+          new MetadataTime(0, TimeType.LOGICAL), context, zl);
     }
 
     KeyExtent extent = extents[extentToSplit];
@@ -169,6 +172,17 @@
         "localhost:1234", failPoint, zl);
   }
 
+  private static Map<Long,List<FileRef>> getBulkFilesLoaded(ServerContext context,
+      KeyExtent extent) {
+    Map<Long,List<FileRef>> bulkFiles = new HashMap<>();
+
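+    // Group the tablet's loaded files by the bulk import transaction id that loaded them.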
+    context.getAmple().readTablet(extent).getLoaded()
+        .forEach((path, txid) -> bulkFiles.computeIfAbsent(txid, k -> new ArrayList<FileRef>())
+            .add(new FileRef(context.getVolumeManager(), path, extent.getTableId())));
+
+    return bulkFiles;
+  }
+
   private void splitPartiallyAndRecover(ServerContext context, KeyExtent extent, KeyExtent high,
       KeyExtent low, double splitRatio, SortedMap<FileRef,DataFileValue> mapFiles, Text midRow,
       String location, int steps, ZooLock zl) throws Exception {
@@ -189,26 +203,28 @@
     writer.update(m);
 
     if (steps >= 1) {
-      Map<Long,? extends Collection<FileRef>> bulkFiles =
-          MetadataTableUtil.getBulkFilesLoaded(context, extent);
+      Map<Long,List<FileRef>> bulkFiles = getBulkFilesLoaded(context, extent);
+
       MasterMetadataUtil.addNewTablet(context, low, "/lowDir", instance, lowDatafileSizes,
-          bulkFiles, TabletTime.LOGICAL_TIME_ID + "0", -1L, -1L, zl);
+          bulkFiles, new MetadataTime(0, TimeType.LOGICAL), -1L, -1L, zl);
     }
     if (steps >= 2) {
       MetadataTableUtil.finishSplit(high, highDatafileSizes, highDatafilesToRemove, context, zl);
     }
 
-    TabletServer.verifyTabletInformation(context, high, instance, new TreeMap<>(), "127.0.0.1:0",
-        zl);
+    TabletMetadata meta = context.getAmple().readTablet(high);
+    KeyExtent fixedExtent = MasterMetadataUtil.fixSplit(context, meta, zl);
+
+    if (steps < 2) {
+      assertEquals(splitRatio, meta.getSplitRatio(), 0.0);
+    }
 
     if (steps >= 1) {
+      assertEquals(high, fixedExtent);
       ensureTabletHasNoUnexpectedMetadataEntries(context, low, lowDatafileSizes);
       ensureTabletHasNoUnexpectedMetadataEntries(context, high, highDatafileSizes);
 
-      Map<Long,? extends Collection<FileRef>> lowBulkFiles =
-          MetadataTableUtil.getBulkFilesLoaded(context, low);
-      Map<Long,? extends Collection<FileRef>> highBulkFiles =
-          MetadataTableUtil.getBulkFilesLoaded(context, high);
+      Map<Long,? extends Collection<FileRef>> lowBulkFiles = getBulkFilesLoaded(context, low);
+      Map<Long,? extends Collection<FileRef>> highBulkFiles = getBulkFilesLoaded(context, high);
 
       if (!lowBulkFiles.equals(highBulkFiles)) {
         throw new Exception(" " + lowBulkFiles + " != " + highBulkFiles + " " + low + " " + high);
@@ -218,6 +234,7 @@
         throw new Exception(" no bulk files " + low);
       }
     } else {
+      assertEquals(extent, fixedExtent);
       ensureTabletHasNoUnexpectedMetadataEntries(context, extent, mapFiles);
     }
   }
@@ -241,14 +258,25 @@
       expectedColumnFamilies.add(TabletsSection.BulkFileColumnFamily.NAME);
 
       Iterator<Entry<Key,Value>> iter = scanner.iterator();
+
+      boolean sawPer = false;
+
       while (iter.hasNext()) {
-        Key key = iter.next().getKey();
+        Entry<Key,Value> entry = iter.next();
+        Key key = entry.getKey();
 
         if (!key.getRow().equals(extent.getMetadataEntry())) {
           throw new Exception(
               "Tablet " + extent + " contained unexpected " + MetadataTable.NAME + " entry " + key);
         }
 
+        if (TabletsSection.TabletColumnFamily.PREV_ROW_COLUMN.hasColumns(key)) {
+          sawPer = true;
+          if (!new KeyExtent(key.getRow(), entry.getValue()).equals(extent)) {
+            throw new Exception("Unexpected prev end row " + entry);
+          }
+        }
+
         if (expectedColumnFamilies.contains(key.getColumnFamily())) {
           continue;
         }
@@ -261,13 +289,14 @@
             "Tablet " + extent + " contained unexpected " + MetadataTable.NAME + " entry " + key);
       }
 
-      System.out.println("expectedColumns " + expectedColumns);
       if (expectedColumns.size() > 1 || (expectedColumns.size() == 1)) {
         throw new Exception("Not all expected columns seen " + extent + " " + expectedColumns);
       }
 
+      assertTrue(sawPer);
+
       SortedMap<FileRef,DataFileValue> fixedMapFiles =
-          MetadataTableUtil.getDataFileSizes(extent, context);
+          MetadataTableUtil.getFileAndLogEntries(context, extent).getSecond();
       verifySame(expectedMapFiles, fixedMapFiles);
     }
   }
@@ -292,7 +321,7 @@
   }
 
   public static void main(String[] args) throws Exception {
-    new SplitRecoveryIT().run(new ServerContext(new SiteConfiguration()));
+    new SplitRecoveryIT().run(new ServerContext(SiteConfiguration.auto()));
   }
 
   @Test
diff --git a/test/src/main/java/org/apache/accumulo/test/functional/SummaryIT.java b/test/src/main/java/org/apache/accumulo/test/functional/SummaryIT.java
index 45acbc7..67844f6 100644
--- a/test/src/main/java/org/apache/accumulo/test/functional/SummaryIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/SummaryIT.java
@@ -89,8 +89,6 @@
 import org.junit.Test;
 
 import com.google.common.collect.ImmutableMap;
-import com.google.common.collect.ImmutableMap.Builder;
-import com.google.common.collect.ImmutableSet;
 import com.google.common.collect.Iterables;
 import com.google.common.collect.Lists;
 
@@ -712,7 +710,7 @@
   }
 
   private Map<String,Long> nm(Object... entries) {
-    Builder<String,Long> imb = ImmutableMap.builder();
+    var imb = ImmutableMap.<String,Long>builder();
     for (int i = 0; i < entries.length; i += 2) {
       imb.put((String) entries[i], (Long) entries[i + 1]);
     }
@@ -729,10 +727,10 @@
       ntc.enableSummarization(sc1, sc2);
 
       Map<String,Set<Text>> lgroups = new HashMap<>();
-      lgroups.put("lg1", ImmutableSet.of(new Text("chocolate"), new Text("coffee")));
-      lgroups.put("lg2", ImmutableSet.of(new Text(" broccoli "), new Text("cabbage")));
+      lgroups.put("lg1", Set.of(new Text("chocolate"), new Text("coffee")));
+      lgroups.put("lg2", Set.of(new Text(" broccoli "), new Text("cabbage")));
       // create a locality group that will not have data in it
-      lgroups.put("lg3", ImmutableSet.of(new Text(" apple "), new Text("orange")));
+      lgroups.put("lg3", Set.of(new Text(" apple "), new Text("orange")));
 
       ntc.setLocalityGroups(lgroups);
       c.tableOperations().create(table, ntc);
diff --git a/test/src/main/java/org/apache/accumulo/test/functional/ZombieTServer.java b/test/src/main/java/org/apache/accumulo/test/functional/ZombieTServer.java
index 79a9291..3f6f03a 100644
--- a/test/src/main/java/org/apache/accumulo/test/functional/ZombieTServer.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/ZombieTServer.java
@@ -101,7 +101,7 @@
     Random random = new SecureRandom();
     random.setSeed(System.currentTimeMillis() % 1000);
     int port = random.nextInt(30000) + 2000;
-    ServerContext context = new ServerContext(new SiteConfiguration());
+    var context = new ServerContext(SiteConfiguration.auto());
     TransactionWatcher watcher = new TransactionWatcher(context);
     final ThriftClientHandler tch = new ThriftClientHandler(context, watcher);
     Processor<Iface> processor = new Processor<>(tch);
diff --git a/test/src/main/java/org/apache/accumulo/test/master/SuspendedTabletsIT.java b/test/src/main/java/org/apache/accumulo/test/master/SuspendedTabletsIT.java
index 9d37515..b6d887f 100644
--- a/test/src/main/java/org/apache/accumulo/test/master/SuspendedTabletsIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/master/SuspendedTabletsIT.java
@@ -38,7 +38,6 @@
 import java.util.concurrent.ExecutorService;
 import java.util.concurrent.Executors;
 import java.util.concurrent.FutureTask;
-import java.util.concurrent.ThreadFactory;
 import java.util.concurrent.TimeoutException;
 import java.util.concurrent.atomic.AtomicInteger;
 
@@ -50,6 +49,7 @@
 import org.apache.accumulo.core.conf.Property;
 import org.apache.accumulo.core.data.Range;
 import org.apache.accumulo.core.dataImpl.KeyExtent;
+import org.apache.accumulo.core.metadata.MetadataTable;
 import org.apache.accumulo.core.util.HostAndPort;
 import org.apache.accumulo.minicluster.ServerType;
 import org.apache.accumulo.miniclusterImpl.MiniAccumuloConfigImpl;
@@ -67,7 +67,6 @@
 import org.slf4j.LoggerFactory;
 
 import com.google.common.collect.HashMultimap;
-import com.google.common.collect.ImmutableMap;
 import com.google.common.collect.SetMultimap;
 
 public class SuspendedTabletsIT extends ConfigurableMacBase {
@@ -91,19 +90,15 @@
   public void crashAndResumeTserver() throws Exception {
     // Run the test body. When we get to the point where we need a tserver to go away, get rid of it
     // via crashing
-    suspensionTestBody(new TServerKiller() {
-      @Override
-      public void eliminateTabletServers(ClientContext ctx, TabletLocations locs, int count)
-          throws Exception {
-        List<ProcessReference> procs =
-            new ArrayList<>(getCluster().getProcesses().get(ServerType.TABLET_SERVER));
-        Collections.shuffle(procs);
+    suspensionTestBody((ctx, locs, count) -> {
+      List<ProcessReference> procs =
+          new ArrayList<>(getCluster().getProcesses().get(ServerType.TABLET_SERVER));
+      Collections.shuffle(procs);
 
-        for (int i = 0; i < count; ++i) {
-          ProcessReference pr = procs.get(i);
-          log.info("Crashing {}", pr.getProcess());
-          getCluster().killProcess(ServerType.TABLET_SERVER, pr);
-        }
+      for (int i = 0; i < count; ++i) {
+        ProcessReference pr = procs.get(i);
+        log.info("Crashing {}", pr.getProcess());
+        getCluster().killProcess(ServerType.TABLET_SERVER, pr);
       }
     });
   }
@@ -112,49 +107,45 @@
   public void shutdownAndResumeTserver() throws Exception {
     // Run the test body. When we get to the point where we need tservers to go away, stop them via
     // a clean shutdown.
-    suspensionTestBody(new TServerKiller() {
-      @Override
-      public void eliminateTabletServers(final ClientContext ctx, TabletLocations locs, int count)
-          throws Exception {
-        Set<TServerInstance> tserversSet = new HashSet<>();
-        for (TabletLocationState tls : locs.locationStates.values()) {
-          if (tls.current != null) {
-            tserversSet.add(tls.current);
-          }
+    suspensionTestBody((ctx, locs, count) -> {
+      Set<TServerInstance> tserversSet = new HashSet<>();
+      for (TabletLocationState tls : locs.locationStates.values()) {
+        if (tls.current != null) {
+          tserversSet.add(tls.current);
         }
-        List<TServerInstance> tserversList = new ArrayList<>(tserversSet);
-        Collections.shuffle(tserversList, RANDOM);
-
-        for (int i = 0; i < count; ++i) {
-          final String tserverName = tserversList.get(i).toString();
-          MasterClient.executeVoid(ctx, client -> {
-            log.info("Sending shutdown command to {} via MasterClientService", tserverName);
-            client.shutdownTabletServer(null, ctx.rpcCreds(), tserverName, false);
-          });
-        }
-
-        log.info("Waiting for tserver process{} to die", count == 1 ? "" : "es");
-        for (int i = 0; i < 10; ++i) {
-          List<ProcessReference> deadProcs = new ArrayList<>();
-          for (ProcessReference pr : getCluster().getProcesses().get(ServerType.TABLET_SERVER)) {
-            Process p = pr.getProcess();
-            if (!p.isAlive()) {
-              deadProcs.add(pr);
-            }
-          }
-          for (ProcessReference pr : deadProcs) {
-            log.info("Process {} is dead, informing cluster control about this", pr.getProcess());
-            getCluster().getClusterControl().killProcess(ServerType.TABLET_SERVER, pr);
-            --count;
-          }
-          if (count == 0) {
-            return;
-          } else {
-            Thread.sleep(MILLISECONDS.convert(2, SECONDS));
-          }
-        }
-        throw new IllegalStateException("Tablet servers didn't die!");
       }
+      List<TServerInstance> tserversList = new ArrayList<>(tserversSet);
+      Collections.shuffle(tserversList, RANDOM);
+
+      for (int i1 = 0; i1 < count; ++i1) {
+        final String tserverName = tserversList.get(i1).toString();
+        MasterClient.executeVoid(ctx, client -> {
+          log.info("Sending shutdown command to {} via MasterClientService", tserverName);
+          client.shutdownTabletServer(null, ctx.rpcCreds(), tserverName, false);
+        });
+      }
+
+      log.info("Waiting for tserver process{} to die", count == 1 ? "" : "es");
+      for (int i2 = 0; i2 < 10; ++i2) {
+        List<ProcessReference> deadProcs = new ArrayList<>();
+        for (ProcessReference pr1 : getCluster().getProcesses().get(ServerType.TABLET_SERVER)) {
+          Process p = pr1.getProcess();
+          if (!p.isAlive()) {
+            deadProcs.add(pr1);
+          }
+        }
+        for (ProcessReference pr2 : deadProcs) {
+          log.info("Process {} is dead, informing cluster control about this", pr2.getProcess());
+          getCluster().getClusterControl().killProcess(ServerType.TABLET_SERVER, pr2);
+          --count;
+        }
+        if (count == 0) {
+          return;
+        } else {
+          Thread.sleep(MILLISECONDS.convert(2, SECONDS));
+        }
+      }
+      throw new IllegalStateException("Tablet servers didn't die!");
     });
   }
 
@@ -232,7 +223,7 @@
       HostAndPort restartedServer = deadTabletsByServer.keySet().iterator().next();
       log.info("Restarting " + restartedServer);
       getCluster().getClusterControl().start(ServerType.TABLET_SERVER,
-          ImmutableMap.of(Property.TSERV_CLIENTPORT.getKey(), "" + restartedServer.getPort(),
+          Map.of(Property.TSERV_CLIENTPORT.getKey(), "" + restartedServer.getPort(),
               Property.TSERV_PORTSEARCH.getKey(), "false"),
           1);
 
@@ -266,12 +257,8 @@
 
   @BeforeClass
   public static void init() {
-    THREAD_POOL = Executors.newCachedThreadPool(new ThreadFactory() {
-      @Override
-      public Thread newThread(Runnable r) {
-        return new Thread(r, "Scanning deadline thread #" + threadCounter.incrementAndGet());
-      }
-    });
+    THREAD_POOL = Executors.newCachedThreadPool(
+        r -> new Thread(r, "Scanning deadline thread #" + threadCounter.incrementAndGet()));
   }
 
   @AfterClass
@@ -318,7 +305,7 @@
     private void scan(ClientContext ctx, String tableName) {
       Map<String,String> idMap = ctx.tableOperations().tableIdMap();
       String tableId = Objects.requireNonNull(idMap.get(tableName));
-      try (MetaDataTableScanner scanner = new MetaDataTableScanner(ctx, new Range())) {
+      try (var scanner = new MetaDataTableScanner(ctx, new Range(), MetadataTable.NAME)) {
         while (scanner.hasNext()) {
           TabletLocationState tls = scanner.next();
 
diff --git a/test/src/main/java/org/apache/accumulo/test/metrics/MetricsFileTailer.java b/test/src/main/java/org/apache/accumulo/test/metrics/MetricsFileTailer.java
new file mode 100644
index 0000000..7236e06
--- /dev/null
+++ b/test/src/main/java/org/apache/accumulo/test/metrics/MetricsFileTailer.java
@@ -0,0 +1,255 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.test.metrics;
+
+import java.io.File;
+import java.io.RandomAccessFile;
+import java.net.URL;
+import java.util.Iterator;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.concurrent.atomic.AtomicLong;
+import java.util.concurrent.locks.Lock;
+import java.util.concurrent.locks.ReentrantLock;
+
+import org.apache.commons.configuration2.Configuration;
+import org.apache.commons.configuration2.FileBasedConfiguration;
+import org.apache.commons.configuration2.PropertiesConfiguration;
+import org.apache.commons.configuration2.builder.FileBasedConfigurationBuilder;
+import org.apache.commons.configuration2.builder.fluent.Parameters;
+import org.apache.commons.configuration2.ex.ConfigurationException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * This class supports testing of publishing to the hadoop metrics system by processing a file of
+ * metric records (written one record per line). The file should be configured in the hadoop metrics
+ * properties as a file based sink with the prefix that is provided when the instance is created.
+ *
+ * This class simulates tailing a file and is intended to run in a separate thread. When the
+ * underlying file has data written, the value returned by getLastUpdate will change, and the last
+ * line can be retrieved with getLast().
+ */
+public class MetricsFileTailer implements Runnable, AutoCloseable {
+
+  private static final Logger log = LoggerFactory.getLogger(MetricsFileTailer.class);
+
+  private static final int BUFFER_SIZE = 4;
+
+  private final String metricsPrefix;
+
+  private final Lock lock = new ReentrantLock();
+  private final AtomicBoolean running = new AtomicBoolean(true);
+
+  private final AtomicLong lastUpdate = new AtomicLong(0);
+  private final long startTime = System.nanoTime();
+
+  private int lineCounter = 0;
+  private String[] lineBuffer = new String[BUFFER_SIZE];
+
+  private final String metricsFilename;
+
+  /**
+   * Create an instance that will tail a metrics file. The filename / path is determined by the
+   * hadoop-metrics-accumulo.properties sink configuration for the metrics prefix that is provided.
+   *
+   * @param metricsPrefix
+   *          the prefix in the metrics configuration.
+   */
+  public MetricsFileTailer(final String metricsPrefix) {
+
+    this.metricsPrefix = metricsPrefix;
+
+    Configuration sub = loadMetricsConfig();
+
+    // dump the configuration keys that were received.
+    if (log.isTraceEnabled()) {
+      Iterator<String> keys = sub.getKeys();
+      while (keys.hasNext()) {
+        log.trace("configuration key:{}", keys.next());
+      }
+    }
+
+    if (sub.containsKey("filename")) {
+      metricsFilename = sub.getString("filename");
+    } else {
+      metricsFilename = "";
+    }
+
+  }
+
+  /**
+   * Create an instance by specifying a file directly instead of using the metrics configuration -
+   * mainly for testing.
+   *
+   * @param metricsPrefix
+   *          generally can be ignored.
+   * @param filename
+   *          the path / file to be monitored.
+   */
+  MetricsFileTailer(final String metricsPrefix, final String filename) {
+    this.metricsPrefix = metricsPrefix;
+    metricsFilename = filename;
+  }
+
+  /**
+   * Look for the accumulo metrics configuration file on the classpath and return the subset of
+   * properties for this instance's sink prefix.
+   *
+   * @return a configuration with the properties for the configured sink prefix.
+   */
+  private Configuration loadMetricsConfig() {
+    try {
+
+      final URL propUrl =
+          getClass().getClassLoader().getResource(MetricsTestSinkProperties.METRICS_PROP_FILENAME);
+
+      if (propUrl == null) {
+        throw new IllegalStateException(
+            "Could not find " + MetricsTestSinkProperties.METRICS_PROP_FILENAME + " on classpath");
+      }
+
+      String filename = propUrl.getFile();
+
+      Parameters params = new Parameters();
+      // Read data from this file
+      File propertiesFile = new File(filename);
+
+      FileBasedConfigurationBuilder<FileBasedConfiguration> builder =
+          new FileBasedConfigurationBuilder<FileBasedConfiguration>(PropertiesConfiguration.class)
+              .configure(params.fileBased().setFile(propertiesFile));
+
+      Configuration config = builder.getConfiguration();
+
+      final Configuration sub = config.subset(metricsPrefix);
+
+      if (log.isTraceEnabled()) {
+        log.trace("Config {}", config);
+        Iterator<String> iterator = sub.getKeys();
+        while (iterator.hasNext()) {
+          String key = iterator.next();
+          log.trace("'{}\'=\'{}\'", key, sub.getProperty(key));
+        }
+      }
+
+      return sub;
+
+    } catch (ConfigurationException ex) {
+      throw new IllegalStateException(
+          String.format("Could not find configuration file '%s' on classpath",
+              MetricsTestSinkProperties.METRICS_PROP_FILENAME), ex);
+    }
+  }
+
+  /**
+   * Creates a marker value that changes each time a new line is detected. Clients can use this to
+   * determine if a call to getLast() will return a new value.
+   *
+   * @return a marker value set when a line is available.
+   */
+  public long getLastUpdate() {
+    return lastUpdate.get();
+  }
+
+  /**
+   * Get the last line seen in the file.
+   *
+   * @return the last line from the file.
+   */
+  public String getLast() {
+    lock.lock();
+    try {
+
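+      // lineCounter is the index of the next write slot, so step back one (wrapping around the
+      // circular buffer) to reach the most recently written line.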
+      int last = (lineCounter % BUFFER_SIZE) - 1;
+      if (last < 0) {
+        last = BUFFER_SIZE - 1;
+      }
+      return lineBuffer[last];
+    } finally {
+      lock.unlock();
+    }
+  }
+
+  /**
+   * A loop that polls the file for changes and, when the file grows, appends the new lines to a
+   * ring buffer so that clients can retrieve the most recent line using getLast().
+   */
+  @Override
+  public void run() {
+
+    long filePos = 0;
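+    // filePos tracks how much of the file has been consumed, so only new content is read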
+
+    File f = new File(metricsFilename);
+
+    while (running.get()) {
+
+      try {
+        Thread.sleep(5_000);
+      } catch (InterruptedException ex) {
+        running.set(false);
+        Thread.currentThread().interrupt();
+        return;
+      }
+
+      long len = f.length();
+
+      try {
+
+        // file truncated? reset position
+        if (len < filePos) {
+          filePos = 0;
+          lock.lock();
+          try {
+            for (int i = 0; i < BUFFER_SIZE; i++) {
+              lineBuffer[i] = "";
+            }
+            lineCounter = 0;
+          } finally {
+            lock.unlock();
+          }
+        }
+
+        if (len > filePos) {
+          // the file has grown; use try-with-resources so it is closed even if reading fails
+          try (RandomAccessFile raf = new RandomAccessFile(f, "r")) {
+            raf.seek(filePos);
+            String line;
+            lock.lock();
+            try {
+              while ((line = raf.readLine()) != null) {
+                lineBuffer[lineCounter++ % BUFFER_SIZE] = line;
+              }
+              lastUpdate.set(System.nanoTime() - startTime);
+            } finally {
+              lock.unlock();
+            }
+            filePos = raf.getFilePointer();
+          }
+        }
+      } catch (Exception ex) {
+        log.info("Error processing metrics file {}", metricsFilename, ex);
+      }
+    }
+  }
+
+  @Override
+  public void close() {
+    running.set(false);
+  }
+}
diff --git a/core/src/main/java/org/apache/accumulo/core/conf/ConfigurationObserver.java b/test/src/main/java/org/apache/accumulo/test/metrics/MetricsTestSinkProperties.java
similarity index 66%
rename from core/src/main/java/org/apache/accumulo/core/conf/ConfigurationObserver.java
rename to test/src/main/java/org/apache/accumulo/test/metrics/MetricsTestSinkProperties.java
index 8a62d26..5317755 100644
--- a/core/src/main/java/org/apache/accumulo/core/conf/ConfigurationObserver.java
+++ b/test/src/main/java/org/apache/accumulo/test/metrics/MetricsTestSinkProperties.java
@@ -14,12 +14,15 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.accumulo.core.conf;
+package org.apache.accumulo.test.metrics;
 
-public interface ConfigurationObserver {
-  void propertyChanged(String key);
+/**
+ * Common properties used with the test metrics configuration.
+ */
+public class MetricsTestSinkProperties {
 
-  void propertiesChanged();
+  public static final String METRICS_PROP_FILENAME = "hadoop-metrics2-accumulo.properties";
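+  // these prefixes must match the sinks configured in hadoop-metrics2-accumulo.properties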
+  public static final String ACC_GC_SINK_PREFIX = "accumulo.sink.file-gc";
+  public static final String ACC_MASTER_SINK_PREFIX = "accumulo.sink.file-master";
 
-  void sessionExpired();
 }
diff --git a/test/src/main/java/org/apache/accumulo/test/mrit/IntegrationTestMapReduce.java b/test/src/main/java/org/apache/accumulo/test/mrit/IntegrationTestMapReduce.java
index c8aabb5..c666c92 100644
--- a/test/src/main/java/org/apache/accumulo/test/mrit/IntegrationTestMapReduce.java
+++ b/test/src/main/java/org/apache/accumulo/test/mrit/IntegrationTestMapReduce.java
@@ -20,7 +20,6 @@
 import java.util.ArrayList;
 import java.util.List;
 
-import org.apache.commons.lang3.StringUtils;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.conf.Configured;
 import org.apache.hadoop.fs.Path;
@@ -146,7 +145,7 @@
         } else {
           log.info("{} failed", className);
           context.getCounter(TestCounts.FAIL).increment(1);
-          context.write(FAIL, new Text(className + "(" + StringUtils.join(failures, ", ") + ")"));
+          context.write(FAIL, new Text(className + "(" + String.join(", ", failures) + ")"));
         }
       } catch (Exception e) {
         // most likely JUnit issues, like no tests to run
diff --git a/test/src/main/java/org/apache/accumulo/test/performance/ContinuousIngest.java b/test/src/main/java/org/apache/accumulo/test/performance/ContinuousIngest.java
deleted file mode 100644
index e46d871..0000000
--- a/test/src/main/java/org/apache/accumulo/test/performance/ContinuousIngest.java
+++ /dev/null
@@ -1,277 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.test.performance;
-
-import static java.nio.charset.StandardCharsets.UTF_8;
-
-import java.io.BufferedReader;
-import java.io.IOException;
-import java.io.InputStreamReader;
-import java.security.SecureRandom;
-import java.util.ArrayList;
-import java.util.Collections;
-import java.util.List;
-import java.util.Random;
-import java.util.UUID;
-import java.util.zip.CRC32;
-import java.util.zip.Checksum;
-
-import org.apache.accumulo.core.cli.ClientOpts;
-import org.apache.accumulo.core.client.Accumulo;
-import org.apache.accumulo.core.client.AccumuloClient;
-import org.apache.accumulo.core.client.BatchWriter;
-import org.apache.accumulo.core.client.MutationsRejectedException;
-import org.apache.accumulo.core.client.TableNotFoundException;
-import org.apache.accumulo.core.data.Mutation;
-import org.apache.accumulo.core.data.Value;
-import org.apache.accumulo.core.security.ColumnVisibility;
-import org.apache.accumulo.core.trace.TraceUtil;
-import org.apache.accumulo.core.util.FastFormat;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.fs.FileSystem;
-import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.io.Text;
-import org.apache.htrace.TraceScope;
-import org.apache.htrace.wrappers.TraceProxy;
-
-import com.beust.jcommander.Parameter;
-
-public class ContinuousIngest {
-
-  private static final byte[] EMPTY_BYTES = new byte[0];
-
-  private static List<ColumnVisibility> visibilities;
-
-  private static void initVisibilities(ContinuousOpts opts) throws Exception {
-    if (opts.visFile == null) {
-      visibilities = Collections.singletonList(new ColumnVisibility());
-      return;
-    }
-
-    visibilities = readVisFromFile(opts.visFile);
-  }
-
-  public static List<ColumnVisibility> readVisFromFile(String visFile) {
-    List<ColumnVisibility> vis = new ArrayList<>();
-
-    try (BufferedReader in = new BufferedReader(new InputStreamReader(
-        FileSystem.get(new Configuration()).open(new Path(visFile)), UTF_8))) {
-      String line;
-      while ((line = in.readLine()) != null) {
-        vis.add(new ColumnVisibility(line));
-      }
-    } catch (IOException e) {
-      System.out.println("ERROR reading visFile " + visFile + ": ");
-      e.printStackTrace();
-    }
-    return vis;
-  }
-
-  private static ColumnVisibility getVisibility(Random rand) {
-    return visibilities.get(rand.nextInt(visibilities.size()));
-  }
-
-  static class TestOpts extends ClientOpts {
-    @Parameter(names = "--table", description = "table to use")
-    String tableName = "ci";
-  }
-
-  public static void main(String[] args) throws Exception {
-
-    ContinuousOpts opts = new ContinuousOpts();
-    TestOpts clientOpts = new TestOpts();
-    try (TraceScope clientSpan =
-        clientOpts.parseArgsAndTrace(ContinuousIngest.class.getName(), args, opts)) {
-
-      initVisibilities(opts);
-
-      if (opts.min < 0 || opts.max < 0 || opts.max <= opts.min) {
-        throw new IllegalArgumentException("bad min and max");
-      }
-      try (AccumuloClient client = Accumulo.newClient().from(clientOpts.getClientProps()).build()) {
-
-        if (!client.tableOperations().exists(clientOpts.tableName)) {
-          throw new TableNotFoundException(null, clientOpts.tableName,
-              "Consult the README and create the table before starting ingest.");
-        }
-
-        BatchWriter bw = client.createBatchWriter(clientOpts.tableName);
-        bw = TraceProxy.trace(bw, TraceUtil.countSampler(1024));
-
-        Random r = new SecureRandom();
-
-        byte[] ingestInstanceId = UUID.randomUUID().toString().getBytes(UTF_8);
-
-        System.out.printf("UUID %d %s%n", System.currentTimeMillis(),
-            new String(ingestInstanceId, UTF_8));
-
-        long count = 0;
-        final int flushInterval = 1000000;
-        final int maxDepth = 25;
-
-        // always want to point back to flushed data. This way the previous item should
-        // always exist in accumulo when verifying data. To do this make insert N point
-        // back to the row from insert (N - flushInterval). The array below is used to keep
-        // track of this.
-        long[] prevRows = new long[flushInterval];
-        long[] firstRows = new long[flushInterval];
-        int[] firstColFams = new int[flushInterval];
-        int[] firstColQuals = new int[flushInterval];
-
-        long lastFlushTime = System.currentTimeMillis();
-
-        out: while (true) {
-          // generate first set of nodes
-          ColumnVisibility cv = getVisibility(r);
-
-          for (int index = 0; index < flushInterval; index++) {
-            long rowLong = genLong(opts.min, opts.max, r);
-            prevRows[index] = rowLong;
-            firstRows[index] = rowLong;
-
-            int cf = r.nextInt(opts.maxColF);
-            int cq = r.nextInt(opts.maxColQ);
-
-            firstColFams[index] = cf;
-            firstColQuals[index] = cq;
-
-            Mutation m =
-                genMutation(rowLong, cf, cq, cv, ingestInstanceId, count, null, opts.checksum);
-            count++;
-            bw.addMutation(m);
-          }
-
-          lastFlushTime = flush(bw, count, flushInterval, lastFlushTime);
-          if (count >= opts.num)
-            break out;
-
-          // generate subsequent sets of nodes that link to previous set of nodes
-          for (int depth = 1; depth < maxDepth; depth++) {
-            for (int index = 0; index < flushInterval; index++) {
-              long rowLong = genLong(opts.min, opts.max, r);
-              byte[] prevRow = genRow(prevRows[index]);
-              prevRows[index] = rowLong;
-              Mutation m = genMutation(rowLong, r.nextInt(opts.maxColF), r.nextInt(opts.maxColQ),
-                  cv, ingestInstanceId, count, prevRow, opts.checksum);
-              count++;
-              bw.addMutation(m);
-            }
-
-            lastFlushTime = flush(bw, count, flushInterval, lastFlushTime);
-            if (count >= opts.num)
-              break out;
-          }
-
-          // create one big linked list, this makes all of the first inserts
-          // point to something
-          for (int index = 0; index < flushInterval - 1; index++) {
-            Mutation m = genMutation(firstRows[index], firstColFams[index], firstColQuals[index],
-                cv, ingestInstanceId, count, genRow(prevRows[index + 1]), opts.checksum);
-            count++;
-            bw.addMutation(m);
-          }
-          lastFlushTime = flush(bw, count, flushInterval, lastFlushTime);
-          if (count >= opts.num)
-            break out;
-        }
-
-        bw.close();
-      }
-    }
-  }
-
-  private static long flush(BatchWriter bw, long count, final int flushInterval, long lastFlushTime)
-      throws MutationsRejectedException {
-    long t1 = System.currentTimeMillis();
-    bw.flush();
-    long t2 = System.currentTimeMillis();
-    System.out.printf("FLUSH %d %d %d %d %d%n", t2, (t2 - lastFlushTime), (t2 - t1), count,
-        flushInterval);
-    lastFlushTime = t2;
-    return lastFlushTime;
-  }
-
-  public static Mutation genMutation(long rowLong, int cfInt, int cqInt, ColumnVisibility cv,
-      byte[] ingestInstanceId, long count, byte[] prevRow, boolean checksum) {
-    // Adler32 is supposed to be faster, but according to wikipedia is not good for small data....
-    // so used CRC32 instead
-    CRC32 cksum = null;
-
-    byte[] rowString = genRow(rowLong);
-
-    byte[] cfString = FastFormat.toZeroPaddedString(cfInt, 4, 16, EMPTY_BYTES);
-    byte[] cqString = FastFormat.toZeroPaddedString(cqInt, 4, 16, EMPTY_BYTES);
-
-    if (checksum) {
-      cksum = new CRC32();
-      cksum.update(rowString);
-      cksum.update(cfString);
-      cksum.update(cqString);
-      cksum.update(cv.getExpression());
-    }
-
-    Mutation m = new Mutation(new Text(rowString));
-
-    m.put(new Text(cfString), new Text(cqString), cv,
-        createValue(ingestInstanceId, count, prevRow, cksum));
-    return m;
-  }
-
-  public static final long genLong(long min, long max, Random r) {
-    return ((r.nextLong() & 0x7fffffffffffffffL) % (max - min)) + min;
-  }
-
-  static final byte[] genRow(long min, long max, Random r) {
-    return genRow(genLong(min, max, r));
-  }
-
-  static final byte[] genRow(long rowLong) {
-    return FastFormat.toZeroPaddedString(rowLong, 16, 16, EMPTY_BYTES);
-  }
-
-  private static Value createValue(byte[] ingestInstanceId, long count, byte[] prevRow,
-      Checksum cksum) {
-    int dataLen = ingestInstanceId.length + 16 + (prevRow == null ? 0 : prevRow.length) + 3;
-    if (cksum != null)
-      dataLen += 8;
-    byte[] val = new byte[dataLen];
-    System.arraycopy(ingestInstanceId, 0, val, 0, ingestInstanceId.length);
-    int index = ingestInstanceId.length;
-    val[index++] = ':';
-    int added = FastFormat.toZeroPaddedString(val, index, count, 16, 16, EMPTY_BYTES);
-    if (added != 16)
-      throw new RuntimeException(" " + added);
-    index += 16;
-    val[index++] = ':';
-    if (prevRow != null) {
-      System.arraycopy(prevRow, 0, val, index, prevRow.length);
-      index += prevRow.length;
-    }
-
-    val[index++] = ':';
-
-    if (cksum != null) {
-      cksum.update(val, 0, index);
-      cksum.getValue();
-      FastFormat.toZeroPaddedString(val, index, cksum.getValue(), 8, 16, EMPTY_BYTES);
-    }
-
-    // System.out.println("val "+new String(val));
-
-    return new Value(val);
-  }
-}
diff --git a/test/src/main/java/org/apache/accumulo/test/performance/ContinuousOpts.java b/test/src/main/java/org/apache/accumulo/test/performance/ContinuousOpts.java
deleted file mode 100644
index dfbfafa..0000000
--- a/test/src/main/java/org/apache/accumulo/test/performance/ContinuousOpts.java
+++ /dev/null
@@ -1,57 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.test.performance;
-
-import com.beust.jcommander.IStringConverter;
-import com.beust.jcommander.Parameter;
-
-/**
- * Common CLI arguments for the Continuous Ingest suite.
- */
-public class ContinuousOpts {
-
-  public static class ShortConverter implements IStringConverter<Short> {
-    @Override
-    public Short convert(String value) {
-      return Short.valueOf(value);
-    }
-  }
-
-  @Parameter(names = "--min", description = "lowest random row number to use")
-  long min = 0;
-
-  @Parameter(names = "--max", description = "maximum random row number to use")
-  long max = Long.MAX_VALUE;
-
-  @Parameter(names = "--num", description = "the number of entries to ingest")
-  long num = Long.MAX_VALUE;
-
-  @Parameter(names = "--maxColF", description = "maximum column family value to use",
-      converter = ShortConverter.class)
-  short maxColF = Short.MAX_VALUE;
-
-  @Parameter(names = "--maxColQ", description = "maximum column qualifier value to use",
-      converter = ShortConverter.class)
-  short maxColQ = Short.MAX_VALUE;
-
-  @Parameter(names = "--addCheckSum", description = "turn on checksums")
-  boolean checksum = false;
-
-  @Parameter(names = "--visibilities",
-      description = "read the visibilities to ingest with from a file")
-  String visFile = null;
-}
diff --git a/test/src/main/java/org/apache/accumulo/test/performance/NullTserver.java b/test/src/main/java/org/apache/accumulo/test/performance/NullTserver.java
index be015dd..6a94f96 100644
--- a/test/src/main/java/org/apache/accumulo/test/performance/NullTserver.java
+++ b/test/src/main/java/org/apache/accumulo/test/performance/NullTserver.java
@@ -52,6 +52,7 @@
 import org.apache.accumulo.core.dataImpl.thrift.TSummaryRequest;
 import org.apache.accumulo.core.dataImpl.thrift.UpdateErrors;
 import org.apache.accumulo.core.master.thrift.TabletServerStatus;
+import org.apache.accumulo.core.metadata.MetadataTable;
 import org.apache.accumulo.core.securityImpl.thrift.TCredentials;
 import org.apache.accumulo.core.tabletserver.thrift.ActiveCompaction;
 import org.apache.accumulo.core.tabletserver.thrift.ActiveScan;
@@ -292,7 +293,7 @@
     // modify metadata
     int zkTimeOut =
         (int) DefaultConfiguration.getInstance().getTimeInMillis(Property.INSTANCE_ZK_TIMEOUT);
-    SiteConfiguration siteConfig = new SiteConfiguration();
+    var siteConfig = SiteConfiguration.auto();
     ServerContext context = new ServerContext(siteConfig, opts.iname, opts.keepers, zkTimeOut);
     TransactionWatcher watcher = new TransactionWatcher(context);
     ThriftClientHandler tch = new ThriftClientHandler(context, watcher);
@@ -309,7 +310,7 @@
     // read the locations for the table
     Range tableRange = new KeyExtent(tableId, null, null).toMetadataRange();
     List<Assignment> assignments = new ArrayList<>();
-    try (MetaDataTableScanner s = new MetaDataTableScanner(context, tableRange)) {
+    try (var s = new MetaDataTableScanner(context, tableRange, MetadataTable.NAME)) {
       long randomSessionID = opts.port;
       TServerInstance instance = new TServerInstance(addr, randomSessionID);
 
diff --git a/test/src/main/java/org/apache/accumulo/test/performance/RollWALPerformanceIT.java b/test/src/main/java/org/apache/accumulo/test/performance/RollWALPerformanceIT.java
deleted file mode 100644
index 9d6d41e..0000000
--- a/test/src/main/java/org/apache/accumulo/test/performance/RollWALPerformanceIT.java
+++ /dev/null
@@ -1,123 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.test.performance;
-
-import static org.junit.Assert.assertTrue;
-import static org.junit.Assume.assumeFalse;
-
-import java.util.SortedSet;
-import java.util.TreeSet;
-
-import org.apache.accumulo.core.client.Accumulo;
-import org.apache.accumulo.core.client.AccumuloClient;
-import org.apache.accumulo.core.conf.Property;
-import org.apache.accumulo.core.metadata.MetadataTable;
-import org.apache.accumulo.core.metadata.RootTable;
-import org.apache.accumulo.minicluster.ServerType;
-import org.apache.accumulo.miniclusterImpl.MiniAccumuloConfigImpl;
-import org.apache.accumulo.test.categories.MiniClusterOnlyTests;
-import org.apache.accumulo.test.categories.PerformanceTests;
-import org.apache.accumulo.test.functional.ConfigurableMacBase;
-import org.apache.accumulo.test.mrit.IntegrationTestMapReduce;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.io.Text;
-import org.junit.BeforeClass;
-import org.junit.Test;
-import org.junit.experimental.categories.Category;
-
-@Category({MiniClusterOnlyTests.class, PerformanceTests.class})
-public class RollWALPerformanceIT extends ConfigurableMacBase {
-
-  @BeforeClass
-  public static void checkMR() {
-    assumeFalse(IntegrationTestMapReduce.isMapReduce());
-  }
-
-  @Override
-  protected void configure(MiniAccumuloConfigImpl cfg, Configuration hadoopCoreSite) {
-    cfg.setProperty(Property.TSERV_WAL_REPLICATION, "1");
-    cfg.setProperty(Property.TSERV_WALOG_MAX_SIZE, "5M");
-    cfg.setProperty(Property.TSERV_WALOG_MAX_REFERENCED, "100");
-    cfg.setProperty(Property.GC_CYCLE_START, "1s");
-    cfg.setProperty(Property.GC_CYCLE_DELAY, "1s");
-    cfg.useMiniDFS(true);
-  }
-
-  @Override
-  protected int defaultTimeoutSeconds() {
-    return 5 * 60;
-  }
-
-  private long ingest(AccumuloClient c) throws Exception {
-    final String tableName = getUniqueNames(1)[0];
-
-    log.info("Creating the table");
-    c.tableOperations().create(tableName);
-
-    log.info("Splitting the table");
-    final long SPLIT_COUNT = 100;
-    final long distance = Long.MAX_VALUE / SPLIT_COUNT;
-    final SortedSet<Text> splits = new TreeSet<>();
-    for (int i = 1; i < SPLIT_COUNT; i++) {
-      splits.add(new Text(String.format("%016x", i * distance)));
-    }
-    c.tableOperations().addSplits(tableName, splits);
-
-    log.info("Waiting for balance");
-    c.instanceOperations().waitForBalance();
-
-    log.info("Starting ingest");
-    final long start = System.nanoTime();
-    // Load 50K 100 byte entries
-    ContinuousIngest.main(new String[] {"-c", cluster.getClientPropsPath(), "--table", tableName,
-        "--num", Long.toString(50 * 1000)});
-    final long result = System.nanoTime() - start;
-    log.debug(String.format("Finished in %,d ns", result));
-    log.debug("Dropping table");
-    c.tableOperations().delete(tableName);
-    return result;
-  }
-
-  private long getAverage(AccumuloClient c) throws Exception {
-    final int REPEAT = 3;
-    long totalTime = 0;
-    for (int i = 0; i < REPEAT; i++) {
-      totalTime += ingest(c);
-    }
-    return totalTime / REPEAT;
-  }
-
-  @Test
-  public void testWalPerformanceOnce() throws Exception {
-    try (AccumuloClient c = Accumulo.newClient().from(getClientProperties()).build()) {
-      // get time with a small WAL, which will cause many WAL roll-overs
-      long avg1 = getAverage(c);
-      // use a bigger WAL max size to eliminate WAL roll-overs
-      c.instanceOperations().setProperty(Property.TSERV_WALOG_MAX_SIZE.getKey(), "1G");
-      c.tableOperations().flush(MetadataTable.NAME, null, null, true);
-      c.tableOperations().flush(RootTable.NAME, null, null, true);
-      getCluster().getClusterControl().stop(ServerType.TABLET_SERVER);
-      getCluster().start();
-      long avg2 = getAverage(c);
-      log.info(String.format("Average run time with small WAL %,d with large WAL %,d", avg1, avg2));
-      assertTrue(avg1 > avg2);
-      double percent = (100. * avg1) / avg2;
-      log.info(String.format("Percent of large log: %.2f%%", percent));
-    }
-  }
-
-}
diff --git a/test/src/main/java/org/apache/accumulo/test/performance/scan/CollectTabletStats.java b/test/src/main/java/org/apache/accumulo/test/performance/scan/CollectTabletStats.java
index 092c333..118e164 100644
--- a/test/src/main/java/org/apache/accumulo/test/performance/scan/CollectTabletStats.java
+++ b/test/src/main/java/org/apache/accumulo/test/performance/scan/CollectTabletStats.java
@@ -388,8 +388,10 @@
     return tabletsToTest;
   }
 
-  private static List<FileRef> getTabletFiles(ServerContext context, KeyExtent ke) {
-    return new ArrayList<>(MetadataTableUtil.getDataFileSizes(ke, context).keySet());
+  private static List<FileRef> getTabletFiles(ServerContext context, KeyExtent ke)
+      throws IOException {
+    return new ArrayList<>(
+        MetadataTableUtil.getFileAndLogEntries(context, ke).getSecond().keySet());
   }
 
   private static void reportHdfsBlockLocations(ServerContext context, List<FileRef> files)
diff --git a/test/src/main/java/org/apache/accumulo/test/replication/MockReplicaSystem.java b/test/src/main/java/org/apache/accumulo/test/replication/MockReplicaSystem.java
index 5920d6f..b2192ad 100644
--- a/test/src/main/java/org/apache/accumulo/test/replication/MockReplicaSystem.java
+++ b/test/src/main/java/org/apache/accumulo/test/replication/MockReplicaSystem.java
@@ -24,7 +24,6 @@
 import org.apache.accumulo.server.replication.ReplicaSystem;
 import org.apache.accumulo.server.replication.ReplicaSystemHelper;
 import org.apache.accumulo.server.replication.proto.Replication.Status;
-import org.apache.commons.lang3.StringUtils;
 import org.apache.hadoop.fs.Path;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -86,7 +85,7 @@
 
   @Override
   public void configure(ServerContext context, String configuration) {
-    if (StringUtils.isBlank(configuration)) {
+    if (configuration.isBlank()) {
       log.debug("No configuration, using default sleep of {}", sleep);
       return;
     }
diff --git a/test/src/main/java/org/apache/accumulo/test/replication/StatusMakerIT.java b/test/src/main/java/org/apache/accumulo/test/replication/StatusMakerIT.java
index 205f40f..e66fff9 100644
--- a/test/src/main/java/org/apache/accumulo/test/replication/StatusMakerIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/replication/StatusMakerIT.java
@@ -121,7 +121,7 @@
         TableId tableId = StatusSection.getTableId(entry.getKey());
 
         assertTrue("Found unexpected file: " + file, files.contains(file.toString()));
-        assertEquals(fileToTableId.get(file.toString()), new Integer(tableId.canonical()));
+        assertEquals(fileToTableId.get(file.toString()), Integer.valueOf(tableId.canonical()));
         timeCreated = fileToTimeCreated.get(file.toString());
         assertNotNull(timeCreated);
         assertEquals(StatusUtil.fileCreated(timeCreated), Status.parseFrom(entry.getValue().get()));
diff --git a/test/src/main/java/org/apache/accumulo/test/replication/WorkMakerIT.java b/test/src/main/java/org/apache/accumulo/test/replication/WorkMakerIT.java
index f3988df..0c39b53 100644
--- a/test/src/main/java/org/apache/accumulo/test/replication/WorkMakerIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/replication/WorkMakerIT.java
@@ -47,7 +47,6 @@
 import org.junit.Before;
 import org.junit.Test;
 
-import com.google.common.collect.ImmutableMap;
 import com.google.common.collect.Iterables;
 
 public class WorkMakerIT extends ConfigurableMacBase {
@@ -116,7 +115,7 @@
       expected = new ReplicationTarget("remote_cluster_1", "4", tableId);
       workMaker.setBatchWriter(bw);
       workMaker.addWorkRecord(new Text(file), StatusUtil.fileCreatedValue(timeCreated),
-          ImmutableMap.of("remote_cluster_1", "4"), tableId);
+          Map.of("remote_cluster_1", "4"), tableId);
     }
 
     // Scan over just the WorkSection
@@ -158,8 +157,8 @@
 
       MockWorkMaker workMaker = new MockWorkMaker(client);
 
-      Map<String,String> targetClusters = ImmutableMap.of("remote_cluster_1", "4",
-          "remote_cluster_2", "6", "remote_cluster_3", "8");
+      Map<String,String> targetClusters =
+          Map.of("remote_cluster_1", "4", "remote_cluster_2", "6", "remote_cluster_3", "8");
 
       for (Entry<String,String> cluster : targetClusters.entrySet()) {
         expectedTargets.add(new ReplicationTarget(cluster.getKey(), cluster.getValue(), tableId));
diff --git a/test/src/main/java/org/apache/accumulo/test/server/security/SystemCredentialsIT.java b/test/src/main/java/org/apache/accumulo/test/server/security/SystemCredentialsIT.java
index 222d757..7f35548 100644
--- a/test/src/main/java/org/apache/accumulo/test/server/security/SystemCredentialsIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/server/security/SystemCredentialsIT.java
@@ -58,12 +58,13 @@
   }
 
   public static void main(final String[] args) throws AccumuloException, TableNotFoundException {
-    SiteConfiguration siteConfig = new SiteConfiguration();
+    var siteConfig = SiteConfiguration.auto();
     try (ServerContext context = new ServerContext(siteConfig)) {
       Credentials creds;
       String badInstanceID = SystemCredentials.class.getName();
-      if (args.length < 2)
+      if (args.length < 2) {
         throw new RuntimeException("Incorrect usage; expected to be run by test only");
+      }
       switch (args[0]) {
         case "bad":
           creds = SystemCredentials.get(badInstanceID, siteConfig);
diff --git a/test/src/main/java/org/apache/accumulo/test/upgrade/GCUpgrade9to10TestIT.java b/test/src/main/java/org/apache/accumulo/test/upgrade/GCUpgrade9to10TestIT.java
new file mode 100644
index 0000000..b4b231a
--- /dev/null
+++ b/test/src/main/java/org/apache/accumulo/test/upgrade/GCUpgrade9to10TestIT.java
@@ -0,0 +1,219 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.test.upgrade;
+
+import static org.apache.accumulo.fate.util.UtilWaitThread.sleepUninterruptibly;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNull;
+
+import java.util.HashMap;
+import java.util.Map;
+import java.util.TreeMap;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.accumulo.core.Constants;
+import org.apache.accumulo.core.client.Accumulo;
+import org.apache.accumulo.core.client.AccumuloClient;
+import org.apache.accumulo.core.client.BatchWriter;
+import org.apache.accumulo.core.client.Scanner;
+import org.apache.accumulo.core.conf.Property;
+import org.apache.accumulo.core.data.Mutation;
+import org.apache.accumulo.core.data.Range;
+import org.apache.accumulo.core.metadata.schema.Ample;
+import org.apache.accumulo.core.metadata.schema.MetadataSchema;
+import org.apache.accumulo.core.security.Authorizations;
+import org.apache.accumulo.core.security.TablePermission;
+import org.apache.accumulo.fate.zookeeper.ZooLock;
+import org.apache.accumulo.fate.zookeeper.ZooReaderWriter;
+import org.apache.accumulo.master.upgrade.Upgrader9to10;
+import org.apache.accumulo.minicluster.MemoryUnit;
+import org.apache.accumulo.minicluster.ServerType;
+import org.apache.accumulo.miniclusterImpl.MiniAccumuloConfigImpl;
+import org.apache.accumulo.miniclusterImpl.ProcessNotFoundException;
+import org.apache.accumulo.test.functional.ConfigurableMacBase;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.RawLocalFileSystem;
+import org.apache.hadoop.io.Text;
+import org.apache.zookeeper.KeeperException;
+import org.junit.Test;
+
+import com.google.common.collect.Iterators;
+
+public class GCUpgrade9to10TestIT extends ConfigurableMacBase {
+  private static final String OUR_SECRET = "itsreallysecret";
+  private static final String OLDDELPREFIX = "~del";
+  private static final Upgrader9to10 upgrader = new Upgrader9to10();
+
+  @Override
+  public int defaultTimeoutSeconds() {
+    return 5 * 60;
+  }
+
+  @Override
+  public void configure(MiniAccumuloConfigImpl cfg, Configuration hadoopCoreSite) {
+    cfg.setProperty(Property.INSTANCE_ZK_TIMEOUT, "15s");
+    cfg.setProperty(Property.INSTANCE_SECRET, OUR_SECRET);
+    cfg.setDefaultMemory(64, MemoryUnit.MEGABYTE);
+    cfg.setMemory(ServerType.MASTER, 16, MemoryUnit.MEGABYTE);
+    cfg.setMemory(ServerType.ZOOKEEPER, 32, MemoryUnit.MEGABYTE);
+    cfg.setProperty(Property.GC_CYCLE_START, "1");
+    cfg.setProperty(Property.GC_CYCLE_DELAY, "1");
+    cfg.setProperty(Property.GC_PORT, "0");
+    cfg.setProperty(Property.TSERV_MAXMEM, "5K");
+    cfg.setProperty(Property.TSERV_MAJC_DELAY, "1");
+
+    // use raw local file system so walogs sync and flush will work
+    hadoopCoreSite.set("fs.file.impl", RawLocalFileSystem.class.getName());
+  }
+
+  private void killMacGc() throws ProcessNotFoundException, InterruptedException, KeeperException {
+    // kill gc started by MAC
+    getCluster().killProcess(ServerType.GARBAGE_COLLECTOR,
+        getCluster().getProcesses().get(ServerType.GARBAGE_COLLECTOR).iterator().next());
+    // delete the lock in ZooKeeper if present; this allows the next GC to start quickly
+    String path = getServerContext().getZooKeeperRoot() + Constants.ZGC_LOCK;
+    ZooReaderWriter zk = new ZooReaderWriter(cluster.getZooKeepers(), 30000, OUR_SECRET);
+    try {
+      ZooLock.deleteLock(zk, path);
+    } catch (IllegalStateException e) {
+      log.error("Unable to delete ZooLock for mini accumulo-gc", e);
+    }
+
+    assertNull(getCluster().getProcesses().get(ServerType.GARBAGE_COLLECTOR));
+  }
+
+  @Test
+  public void gcUpgradeRootTableDeletesIT() throws Exception {
+    gcUpgradeDeletesTest(Ample.DataLevel.METADATA, 3);
+  }
+
+  @Test
+  public void gcUpgradeMetadataTableDeletesIT() throws Exception {
+    gcUpgradeDeletesTest(Ample.DataLevel.USER, 3);
+  }
+
+  @Test
+  public void gcUpgradeNoDeletesIT() throws Exception {
+    gcUpgradeDeletesTest(Ample.DataLevel.METADATA, 0);
+  }
+
+  /**
+   * An out-of-memory condition is hard to trigger here because the minicluster needs a minimum
+   * amount of memory to start. If necessary, CANDIDATE_MEMORY_PERCENTAGE in
+   * {@link org.apache.accumulo.master.upgrade.Upgrader9to10} can be adjusted.
+   */
+  @Test
+  public void gcUpgradeOutofMemoryTest() throws Exception {
+    killMacGc(); // we do not want anything deleted
+
+    int somebignumber = 100000;
+    String longpathname = "aaaaaaaaaabbbbbbbbbbccccccccccddddddddddeeeeeeeeee"
+        + "ffffffffffgggggggggghhhhhhhhhhiiiiiiiiiijjjjjjjjjj"
+        + "kkkkkkkkkkkkkkkkkklllllllllllllllllllllmmmmmmmmmmmmmmmmmnnnnnnnnnnnnnnnn";
+    longpathname += longpathname; // make it even longer
+    Ample.DataLevel level = Ample.DataLevel.USER;
+
+    log.info("Filling metadata table with lots of bogus delete flags");
+    try (AccumuloClient c = Accumulo.newClient().from(getClientProperties()).build()) {
+      addEntries(c, level.metaTable(), somebignumber, longpathname);
+
+      sleepUninterruptibly(1, TimeUnit.SECONDS);
+      upgrader.upgradeFileDeletes(getServerContext(), level);
+
+      sleepUninterruptibly(1, TimeUnit.SECONDS);
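+      // after the upgrade, every old ~del entry should have been rewritten into the DeletesSection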
+      Range range = MetadataSchema.DeletesSection.getRange();
+      Scanner scanner = c.createScanner(level.metaTable(), Authorizations.EMPTY);
+      scanner.setRange(range);
+      assertEquals(somebignumber, Iterators.size(scanner.iterator()));
+    }
+  }
+
+  private void gcUpgradeDeletesTest(Ample.DataLevel level, int count) throws Exception {
+    killMacGc(); // we do not want anything deleted
+
+    log.info("Testing delete upgrades for {}", level.metaTable());
+    try (AccumuloClient c = Accumulo.newClient().from(getClientProperties()).build()) {
+
+      Map<String,String> expected = addEntries(c, level.metaTable(), count, "somefile");
+      Map<String,String> actual = new HashMap<>();
+
+      sleepUninterruptibly(1, TimeUnit.SECONDS);
+      upgrader.upgradeFileDeletes(getServerContext(), level);
+      sleepUninterruptibly(1, TimeUnit.SECONDS);
+      Range range = MetadataSchema.DeletesSection.getRange();
+
+      Scanner scanner = c.createScanner(level.metaTable(), Authorizations.EMPTY);
+      scanner.setRange(range);
+      scanner.iterator().forEachRemaining(entry -> {
+        actual.put(entry.getKey().getRow().toString(), entry.getValue().toString());
+      });
+
+      assertEquals(expected, actual);
+
+      // ensure idempotence - running the upgrade again should change nothing, because there is
+      // nothing left to convert
+      upgrader.upgradeFileDeletes(getServerContext(), level);
+      scanner = c.createScanner(level.metaTable(), Authorizations.EMPTY);
+      scanner.setRange(range);
+      actual.clear();
+      scanner.iterator().forEachRemaining(entry -> {
+        actual.put(entry.getKey().getRow().toString(), entry.getValue().toString());
+      });
+      assertEquals(expected, actual);
+    }
+  }
+
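+  /**
+   * Create a delete flag entry using the pre-2.0 {@code ~del} row prefix.
+   */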
+  private Mutation createOldDelMutation(String path, String cf, String cq, String val) {
+    Text row = new Text(OLDDELPREFIX + path);
+    Mutation delFlag = new Mutation(row);
+    delFlag.put(cf, cq, val);
+    return delFlag;
+  }
+
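+  /**
+   * Write {@code count} old-format delete flags and return the rows and values expected in the
+   * DeletesSection after the upgrade runs.
+   */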
+  private Map<String,String> addEntries(AccumuloClient client, String table, int count,
+      String filename) throws Exception {
+    client.securityOperations().grantTablePermission(client.whoami(), table, TablePermission.WRITE);
+    Map<String,String> expected = new TreeMap<>();
+    try (BatchWriter bw = client.createBatchWriter(table)) {
+      for (int i = 0; i < count; ++i) {
+        String longpath = String.format("hdfs://localhost:8020/%020d/%s", i, filename);
+        Mutation delFlag = createOldDelMutation(longpath, "", "", "");
+        bw.addMutation(delFlag);
+        expected.put(MetadataSchema.DeletesSection.encodeRow(longpath),
+            Upgrader9to10.UPGRADED.toString());
+      }
+      return expected;
+    }
+  }
+
+}
diff --git a/test/src/main/java/org/apache/accumulo/test/util/CertUtils.java b/test/src/main/java/org/apache/accumulo/test/util/CertUtils.java
index 349fad5..2ed1134 100644
--- a/test/src/main/java/org/apache/accumulo/test/util/CertUtils.java
+++ b/test/src/main/java/org/apache/accumulo/test/util/CertUtils.java
@@ -127,9 +127,9 @@
     @SuppressFBWarnings(value = "PATH_TRAVERSAL_IN", justification = "path provided by test")
     public SiteConfiguration getSiteConfiguration() {
       if (accumuloPropsFile == null) {
-        return new SiteConfiguration();
+        return SiteConfiguration.auto();
       } else {
-        return new SiteConfiguration(new File(accumuloPropsFile));
+        return SiteConfiguration.fromFile(new File(accumuloPropsFile)).build();
       }
     }
   }
diff --git a/test/src/main/java/org/apache/accumulo/test/util/SerializationUtil.java b/test/src/main/java/org/apache/accumulo/test/util/SerializationUtil.java
index d68ad47..28273e6 100644
--- a/test/src/main/java/org/apache/accumulo/test/util/SerializationUtil.java
+++ b/test/src/main/java/org/apache/accumulo/test/util/SerializationUtil.java
@@ -65,8 +65,8 @@
           classname + " is not a subclass of " + parentClass.getName(), e);
     }
     try {
-      return cm.newInstance();
-    } catch (InstantiationException | IllegalAccessException e) {
+      return cm.getDeclaredConstructor().newInstance();
+    } catch (ReflectiveOperationException e) {
       throw new IllegalArgumentException("can't instantiate new instance of " + cm.getName(), e);
     }
   }
diff --git a/test/src/test/resources/hadoop-metrics2-accumulo.properties b/test/src/main/resources/hadoop-metrics2-accumulo.properties
similarity index 86%
rename from test/src/test/resources/hadoop-metrics2-accumulo.properties
rename to test/src/main/resources/hadoop-metrics2-accumulo.properties
index e2eb761..e869144 100644
--- a/test/src/test/resources/hadoop-metrics2-accumulo.properties
+++ b/test/src/main/resources/hadoop-metrics2-accumulo.properties
@@ -31,13 +31,10 @@
 accumulo.sink.file-all.class=org.apache.hadoop.metrics2.sink.FileSink
 accumulo.sink.file-all.filename=./target/it.all.metrics
 
-accumulo.sink.test-sink.class=org.apache.accumulo.test.functional.util.Metrics2TestSink
-accumulo.sink.test-sink.context=master
-accumulo.sink.test-sink.filename=test.metrics
-accumulo.sink.test-sink.period=7
-
-# accumulo.sink.test-sink.context=*
-# accumulo.sink.test-sink.period=5
+accumulo.sink.file-gc.class=org.apache.hadoop.metrics2.sink.FileSink
+accumulo.sink.file-gc.context=accgc
+accumulo.sink.file-gc.filename=./target/accgc.metrics
+accumulo.sink.file-gc.period=5
 
 # File sink for tserver metrics
 # accumulo.sink.file-tserver.class=org.apache.hadoop.metrics2.sink.FileSink
diff --git a/test/src/test/java/org/apache/accumulo/test/constraints/AlphaNumKeyConstraintTest.java b/test/src/test/java/org/apache/accumulo/test/constraints/AlphaNumKeyConstraintTest.java
index 042ee77..2f31ffc 100644
--- a/test/src/test/java/org/apache/accumulo/test/constraints/AlphaNumKeyConstraintTest.java
+++ b/test/src/test/java/org/apache/accumulo/test/constraints/AlphaNumKeyConstraintTest.java
@@ -19,13 +19,13 @@
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertNull;
 
+import java.util.List;
+
 import org.apache.accumulo.core.data.Mutation;
 import org.apache.accumulo.core.data.Value;
 import org.apache.hadoop.io.Text;
 import org.junit.Test;
 
-import com.google.common.collect.ImmutableList;
-
 public class AlphaNumKeyConstraintTest {
 
   private AlphaNumKeyConstraint ankc = new AlphaNumKeyConstraint();
@@ -39,9 +39,8 @@
     // Check that violations are in row, cf, cq order
     Mutation badMutation = new Mutation(new Text("Row#1"));
     badMutation.put(new Text("Colf$2"), new Text("Colq%3"), new Value("value".getBytes()));
-    assertEquals(
-        ImmutableList.of(AlphaNumKeyConstraint.NON_ALPHA_NUM_ROW,
-            AlphaNumKeyConstraint.NON_ALPHA_NUM_COLF, AlphaNumKeyConstraint.NON_ALPHA_NUM_COLQ),
+    assertEquals(List.of(AlphaNumKeyConstraint.NON_ALPHA_NUM_ROW,
+        AlphaNumKeyConstraint.NON_ALPHA_NUM_COLF, AlphaNumKeyConstraint.NON_ALPHA_NUM_COLQ),
         ankc.check(null, badMutation));
   }
 
diff --git a/test/src/test/java/org/apache/accumulo/test/iterator/RegExTest.java b/test/src/test/java/org/apache/accumulo/test/iterator/RegExTest.java
index 890b823..5825e4f 100644
--- a/test/src/test/java/org/apache/accumulo/test/iterator/RegExTest.java
+++ b/test/src/test/java/org/apache/accumulo/test/iterator/RegExTest.java
@@ -32,8 +32,6 @@
 import org.junit.BeforeClass;
 import org.junit.Test;
 
-import com.google.common.collect.ImmutableSet;
-
 public class RegExTest {
 
   private static TreeMap<Key,Value> data = new TreeMap<>();
@@ -42,11 +40,13 @@
   public static void setupTests() {
 
     ArrayList<Character> chars = new ArrayList<>();
-    for (char c = 'a'; c <= 'z'; c++)
+    for (char c = 'a'; c <= 'z'; c++) {
       chars.add(c);
+    }
 
-    for (char c = '0'; c <= '9'; c++)
+    for (char c = '0'; c <= '9'; c++) {
       chars.add(c);
+    }
 
     // insert some data into accumulo
     for (Character rc : chars) {
@@ -61,8 +61,9 @@
   }
 
   private void check(String regex, String val) throws Exception {
-    if (regex != null && !val.matches(regex))
+    if (regex != null && !val.matches(regex)) {
       throw new Exception(" " + val + " does not match " + regex);
+    }
   }
 
   private void check(String regex, Text val) throws Exception {
@@ -112,7 +113,7 @@
       String valRegEx, int expected) throws Exception {
 
     SortedKeyValueIterator<Key,Value> source = new SortedMapIterator(data);
-    Set<ByteSequence> es = ImmutableSet.of();
+    Set<ByteSequence> es = Set.of();
     IteratorSetting is = new IteratorSetting(50, "regex", RegExFilter.class);
     RegExFilter.setRegexs(is, rowRegEx, cfRegEx, cqRegEx, valRegEx, false);
     RegExFilter iter = new RegExFilter();
diff --git a/test/src/test/java/org/apache/accumulo/test/metrics/MetricsFileTailerTest.java b/test/src/test/java/org/apache/accumulo/test/metrics/MetricsFileTailerTest.java
new file mode 100644
index 0000000..59359a2
--- /dev/null
+++ b/test/src/test/java/org/apache/accumulo/test/metrics/MetricsFileTailerTest.java
@@ -0,0 +1,106 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.test.metrics;
+
+import static org.junit.Assert.assertTrue;
+
+import java.io.FileWriter;
+import java.io.IOException;
+import java.io.PrintWriter;
+import java.nio.file.FileSystems;
+import java.nio.file.Files;
+
+import org.junit.AfterClass;
+import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+public class MetricsFileTailerTest {
+
+  private static final Logger log = LoggerFactory.getLogger(MetricsFileTailerTest.class);
+
+  private static final String TEST_OUTFILE_NAME = "/tmp/testfile.txt";
+  private static final String SUCCESS = "success";
+
+  @AfterClass
+  public static void cleanup() {
+    try {
+      Files.deleteIfExists(FileSystems.getDefault().getPath(TEST_OUTFILE_NAME));
+    } catch (IOException ex) {
+      log.trace("Failed to clean-up test file " + TEST_OUTFILE_NAME, ex);
+    }
+  }
+
+  /**
+   * Create a file tailer and then write some lines and validate the tailer returns the last line.
+   */
+  @Test
+  public void fileUpdates() {
+
+    MetricsFileTailer tailer = new MetricsFileTailer("foo", TEST_OUTFILE_NAME);
+
+    Thread t = new Thread(tailer);
+    t.start();
+
+    long lastUpdate = tailer.getLastUpdate();
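+    // capture the update marker before writing so a change can be detected below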
+
+    writeToFile();
+
+    boolean passed = false;
+
+    int count = 0;
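+    // poll a few times; the tailer itself only wakes every 5 seconds to check the file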
+    while (count++ < 5) {
+      if (lastUpdate != tailer.getLastUpdate()) {
+        lastUpdate = tailer.getLastUpdate();
+        log.trace("{} - {}", tailer.getLastUpdate(), tailer.getLast());
+        if (SUCCESS.equals(tailer.getLast())) {
+          passed = true;
+          break;
+        }
+      } else {
+        log.trace("no change");
+      }
+      try {
+        Thread.sleep(5_000);
+      } catch (InterruptedException ex) {
+        // preserve the interrupt status rather than swallowing it
+        Thread.currentThread().interrupt();
+      }
+    }
+
+    try {
+      tailer.close();
+    } catch (Exception ex) {
+      log.trace("Failed to close file tailer on " + TEST_OUTFILE_NAME, ex);
+    }
+    assertTrue(passed);
+  }
+
+  /**
+   * Simulate write record(s) to the file.
+   */
+  private void writeToFile() {
+    try (FileWriter writer = new FileWriter(TEST_OUTFILE_NAME, true);
+        PrintWriter printWriter = new PrintWriter(writer)) {
+      printWriter.println("foo");
+      // needs to be last line for test to pass
+      printWriter.println(SUCCESS);
+      printWriter.flush();
+    } catch (IOException ex) {
+      throw new IllegalStateException("failed to write data to test file", ex);
+    }
+  }
+}