Merge branch '1.8'
diff --git a/README.md b/README.md
index fd57394..4cca80d 100644
--- a/README.md
+++ b/README.md
@@ -15,29 +15,31 @@
 limitations under the License.
 -->
 
-Apache Accumulo
-===============
+[![Apache Accumulo][logo]][accumulo]
+--
+[![Build Status][ti]][tl] [![Maven Central][mi]][ml] [![Javadoc][ji]][jl] [![Apache License][li]][ll]
 
-The [Apache Accumulo™][1] sorted, distributed key/value store is a robust,
-scalable, high performance data storage and retrieval system.  Apache Accumulo
-is based on Google's [BigTable][4] design and is built on top of Apache
-[Hadoop][5], [Zookeeper][6], and [Thrift][7]. Apache Accumulo features a few
-novel improvements on the BigTable design in the form of cell-based access
-control and a server-side programming mechanism that can modify key/value pairs
-at various points in the data management process. Other notable improvements
-and feature are outlined [here][8].
+[Apache Accumulo][accumulo] is a sorted, distributed key/value store that
+provides robust, scalable data storage and retrieval.
 
-To install and run an Accumulo binary distribution, follow the [install][2]
-instructions.
+Apache Accumulo is based on Google's [BigTable] design and is built on top of Apache
+[Hadoop], [Zookeeper], and [Thrift].  It has several novel [features] such as cell-based
+access control and a server-side programming mechanism that can modify key/value pairs
+at various points in the data management process.
+
+Installation
+------------
+
+Follow [these instructions][install] to install and run an Accumulo binary distribution.
 
 Documentation
 -------------
 
-Accumulo has the following documentation which is viewable on the [Accumulo website][1]
+Accumulo has the following documentation, which is viewable on the [Accumulo website][accumulo]
 using the links below:
 
-* [User Manual][10] - In-depth developer and administrator documentation.
-* [Examples][11] - Code with corresponding README files that give step by step
+* [User Manual][man-web] - In-depth developer and administrator documentation.
+* [Examples][ex-web] - Code with corresponding README files that give step-by-step
 instructions for running each example.
 
 This documentation can also be found in Accumulo distributions:
@@ -45,15 +47,15 @@
 * **Binary distribution** - The User Manual can be found in the `docs` directory.  The
 Examples Readmes can be found in `docs/examples`. While the source for the Examples is
 not included, the distribution has a jar with the compiled examples. This makes it easy
-to run them after following the [install][2] instructions.
+to run them after following the [install] instructions.
 
-* **Source distribution** - The [Example Source][14], [Example Readmes][15], and
-[User Manual Source][16] can all be found in the source distribution.
+* **Source distribution** - The [Example Source][ex-src], [Example Readmes][rm-src], and
+[User Manual Source][man-src] are available.
 
 Building
 --------
 
-Accumulo uses [Maven][9] to compile, [test][3], and package its source.  The
+Accumulo uses [Maven] to compile, [test], and package its source.  The
 following command will build the binary tar.gz from source.  Note, these
 instructions will not work for the Accumulo binary distribution as it does not
 include source.  If you just want to build without waiting for the tests to
@@ -84,8 +86,8 @@
 the above packages are *not* considered public API.
 
 The following regex matches imports that are *not* Accumulo public API.  This
-regex can be used with [RegexpSingleline][13] to automatically find suspicious
-imports in a project using Accumulo.
+regex can be used with [RegexpSingleline][regex] to automatically find
+suspicious imports in a project using Accumulo.
 
 ```
 import\s+org\.apache\.accumulo\.(.*\.(impl|thrift|crypto)\..*|(?!core|minicluster).*|core\.(?!client|data|security).*)
@@ -93,7 +95,7 @@
 
 The Accumulo project maintains binary compatibility across this API within a
 major release, as defined in the Java Language Specification 3rd ed. Starting
-with Accumulo 1.6.2 and 1.7.0 all API changes will follow [semver 2.0][12]
+with Accumulo 1.6.2 and 1.7.0, all API changes will follow [semver 2.0][semver].
 
 Export Control
 --------------
@@ -125,21 +127,30 @@
 more details on bouncycastle's cryptography features.
 
 
-[1]: http://accumulo.apache.org
-[2]: INSTALL.md
-[3]: TESTING.md
-[4]: https://research.google.com/archive/bigtable.html
-[5]: https://hadoop.apache.org
-[6]: https://zookeeper.apache.org
-[7]: https://thrift.apache.org
-[8]: https://accumulo.apache.org/notable_features
-[9]: https://maven.apache.org
-[10]: https://accumulo.apache.org/latest/accumulo_user_manual
-[11]: https://accumulo.apache.org/latest/examples
-[12]: http://semver.org/spec/v2.0.0
-[13]: http://checkstyle.sourceforge.net/config_regexp.html
-[14]: examples/simple/src/main/java/org/apache/accumulo/examples/simple
-[15]: docs/src/main/resources/examples
-[16]: docs/src/main/asciidoc
+[accumulo]: https://accumulo.apache.org
+[logo]: contrib/accumulo-logo.png
+[install]: INSTALL.md
+[test]: TESTING.md
+[BigTable]: https://research.google.com/archive/bigtable.html
+[Hadoop]: https://hadoop.apache.org
+[Zookeeper]: https://zookeeper.apache.org
+[Thrift]: https://thrift.apache.org
+[features]: https://accumulo.apache.org/notable_features
+[Maven]: https://maven.apache.org
+[man-web]: https://accumulo.apache.org/latest/accumulo_user_manual
+[ex-web]: https://accumulo.apache.org/latest/examples
+[semver]: http://semver.org/spec/v2.0.0
+[regex]: http://checkstyle.sourceforge.net/config_regexp.html
+[ex-src]: examples/simple/src/main/java/org/apache/accumulo/examples/simple
+[rm-src]: docs/src/main/resources/examples
+[man-src]: docs/src/main/asciidoc
+[li]: https://img.shields.io/badge/license-ASL-blue.svg
+[ll]: https://www.apache.org/licenses/LICENSE-2.0
+[mi]: https://maven-badges.herokuapp.com/maven-central/org.apache.accumulo/accumulo-core/badge.svg
+[ml]: https://maven-badges.herokuapp.com/maven-central/org.apache.accumulo/accumulo-core/
+[ji]: https://javadoc-emblem.rhcloud.com/doc/org.apache.accumulo/accumulo-core/badge.svg
+[jl]: https://www.javadoc.io/doc/org.apache.accumulo/accumulo-core
+[ti]: https://travis-ci.org/apache/accumulo.svg?branch=master
+[tl]: https://travis-ci.org/apache/accumulo
 [java-export]: http://www.oracle.com/us/products/export/export-regulations-345813.html
 [bouncy-faq]: http://www.bouncycastle.org/wiki/display/JA1/Frequently+Asked+Questions
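The import-checking regex in the README hunk above can also be exercised directly with `java.util.regex`; a minimal sketch (the class name and sample import strings are illustrative, not part of the project):

```java
import java.util.regex.Pattern;

// Same expression as in the README, with backslashes doubled for a Java
// string literal. It matches imports that are NOT Accumulo public API.
public class ApiImportCheck {
    static final Pattern NON_PUBLIC_API = Pattern.compile(
        "import\\s+org\\.apache\\.accumulo\\."
            + "(.*\\.(impl|thrift|crypto)\\..*"
            + "|(?!core|minicluster).*"
            + "|core\\.(?!client|data|security).*)");

    static boolean isSuspicious(String importLine) {
        return NON_PUBLIC_API.matcher(importLine).find();
    }

    public static void main(String[] args) {
        // Public API: core.client is allowed, so this is not flagged.
        System.out.println(isSuspicious("import org.apache.accumulo.core.client.Connector;"));
        // Internal: anything under an impl package is flagged.
        System.out.println(isSuspicious("import org.apache.accumulo.core.client.impl.TableOperationsImpl;"));
        // Internal: packages outside core/minicluster are flagged.
        System.out.println(isSuspicious("import org.apache.accumulo.server.ServerConstants;"));
    }
}
```

This is the same check a Checkstyle `RegexpSingleline` rule would apply per line of source.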
diff --git a/assemble/pom.xml b/assemble/pom.xml
index 022669a..d0f14a9 100644
--- a/assemble/pom.xml
+++ b/assemble/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.accumulo</groupId>
     <artifactId>accumulo-project</artifactId>
-    <version>1.8.0-SNAPSHOT</version>
+    <version>2.0.0-SNAPSHOT</version>
   </parent>
   <artifactId>accumulo</artifactId>
   <packaging>pom</packaging>
@@ -132,11 +132,6 @@
     </dependency>
     <dependency>
       <groupId>org.apache.accumulo</groupId>
-      <artifactId>accumulo-trace</artifactId>
-      <optional>true</optional>
-    </dependency>
-    <dependency>
-      <groupId>org.apache.accumulo</groupId>
       <artifactId>accumulo-tracer</artifactId>
       <optional>true</optional>
     </dependency>
diff --git a/contrib/accumulo-logo.png b/contrib/accumulo-logo.png
new file mode 100644
index 0000000..5b0f6b4
--- /dev/null
+++ b/contrib/accumulo-logo.png
Binary files differ
diff --git a/core/pom.xml b/core/pom.xml
index 5898b09..44afddb 100644
--- a/core/pom.xml
+++ b/core/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.accumulo</groupId>
     <artifactId>accumulo-project</artifactId>
-    <version>1.8.0-SNAPSHOT</version>
+    <version>2.0.0-SNAPSHOT</version>
   </parent>
   <artifactId>accumulo-core</artifactId>
   <name>Apache Accumulo Core</name>
@@ -40,10 +40,6 @@
       <artifactId>guava</artifactId>
     </dependency>
     <dependency>
-      <groupId>commons-codec</groupId>
-      <artifactId>commons-codec</artifactId>
-    </dependency>
-    <dependency>
       <groupId>commons-collections</groupId>
       <artifactId>commons-collections</artifactId>
     </dependency>
diff --git a/core/src/main/java/org/apache/accumulo/core/Constants.java b/core/src/main/java/org/apache/accumulo/core/Constants.java
index eebd81d..7098d7c 100644
--- a/core/src/main/java/org/apache/accumulo/core/Constants.java
+++ b/core/src/main/java/org/apache/accumulo/core/Constants.java
@@ -22,8 +22,6 @@
 import java.util.Collection;
 import java.util.Collections;
 
-import org.apache.accumulo.core.security.Authorizations;
-
 public class Constants {
 
   public static final String VERSION = FilteredConstants.VERSION;
@@ -109,12 +107,6 @@
   // Security configuration
   public static final String PW_HASH_ALGORITHM = "SHA-256";
 
-  /**
-   * @deprecated since 1.6.0; Use {@link Authorizations#EMPTY} instead
-   */
-  @Deprecated
-  public static final Authorizations NO_AUTHS = Authorizations.EMPTY;
-
   public static final int MAX_DATA_TO_PRINT = 64;
   public static final String CORE_PACKAGE_NAME = "org.apache.accumulo.core";
   public static final String MAPFILE_EXTENSION = "map";
diff --git a/core/src/main/java/org/apache/accumulo/core/cli/ClientOpts.java b/core/src/main/java/org/apache/accumulo/core/cli/ClientOpts.java
index c07c4cb..d78c1b5 100644
--- a/core/src/main/java/org/apache/accumulo/core/cli/ClientOpts.java
+++ b/core/src/main/java/org/apache/accumulo/core/cli/ClientOpts.java
@@ -24,6 +24,7 @@
 import java.util.Map;
 import java.util.Map.Entry;
 import java.util.UUID;
+import java.util.function.Predicate;
 
 import org.apache.accumulo.core.Constants;
 import org.apache.accumulo.core.client.AccumuloException;
@@ -44,7 +45,6 @@
 import org.apache.accumulo.core.security.Authorizations;
 import org.apache.accumulo.core.security.ColumnVisibility;
 import org.apache.accumulo.core.trace.Trace;
-import org.apache.accumulo.core.util.DeprecationUtil;
 import org.apache.accumulo.core.volume.VolumeConfiguration;
 import org.apache.accumulo.core.zookeeper.ZooUtil;
 import org.apache.hadoop.conf.Configuration;
@@ -56,7 +56,6 @@
 import com.beust.jcommander.DynamicParameter;
 import com.beust.jcommander.IStringConverter;
 import com.beust.jcommander.Parameter;
-import com.google.common.base.Predicate;
 
 public class ClientOpts extends Help {
 
@@ -163,9 +162,6 @@
   @Parameter(names = "--debug", description = "turn on TRACE-level log messages")
   public boolean debug = false;
 
-  @Parameter(names = {"-fake", "--mock"}, description = "Use a mock Instance")
-  public boolean mock = false;
-
   @Parameter(names = "--site-file", description = "Read the given accumulo site file to find the accumulo instance")
   public String siteFile = null;
 
@@ -259,8 +255,6 @@
   synchronized public Instance getInstance() {
     if (cachedInstance != null)
       return cachedInstance;
-    if (mock)
-      return cachedInstance = DeprecationUtil.makeMockInstance(instance);
     return cachedInstance = new ZooKeeperInstance(this.getClientConfiguration());
   }
 
@@ -344,10 +338,10 @@
         @Override
         public void getProperties(Map<String,String> props, Predicate<String> filter) {
           for (Entry<String,String> prop : DefaultConfiguration.getInstance())
-            if (filter.apply(prop.getKey()))
+            if (filter.test(prop.getKey()))
               props.put(prop.getKey(), prop.getValue());
           for (Entry<String,String> prop : xml)
-            if (filter.apply(prop.getKey()))
+            if (filter.test(prop.getKey()))
               props.put(prop.getKey(), prop.getValue());
         }
 
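The `ClientOpts` hunk above swaps Guava's `com.google.common.base.Predicate` for `java.util.function.Predicate`, so `filter.apply(...)` becomes `filter.test(...)`. A self-contained sketch of the same filtering idiom (class and property names are illustrative):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Predicate;

// java.util.function.Predicate uses test() where Guava's Predicate used apply().
public class PredicateMigration {
    static Map<String, String> filterProps(Map<String, String> source, Predicate<String> filter) {
        Map<String, String> props = new HashMap<>();
        for (Map.Entry<String, String> prop : source.entrySet()) {
            if (filter.test(prop.getKey())) { // was: filter.apply(prop.getKey())
                props.put(prop.getKey(), prop.getValue());
            }
        }
        return props;
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        conf.put("instance.zookeeper.host", "localhost:2181");
        conf.put("tserver.memory.maps.max", "1G");
        Map<String, String> filtered = filterProps(conf, k -> k.startsWith("instance."));
        System.out.println(filtered.size()); // only the instance.* key survives
    }
}
```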
diff --git a/core/src/main/java/org/apache/accumulo/core/client/ClientConfiguration.java b/core/src/main/java/org/apache/accumulo/core/client/ClientConfiguration.java
index 1b9b380..09d6b42 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/ClientConfiguration.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/ClientConfiguration.java
@@ -97,11 +97,8 @@
     private PropertyType type;
     private String description;
 
-    private Property accumuloProperty = null;
-
     private ClientProperty(Property prop) {
       this(prop.getKey(), prop.getDefaultValue(), prop.getType(), prop.getDescription());
-      accumuloProperty = prop;
     }
 
     private ClientProperty(String key, String defaultValue, PropertyType type, String description) {
@@ -119,11 +116,7 @@
       return defaultValue;
     }
 
-    /**
-     * @deprecated since 1.7.0 This method returns a type that is not part of the public API and not guaranteed to be stable.
-     */
-    @Deprecated
-    public PropertyType getType() {
+    private PropertyType getType() {
       return type;
     }
 
@@ -131,14 +124,6 @@
       return description;
     }
 
-    /**
-     * @deprecated since 1.7.0 This method returns a type that is not part of the public API and not guaranteed to be stable.
-     */
-    @Deprecated
-    public Property getAccumuloProperty() {
-      return accumuloProperty;
-    }
-
     public static ClientProperty getPropertyByKey(String key) {
       for (ClientProperty prop : ClientProperty.values())
         if (prop.getKey().equals(key))
diff --git a/core/src/main/java/org/apache/accumulo/core/client/ClientSideIteratorScanner.java b/core/src/main/java/org/apache/accumulo/core/client/ClientSideIteratorScanner.java
index d4622c6..978d85a 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/ClientSideIteratorScanner.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/ClientSideIteratorScanner.java
@@ -65,21 +65,6 @@
   private long readaheadThreshold = Constants.SCANNER_DEFAULT_READAHEAD_THRESHOLD;
   private SamplerConfiguration iteratorSamplerConfig;
 
-  /**
-   * @deprecated since 1.7.0 was never intended for public use. However this could have been used by anything extending this class.
-   */
-  @Deprecated
-  public class ScannerTranslator extends ScannerTranslatorImpl {
-    public ScannerTranslator(Scanner scanner) {
-      super(scanner, scanner.getSamplerConfiguration());
-    }
-
-    @Override
-    public SortedKeyValueIterator<Key,Value> deepCopy(final IteratorEnvironment env) {
-      return new ScannerTranslator(scanner);
-    }
-  }
-
   private class ClientSideIteratorEnvironment implements IteratorEnvironment {
 
     private SamplerConfiguration samplerConfig;
@@ -284,24 +269,6 @@
     return smi.scanner.getAuthorizations();
   }
 
-  @Deprecated
-  @Override
-  public void setTimeOut(int timeOut) {
-    if (timeOut == Integer.MAX_VALUE)
-      setTimeout(Long.MAX_VALUE, TimeUnit.MILLISECONDS);
-    else
-      setTimeout(timeOut, TimeUnit.SECONDS);
-  }
-
-  @Deprecated
-  @Override
-  public int getTimeOut() {
-    long timeout = getTimeout(TimeUnit.SECONDS);
-    if (timeout >= Integer.MAX_VALUE)
-      return Integer.MAX_VALUE;
-    return (int) timeout;
-  }
-
   @Override
   public void setRange(final Range range) {
     this.range = range;
diff --git a/core/src/main/java/org/apache/accumulo/core/client/Connector.java b/core/src/main/java/org/apache/accumulo/core/client/Connector.java
index e36cc82..95cfd10 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/Connector.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/Connector.java
@@ -50,33 +50,6 @@
   public abstract BatchScanner createBatchScanner(String tableName, Authorizations authorizations, int numQueryThreads) throws TableNotFoundException;
 
   /**
-   * Factory method to create a BatchDeleter connected to Accumulo.
-   *
-   * @param tableName
-   *          the name of the table to query and delete from
-   * @param authorizations
-   *          A set of authorization labels that will be checked against the column visibility of each key in order to filter data. The authorizations passed in
-   *          must be a subset of the accumulo user's set of authorizations. If the accumulo user has authorizations (A1, A2) and authorizations (A2, A3) are
-   *          passed, then an exception will be thrown.
-   * @param numQueryThreads
-   *          the number of concurrent threads to spawn for querying
-   * @param maxMemory
-   *          size in bytes of the maximum memory to batch before writing
-   * @param maxLatency
-   *          size in milliseconds; set to 0 or Long.MAX_VALUE to allow the maximum time to hold a batch before writing
-   * @param maxWriteThreads
-   *          the maximum number of threads to use for writing data to the tablet servers
-   *
-   * @return BatchDeleter object for configuring and deleting
-   * @throws TableNotFoundException
-   *           when the specified table doesn't exist
-   * @deprecated since 1.5.0; Use {@link #createBatchDeleter(String, Authorizations, int, BatchWriterConfig)} instead.
-   */
-  @Deprecated
-  public abstract BatchDeleter createBatchDeleter(String tableName, Authorizations authorizations, int numQueryThreads, long maxMemory, long maxLatency,
-      int maxWriteThreads) throws TableNotFoundException;
-
-  /**
    *
    * @param tableName
    *          the name of the table to query and delete from
@@ -91,7 +64,6 @@
    * @return BatchDeleter object for configuring and deleting
    * @since 1.5.0
    */
-
   public abstract BatchDeleter createBatchDeleter(String tableName, Authorizations authorizations, int numQueryThreads, BatchWriterConfig config)
       throws TableNotFoundException;
 
@@ -100,52 +72,14 @@
    *
    * @param tableName
    *          the name of the table to insert data into
-   * @param maxMemory
-   *          size in bytes of the maximum memory to batch before writing
-   * @param maxLatency
-   *          time in milliseconds; set to 0 or Long.MAX_VALUE to allow the maximum time to hold a batch before writing
-   * @param maxWriteThreads
-   *          the maximum number of threads to use for writing data to the tablet servers
-   *
-   * @return BatchWriter object for configuring and writing data to
-   * @throws TableNotFoundException
-   *           when the specified table doesn't exist
-   * @deprecated since 1.5.0; Use {@link #createBatchWriter(String, BatchWriterConfig)} instead.
-   */
-  @Deprecated
-  public abstract BatchWriter createBatchWriter(String tableName, long maxMemory, long maxLatency, int maxWriteThreads) throws TableNotFoundException;
-
-  /**
-   * Factory method to create a BatchWriter connected to Accumulo.
-   *
-   * @param tableName
-   *          the name of the table to insert data into
    * @param config
    *          configuration used to create batch writer
    * @return BatchWriter object for configuring and writing data to
    * @since 1.5.0
    */
-
   public abstract BatchWriter createBatchWriter(String tableName, BatchWriterConfig config) throws TableNotFoundException;
 
   /**
-   * Factory method to create a Multi-Table BatchWriter connected to Accumulo. Multi-table batch writers can queue data for multiple tables, which is good for
-   * ingesting data into multiple tables from the same source
-   *
-   * @param maxMemory
-   *          size in bytes of the maximum memory to batch before writing
-   * @param maxLatency
-   *          size in milliseconds; set to 0 or Long.MAX_VALUE to allow the maximum time to hold a batch before writing
-   * @param maxWriteThreads
-   *          the maximum number of threads to use for writing data to the tablet servers
-   *
-   * @return MultiTableBatchWriter object for configuring and writing data to
-   * @deprecated since 1.5.0; Use {@link #createMultiTableBatchWriter(BatchWriterConfig)} instead.
-   */
-  @Deprecated
-  public abstract MultiTableBatchWriter createMultiTableBatchWriter(long maxMemory, long maxLatency, int maxWriteThreads);
-
-  /**
    * Factory method to create a Multi-Table BatchWriter connected to Accumulo. Multi-table batch writers can queue data for multiple tables. Also data for
   * multiple tables can be sent to a server in a single batch. It's an efficient way to ingest data into multiple tables from a single process.
    *
@@ -154,7 +88,6 @@
    * @return MultiTableBatchWriter object for configuring and writing data to
    * @since 1.5.0
    */
-
   public abstract MultiTableBatchWriter createMultiTableBatchWriter(BatchWriterConfig config);
 
   /**
diff --git a/core/src/main/java/org/apache/accumulo/core/client/Instance.java b/core/src/main/java/org/apache/accumulo/core/client/Instance.java
index 8a70d4c..3cb8973 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/Instance.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/Instance.java
@@ -16,13 +16,10 @@
  */
 package org.apache.accumulo.core.client;
 
-import java.nio.ByteBuffer;
 import java.util.List;
 
-import org.apache.accumulo.core.client.admin.InstanceOperations;
 import org.apache.accumulo.core.client.security.tokens.AuthenticationToken;
 import org.apache.accumulo.core.client.security.tokens.PasswordToken;
-import org.apache.accumulo.core.conf.AccumuloConfiguration;
 
 /**
  * This class represents the information a client needs to know to connect to an instance of accumulo.
@@ -72,78 +69,6 @@
   int getZooKeepersSessionTimeOut();
 
   /**
-   * Returns a connection to accumulo.
-   *
-   * @param user
-   *          a valid accumulo user
-   * @param pass
-   *          A UTF-8 encoded password. The password may be cleared after making this call.
-   * @return the accumulo Connector
-   * @throws AccumuloException
-   *           when a generic exception occurs
-   * @throws AccumuloSecurityException
-   *           when a user's credentials are invalid
-   * @deprecated since 1.5, use {@link #getConnector(String, AuthenticationToken)} with {@link PasswordToken}
-   */
-  @Deprecated
-  Connector getConnector(String user, byte[] pass) throws AccumuloException, AccumuloSecurityException;
-
-  /**
-   * Returns a connection to accumulo.
-   *
-   * @param user
-   *          a valid accumulo user
-   * @param pass
-   *          A UTF-8 encoded password. The password may be cleared after making this call.
-   * @return the accumulo Connector
-   * @throws AccumuloException
-   *           when a generic exception occurs
-   * @throws AccumuloSecurityException
-   *           when a user's credentials are invalid
-   * @deprecated since 1.5, use {@link #getConnector(String, AuthenticationToken)} with {@link PasswordToken}
-   */
-  @Deprecated
-  Connector getConnector(String user, ByteBuffer pass) throws AccumuloException, AccumuloSecurityException;
-
-  /**
-   * Returns a connection to this instance of accumulo.
-   *
-   * @param user
-   *          a valid accumulo user
-   * @param pass
-   *          If a mutable CharSequence is passed in, it may be cleared after this call.
-   * @return the accumulo Connector
-   * @throws AccumuloException
-   *           when a generic exception occurs
-   * @throws AccumuloSecurityException
-   *           when a user's credentials are invalid
-   * @deprecated since 1.5, use {@link #getConnector(String, AuthenticationToken)} with {@link PasswordToken}
-   */
-  @Deprecated
-  Connector getConnector(String user, CharSequence pass) throws AccumuloException, AccumuloSecurityException;
-
-  /**
-   * Returns the AccumuloConfiguration to use when interacting with this instance.
-   *
-   * @return the AccumuloConfiguration that specifies properties related to interacting with this instance
-   * @deprecated since 1.6.0. This method makes very little sense in the context of the client API and never should have been exposed.
-   * @see InstanceOperations#getSystemConfiguration() for client-side reading of the server-side configuration.
-   */
-  @Deprecated
-  AccumuloConfiguration getConfiguration();
-
-  /**
-   * Set the AccumuloConfiguration to use when interacting with this instance.
-   *
-   * @param conf
-   *          accumulo configuration
-   * @deprecated since 1.6.0. This method makes very little sense in the context of the client API and never should have been exposed.
-   * @see InstanceOperations#setProperty(String, String)
-   */
-  @Deprecated
-  void setConfiguration(AccumuloConfiguration conf);
-
-  /**
    * Returns a connection to this instance of accumulo.
    *
    * @param principal
diff --git a/core/src/main/java/org/apache/accumulo/core/client/IsolatedScanner.java b/core/src/main/java/org/apache/accumulo/core/client/IsolatedScanner.java
index 90e8637..164cb1c 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/IsolatedScanner.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/IsolatedScanner.java
@@ -238,24 +238,6 @@
     return new RowBufferingIterator(scanner, this, range, timeOut, batchSize, readaheadThreshold, bufferFactory);
   }
 
-  @Deprecated
-  @Override
-  public void setTimeOut(int timeOut) {
-    if (timeOut == Integer.MAX_VALUE)
-      setTimeout(Long.MAX_VALUE, TimeUnit.MILLISECONDS);
-    else
-      setTimeout(timeOut, TimeUnit.SECONDS);
-  }
-
-  @Deprecated
-  @Override
-  public int getTimeOut() {
-    long timeout = getTimeout(TimeUnit.SECONDS);
-    if (timeout >= Integer.MAX_VALUE)
-      return Integer.MAX_VALUE;
-    return (int) timeout;
-  }
-
   @Override
   public void setRange(Range range) {
     this.range = range;
diff --git a/core/src/main/java/org/apache/accumulo/core/client/MutationsRejectedException.java b/core/src/main/java/org/apache/accumulo/core/client/MutationsRejectedException.java
index 676957a..8f8720a 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/MutationsRejectedException.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/MutationsRejectedException.java
@@ -16,7 +16,6 @@
  */
 package org.apache.accumulo.core.client;
 
-import java.util.ArrayList;
 import java.util.Collection;
 import java.util.HashMap;
 import java.util.HashSet;
@@ -29,10 +28,6 @@
 import org.apache.accumulo.core.client.security.SecurityErrorCode;
 import org.apache.accumulo.core.data.ConstraintViolationSummary;
 import org.apache.accumulo.core.data.TabletId;
-import org.apache.accumulo.core.data.impl.TabletIdImpl;
-
-import com.google.common.base.Function;
-import com.google.common.collect.Collections2;
 
 /**
  * Communicate the failed mutations of a BatchWriter back to the client.
@@ -46,61 +41,6 @@
   private Collection<String> es;
   private int unknownErrors;
 
-  private static <K,V,L> Map<L,V> transformKeys(Map<K,V> map, Function<K,L> keyFunction) {
-    HashMap<L,V> ret = new HashMap<>();
-    for (Entry<K,V> entry : map.entrySet()) {
-      ret.put(keyFunction.apply(entry.getKey()), entry.getValue());
-    }
-
-    return ret;
-  }
-
-  /**
-   * @param cvsList
-   *          list of constraint violations
-   * @param hashMap
-   *          authorization failures
-   * @param serverSideErrors
-   *          server side errors
-   * @param unknownErrors
-   *          number of unknown errors
-   *
-   * @deprecated since 1.6.0, see {@link #MutationsRejectedException(Instance, List, Map, Collection, int, Throwable)}
-   */
-  @Deprecated
-  public MutationsRejectedException(List<ConstraintViolationSummary> cvsList, HashMap<org.apache.accumulo.core.data.KeyExtent,Set<SecurityErrorCode>> hashMap,
-      Collection<String> serverSideErrors, int unknownErrors, Throwable cause) {
-    super("# constraint violations : " + cvsList.size() + "  security codes: " + hashMap.values() + "  # server errors " + serverSideErrors.size()
-        + " # exceptions " + unknownErrors, cause);
-    this.cvsl = cvsList;
-    this.af = transformKeys(hashMap, TabletIdImpl.KE_2_TID_OLD);
-    this.es = serverSideErrors;
-    this.unknownErrors = unknownErrors;
-  }
-
-  /**
-   * @param cvsList
-   *          list of constraint violations
-   * @param hashMap
-   *          authorization failures
-   * @param serverSideErrors
-   *          server side errors
-   * @param unknownErrors
-   *          number of unknown errors
-   *
-   * @deprecated since 1.7.0 see {@link #MutationsRejectedException(Instance, List, Map, Collection, int, Throwable)}
-   */
-  @Deprecated
-  public MutationsRejectedException(Instance instance, List<ConstraintViolationSummary> cvsList,
-      HashMap<org.apache.accumulo.core.data.KeyExtent,Set<SecurityErrorCode>> hashMap, Collection<String> serverSideErrors, int unknownErrors, Throwable cause) {
-    super("# constraint violations : " + cvsList.size() + "  security codes: " + format(transformKeys(hashMap, TabletIdImpl.KE_2_TID_OLD), instance)
-        + "  # server errors " + serverSideErrors.size() + " # exceptions " + unknownErrors, cause);
-    this.cvsl = cvsList;
-    this.af = transformKeys(hashMap, TabletIdImpl.KE_2_TID_OLD);
-    this.es = serverSideErrors;
-    this.unknownErrors = unknownErrors;
-  }
-
   /**
    *
    * @param cvsList
@@ -148,25 +88,6 @@
   }
 
   /**
-   * @return the internal list of authorization failures
-   * @deprecated since 1.5, see {@link #getAuthorizationFailuresMap()}
-   */
-  @Deprecated
-  public List<org.apache.accumulo.core.data.KeyExtent> getAuthorizationFailures() {
-    return new ArrayList<>(Collections2.transform(af.keySet(), TabletIdImpl.TID_2_KE_OLD));
-  }
-
-  /**
-   * @return the internal mapping of keyextent mappings to SecurityErrorCode
-   * @since 1.5.0
-   * @deprecated since 1.7.0 see {@link #getSecurityErrorCodes()}
-   */
-  @Deprecated
-  public Map<org.apache.accumulo.core.data.KeyExtent,Set<SecurityErrorCode>> getAuthorizationFailuresMap() {
-    return transformKeys(af, TabletIdImpl.TID_2_KE_OLD);
-  }
-
-  /**
    * @return the internal mapping of TabletID to SecurityErrorCodes
    */
   public Map<TabletId,Set<SecurityErrorCode>> getSecurityErrorCodes() {
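The Guava-based `transformKeys` helper deleted above can be restated with `java.util.function.Function` and streams; a hypothetical sketch (not the project's code):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

// Streams equivalent of the removed Guava-based key-transforming helper.
public class TransformKeys {
    static <K, V, L> Map<L, V> transformKeys(Map<K, V> map, Function<K, L> keyFunction) {
        return map.entrySet().stream()
            .collect(Collectors.toMap(e -> keyFunction.apply(e.getKey()), Map.Entry::getValue));
    }

    public static void main(String[] args) {
        Map<String, Integer> byName = new HashMap<>();
        byName.put("alpha", 1);
        byName.put("beta", 2);
        Map<Integer, Integer> byLength = transformKeys(byName, String::length);
        System.out.println(byLength.get(5)); // value originally keyed by "alpha"
    }
}
```

One behavioral difference worth noting: `Collectors.toMap` throws on duplicate transformed keys, while the removed loop-and-`put` version silently kept the last entry.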
diff --git a/core/src/main/java/org/apache/accumulo/core/client/Scanner.java b/core/src/main/java/org/apache/accumulo/core/client/Scanner.java
index 372ee42..547c89c 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/Scanner.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/Scanner.java
@@ -28,25 +28,6 @@
 public interface Scanner extends ScannerBase {
 
   /**
-   * This setting determines how long a scanner will automatically retry when a failure occurs. By default a scanner will retry forever.
-   *
-   * @param timeOut
-   *          in seconds
-   * @deprecated Since 1.5. See {@link ScannerBase#setTimeout(long, java.util.concurrent.TimeUnit)}
-   */
-  @Deprecated
-  void setTimeOut(int timeOut);
-
-  /**
-   * Returns the setting for how long a scanner will automatically retry when a failure occurs.
-   *
-   * @return the timeout configured for this scanner
-   * @deprecated Since 1.5. See {@link ScannerBase#getTimeout(java.util.concurrent.TimeUnit)}
-   */
-  @Deprecated
-  int getTimeOut();
-
-  /**
    * Sets the range of keys to scan over.
    *
    * @param range
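The deprecated int-second `setTimeOut`/`getTimeOut` methods removed from `Scanner` above delegated to `ScannerBase.setTimeout(long, TimeUnit)`. The conversion the removed shim performed can be sketched on its own (the helper name is illustrative):

```java
import java.util.concurrent.TimeUnit;

// Mirrors the removed shim: an int timeout in seconds, with
// Integer.MAX_VALUE meaning "retry forever", mapped onto the
// long-milliseconds form used by setTimeout(long, TimeUnit).
public class TimeoutMigration {
    static long toTimeoutMillis(int timeOutSeconds) {
        if (timeOutSeconds == Integer.MAX_VALUE) {
            return Long.MAX_VALUE; // old sentinel for "no timeout"
        }
        return TimeUnit.SECONDS.toMillis(timeOutSeconds);
    }

    public static void main(String[] args) {
        System.out.println(toTimeoutMillis(30));                // 30000
        System.out.println(toTimeoutMillis(Integer.MAX_VALUE)); // Long.MAX_VALUE
    }
}
```

Callers of the old API migrate by replacing `scanner.setTimeOut(30)` with `scanner.setTimeout(30, TimeUnit.SECONDS)`.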
diff --git a/core/src/main/java/org/apache/accumulo/core/client/ZooKeeperInstance.java b/core/src/main/java/org/apache/accumulo/core/client/ZooKeeperInstance.java
index 4a4dd5f..a545c72 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/ZooKeeperInstance.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/ZooKeeperInstance.java
@@ -19,7 +19,6 @@
 import static com.google.common.base.Preconditions.checkArgument;
 import static java.nio.charset.StandardCharsets.UTF_8;
 
-import java.nio.ByteBuffer;
 import java.util.Collections;
 import java.util.List;
 import java.util.UUID;
@@ -32,18 +31,13 @@
 import org.apache.accumulo.core.client.impl.Credentials;
 import org.apache.accumulo.core.client.impl.InstanceOperationsImpl;
 import org.apache.accumulo.core.client.security.tokens.AuthenticationToken;
-import org.apache.accumulo.core.client.security.tokens.PasswordToken;
 import org.apache.accumulo.core.conf.AccumuloConfiguration;
-import org.apache.accumulo.core.conf.DefaultConfiguration;
 import org.apache.accumulo.core.metadata.RootTable;
-import org.apache.accumulo.core.util.ByteBufferUtil;
 import org.apache.accumulo.core.util.OpTimer;
-import org.apache.accumulo.core.util.TextUtil;
 import org.apache.accumulo.core.zookeeper.ZooUtil;
 import org.apache.accumulo.fate.zookeeper.ZooCache;
 import org.apache.accumulo.fate.zookeeper.ZooCacheFactory;
 import org.apache.commons.configuration.Configuration;
-import org.apache.hadoop.io.Text;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -73,7 +67,6 @@
 
   private final int zooKeepersSessionTimeOut;
 
-  private AccumuloConfiguration conf;
   private ClientConfiguration clientConf;
 
   /**
@@ -88,49 +81,6 @@
   }
 
   /**
-   *
-   * @param instanceName
-   *          The name of specific accumulo instance. This is set at initialization time.
-   * @param zooKeepers
-   *          A comma separated list of zoo keeper server locations. Each location can contain an optional port, of the format host:port.
-   * @param sessionTimeout
-   *          zoo keeper session time out in milliseconds.
-   * @deprecated since 1.6.0; Use {@link #ZooKeeperInstance(Configuration)} instead.
-   */
-  @Deprecated
-  public ZooKeeperInstance(String instanceName, String zooKeepers, int sessionTimeout) {
-    this(ClientConfiguration.loadDefault().withInstance(instanceName).withZkHosts(zooKeepers).withZkTimeout(sessionTimeout));
-  }
-
-  /**
-   *
-   * @param instanceId
-   *          The UUID that identifies the accumulo instance you want to connect to.
-   * @param zooKeepers
-   *          A comma separated list of zoo keeper server locations. Each location can contain an optional port, of the format host:port.
-   * @deprecated since 1.6.0; Use {@link #ZooKeeperInstance(Configuration)} instead.
-   */
-  @Deprecated
-  public ZooKeeperInstance(UUID instanceId, String zooKeepers) {
-    this(ClientConfiguration.loadDefault().withInstance(instanceId).withZkHosts(zooKeepers));
-  }
-
-  /**
-   *
-   * @param instanceId
-   *          The UUID that identifies the accumulo instance you want to connect to.
-   * @param zooKeepers
-   *          A comma separated list of zoo keeper server locations. Each location can contain an optional port, of the format host:port.
-   * @param sessionTimeout
-   *          zoo keeper session time out in milliseconds.
-   * @deprecated since 1.6.0; Use {@link #ZooKeeperInstance(Configuration)} instead.
-   */
-  @Deprecated
-  public ZooKeeperInstance(UUID instanceId, String zooKeepers, int sessionTimeout) {
-    this(ClientConfiguration.loadDefault().withInstance(instanceId).withZkHosts(zooKeepers).withZkTimeout(sessionTimeout));
-  }
-
-  /**
    * @param config
    *          Client configuration for specifying connection options. See {@link ClientConfiguration} which extends Configuration with convenience methods
    *          specific to Accumulo.
@@ -254,52 +204,11 @@
   }
 
   @Override
-  @Deprecated
-  public Connector getConnector(String user, CharSequence pass) throws AccumuloException, AccumuloSecurityException {
-    return getConnector(user, TextUtil.getBytes(new Text(pass.toString())));
-  }
-
-  @Override
-  @Deprecated
-  public Connector getConnector(String user, ByteBuffer pass) throws AccumuloException, AccumuloSecurityException {
-    return getConnector(user, ByteBufferUtil.toBytes(pass));
-  }
-
-  @Override
   public Connector getConnector(String principal, AuthenticationToken token) throws AccumuloException, AccumuloSecurityException {
     return new ConnectorImpl(new ClientContext(this, new Credentials(principal, token), clientConf));
   }
 
   @Override
-  @Deprecated
-  public Connector getConnector(String principal, byte[] pass) throws AccumuloException, AccumuloSecurityException {
-    return getConnector(principal, new PasswordToken(pass));
-  }
-
-  @Override
-  @Deprecated
-  public AccumuloConfiguration getConfiguration() {
-    return conf = conf == null ? DefaultConfiguration.getInstance() : ClientContext.convertClientConfig(clientConf);
-  }
-
-  @Override
-  @Deprecated
-  public void setConfiguration(AccumuloConfiguration conf) {
-    this.conf = conf;
-  }
-
-  /**
-   * Given a zooCache and instanceId, look up the instance name.
-   *
-   * @deprecated since 1.7.0 {@link ZooCache} is not part of the public API, but its a parameter to this method. Therefore code that uses this method is not
-   *             guaranteed to be stable. This method was deprecated to discourage its use.
-   */
-  @Deprecated
-  public static String lookupInstanceName(ZooCache zooCache, UUID instanceId) {
-    return InstanceOperationsImpl.lookupInstanceName(zooCache, instanceId);
-  }
-
-  @Override
   public String toString() {
     StringBuilder sb = new StringBuilder(64);
     sb.append("ZooKeeperInstance: ").append(getInstanceName()).append(" ").append(getZooKeepers());
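With the deprecated constructors and the password-based `getConnector` overloads gone, clients build a `ClientConfiguration` and authenticate with an `AuthenticationToken`; a sketch using illustrative instance name, ZooKeeper hosts, and credentials:

```java
// Before (removed): new ZooKeeperInstance("myinstance", "zk1:2181,zk2:2181", 30000)
//                   and getConnector("root", "secret".getBytes())
ClientConfiguration config = ClientConfiguration.loadDefault()
    .withInstance("myinstance")
    .withZkHosts("zk1:2181,zk2:2181")
    .withZkTimeout(30000);
Instance instance = new ZooKeeperInstance(config);
Connector conn = instance.getConnector("root", new PasswordToken("secret"));
```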
diff --git a/core/src/main/java/org/apache/accumulo/core/client/admin/ActiveCompaction.java b/core/src/main/java/org/apache/accumulo/core/client/admin/ActiveCompaction.java
index 5228391..ddb7fa9 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/admin/ActiveCompaction.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/admin/ActiveCompaction.java
@@ -78,13 +78,6 @@
 
   /**
   * @return the tablet that is compacting
-   * @deprecated since 1.7.0 use {@link #getTablet()}
-   */
-  @Deprecated
-  public abstract org.apache.accumulo.core.data.KeyExtent getExtent();
-
-  /**
-   * @return tablet thats is compacting
    * @since 1.7.0
    */
   public abstract TabletId getTablet();
diff --git a/core/src/main/java/org/apache/accumulo/core/client/admin/ActiveScan.java b/core/src/main/java/org/apache/accumulo/core/client/admin/ActiveScan.java
index 81bb1cc..9510895 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/admin/ActiveScan.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/admin/ActiveScan.java
@@ -65,13 +65,6 @@
 
   /**
   * @return the tablet the scan is running against; for a batch scan this may be one of many, or null
-   * @deprecated since 1.7.0 use {@link #getTablet()}
-   */
-  @Deprecated
-  public abstract org.apache.accumulo.core.data.KeyExtent getExtent();
-
-  /**
-   * @return tablet the scan is running against, if a batch scan may be one of many or null
    * @since 1.7.0
    */
   public abstract TabletId getTablet();
diff --git a/core/src/main/java/org/apache/accumulo/core/client/admin/SecurityOperations.java b/core/src/main/java/org/apache/accumulo/core/client/admin/SecurityOperations.java
index cb916ef..21d507b 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/admin/SecurityOperations.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/admin/SecurityOperations.java
@@ -36,24 +36,6 @@
   /**
    * Create a user
    *
-   * @param user
-   *          the name of the user to create
-   * @param password
-   *          the plaintext password for the user
-   * @param authorizations
-   *          the authorizations that the user has for scanning
-   * @throws AccumuloException
-   *           if a general error occurs
-   * @throws AccumuloSecurityException
-   *           if the user does not have permission to create a user
-   * @deprecated since 1.5.0; use {@link #createLocalUser(String, PasswordToken)} or the user management functions of your configured authenticator instead.
-   */
-  @Deprecated
-  void createUser(String user, byte[] password, Authorizations authorizations) throws AccumuloException, AccumuloSecurityException;
-
-  /**
-   * Create a user
-   *
    * @param principal
    *          the name of the user to create
    * @param password
@@ -69,20 +51,6 @@
   /**
    * Delete a user
    *
-   * @param user
-   *          the user name to delete
-   * @throws AccumuloException
-   *           if a general error occurs
-   * @throws AccumuloSecurityException
-   *           if the user does not have permission to delete a user
-   * @deprecated since 1.5.0; use {@link #dropUser(String)} or the user management functions of your configured authenticator instead.
-   */
-  @Deprecated
-  void dropUser(String user) throws AccumuloException, AccumuloSecurityException;
-
-  /**
-   * Delete a user
-   *
    * @param principal
    *          the user name to delete
    * @throws AccumuloException
@@ -96,23 +64,6 @@
   /**
    * Verify a username/password combination is valid
    *
-   * @param user
-   *          the name of the user to authenticate
-   * @param password
-   *          the plaintext password for the user
-   * @return true if the user asking is allowed to know and the specified user/password is valid, false otherwise
-   * @throws AccumuloException
-   *           if a general error occurs
-   * @throws AccumuloSecurityException
-   *           if the user does not have permission to ask
-   * @deprecated since 1.5.0; use {@link #authenticateUser(String, AuthenticationToken)} instead.
-   */
-  @Deprecated
-  boolean authenticateUser(String user, byte[] password) throws AccumuloException, AccumuloSecurityException;
-
-  /**
-   * Verify a username/password combination is valid
-   *
    * @param principal
    *          the name of the user to authenticate
    * @param token
@@ -129,23 +80,6 @@
   /**
    * Set the user's password
    *
-   * @param user
-   *          the name of the user to modify
-   * @param password
-   *          the plaintext password for the user
-   * @throws AccumuloException
-   *           if a general error occurs
-   * @throws AccumuloSecurityException
-   *           if the user does not have permission to modify a user
-   * @deprecated since 1.5.0; use {@link #changeLocalUserPassword(String, PasswordToken)} or the user management functions of your configured authenticator
-   *             instead.
-   */
-  @Deprecated
-  void changeUserPassword(String user, byte[] password) throws AccumuloException, AccumuloSecurityException;
-
-  /**
-   * Set the user's password
-   *
    * @param principal
    *          the name of the user to modify
    * @param token
@@ -334,19 +268,6 @@
    *           if a general error occurs
    * @throws AccumuloSecurityException
    *           if the user does not have permission to query users
-   * @deprecated since 1.5.0; use {@link #listLocalUsers()} or the user management functions of your configured authenticator instead.
-   */
-  @Deprecated
-  Set<String> listUsers() throws AccumuloException, AccumuloSecurityException;
-
-  /**
-   * Return a list of users in accumulo
-   *
-   * @return a set of user names
-   * @throws AccumuloException
-   *           if a general error occurs
-   * @throws AccumuloSecurityException
-   *           if the user does not have permission to query users
    * @since 1.5.0
    */
   Set<String> listLocalUsers() throws AccumuloException, AccumuloSecurityException;
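Every byte[]-password method removed here has a token-based replacement named in its `@deprecated` tag; a sketch (user name, password, and authorization strings are illustrative, and `conn` is an existing `Connector`):

```java
SecurityOperations sec = conn.securityOperations();

// createUser(user, password, auths) -> createLocalUser + changeUserAuthorizations
sec.createLocalUser("alice", new PasswordToken("hunter2"));
sec.changeUserAuthorizations("alice", new Authorizations("public"));

// authenticateUser(user, byte[]) -> the token-based overload
boolean valid = sec.authenticateUser("alice", new PasswordToken("hunter2"));

// listUsers() -> listLocalUsers(); dropUser(user) -> dropLocalUser(user)
Set<String> users = sec.listLocalUsers();
sec.dropLocalUser("alice");
```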
diff --git a/core/src/main/java/org/apache/accumulo/core/client/admin/TableOperations.java b/core/src/main/java/org/apache/accumulo/core/client/admin/TableOperations.java
index 3e56736..a6046c6 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/admin/TableOperations.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/admin/TableOperations.java
@@ -79,40 +79,6 @@
   /**
    * @param tableName
    *          the name of the table
-   * @param limitVersion
-   *          Enables/disables the versioning iterator, which will limit the number of Key versions kept.
-   * @throws AccumuloException
-   *           if a general error occurs
-   * @throws AccumuloSecurityException
-   *           if the user does not have permission
-   * @throws TableExistsException
-   *           if the table already exists
-   * @deprecated since 1.7.0; use {@link #create(String, NewTableConfiguration)} instead.
-   */
-  @Deprecated
-  void create(String tableName, boolean limitVersion) throws AccumuloException, AccumuloSecurityException, TableExistsException;
-
-  /**
-   * @param tableName
-   *          the name of the table
-   * @param versioningIter
-   *          Enables/disables the versioning iterator, which will limit the number of Key versions kept.
-   * @param timeType
-   *          specifies logical or real-time based time recording for entries in the table
-   * @throws AccumuloException
-   *           if a general error occurs
-   * @throws AccumuloSecurityException
-   *           if the user does not have permission
-   * @throws TableExistsException
-   *           if the table already exists
-   * @deprecated since 1.7.0; use {@link #create(String, NewTableConfiguration)} instead.
-   */
-  @Deprecated
-  void create(String tableName, boolean versioningIter, TimeType timeType) throws AccumuloException, AccumuloSecurityException, TableExistsException;
-
-  /**
-   * @param tableName
-   *          the name of the table
    * @param ntc
   *          specifies the new table's configuration variables, which are: 1. enable/disable the versioning iterator, which will limit the number of Key
   *          versions kept; 2. logical or real-time based time recording for entries in the table; 3. user-defined properties to be merged into the
@@ -190,17 +156,6 @@
    * @return the split points (end-row names) for the table's current split profile
    * @throws TableNotFoundException
    *           if the table does not exist
-   * @deprecated since 1.5.0; use {@link #listSplits(String)} instead.
-   */
-  @Deprecated
-  Collection<Text> getSplits(String tableName) throws TableNotFoundException;
-
-  /**
-   * @param tableName
-   *          the name of the table
-   * @return the split points (end-row names) for the table's current split profile
-   * @throws TableNotFoundException
-   *           if the table does not exist
    * @throws AccumuloException
    *           if a general error occurs
    * @throws AccumuloSecurityException
@@ -214,17 +169,6 @@
    *          the name of the table
    * @param maxSplits
    *          specifies the maximum number of splits to return
-   * @return the split points (end-row names) for the table's current split profile, grouped into fewer splits so as not to exceed maxSplits
-   * @deprecated since 1.5.0; use {@link #listSplits(String, int)} instead.
-   */
-  @Deprecated
-  Collection<Text> getSplits(String tableName, int maxSplits) throws TableNotFoundException;
-
-  /**
-   * @param tableName
-   *          the name of the table
-   * @param maxSplits
-   *          specifies the maximum number of splits to return
    * @throws AccumuloException
    *           if a general error occurs
    * @throws AccumuloSecurityException
@@ -416,21 +360,6 @@
   void rename(String oldTableName, String newTableName) throws AccumuloSecurityException, TableNotFoundException, AccumuloException, TableExistsException;
 
   /**
-   * Initiate a flush of a table's data that is in memory
-   *
-   * @param tableName
-   *          the name of the table
-   * @throws AccumuloException
-   *           if a general error occurs
-   * @throws AccumuloSecurityException
-   *           if the user does not have permission
-   *
-   * @deprecated since 1.4; use {@link #flush(String, Text, Text, boolean)} instead
-   */
-  @Deprecated
-  void flush(String tableName) throws AccumuloException, AccumuloSecurityException;
-
-  /**
    * Flush a table's data that is currently in memory.
    *
    * @param tableName
diff --git a/core/src/main/java/org/apache/accumulo/core/client/impl/ActiveCompactionImpl.java b/core/src/main/java/org/apache/accumulo/core/client/impl/ActiveCompactionImpl.java
index bdd5d51..a34bcf7 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/impl/ActiveCompactionImpl.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/impl/ActiveCompactionImpl.java
@@ -28,7 +28,6 @@
 import org.apache.accumulo.core.data.impl.KeyExtent;
 import org.apache.accumulo.core.data.impl.TabletIdImpl;
 import org.apache.accumulo.core.data.thrift.IterInfo;
-import org.apache.hadoop.io.Text;
 
 /**
  *
@@ -50,14 +49,6 @@
   }
 
   @Override
-  @Deprecated
-  public org.apache.accumulo.core.data.KeyExtent getExtent() {
-    KeyExtent ke = new KeyExtent(tac.getExtent());
-    org.apache.accumulo.core.data.KeyExtent oke = new org.apache.accumulo.core.data.KeyExtent(new Text(ke.getTableId()), ke.getEndRow(), ke.getPrevEndRow());
-    return oke;
-  }
-
-  @Override
   public TabletId getTablet() {
     return new TabletIdImpl(new KeyExtent(tac.getExtent()));
   }
diff --git a/core/src/main/java/org/apache/accumulo/core/client/impl/ActiveScanImpl.java b/core/src/main/java/org/apache/accumulo/core/client/impl/ActiveScanImpl.java
index 9021190..dd96aa3 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/impl/ActiveScanImpl.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/impl/ActiveScanImpl.java
@@ -32,7 +32,6 @@
 import org.apache.accumulo.core.data.thrift.IterInfo;
 import org.apache.accumulo.core.data.thrift.TColumn;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.hadoop.io.Text;
 
 /**
  * A class that contains information about an ActiveScan
@@ -120,12 +119,6 @@
   }
 
   @Override
-  @Deprecated
-  public org.apache.accumulo.core.data.KeyExtent getExtent() {
-    return new org.apache.accumulo.core.data.KeyExtent(new Text(extent.getTableId()), extent.getEndRow(), extent.getPrevEndRow());
-  }
-
-  @Override
   public TabletId getTablet() {
     return new TabletIdImpl(extent);
   }
diff --git a/core/src/main/java/org/apache/accumulo/core/client/impl/ClientContext.java b/core/src/main/java/org/apache/accumulo/core/client/impl/ClientContext.java
index 6828174..b4c2f47 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/impl/ClientContext.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/impl/ClientContext.java
@@ -22,6 +22,7 @@
 import java.io.IOException;
 import java.util.Iterator;
 import java.util.Map;
+import java.util.function.Predicate;
 
 import org.apache.accumulo.core.client.AccumuloException;
 import org.apache.accumulo.core.client.AccumuloSecurityException;
@@ -41,8 +42,6 @@
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-import com.google.common.base.Predicate;
-
 /**
  * This class represents any essential configuration and credentials needed to initiate RPC operations throughout the code. It is intended to represent a shared
  * object that contains these things from when the client was first constructed. It is not public API, and is only an internal representation of the context in
@@ -219,7 +218,7 @@
         Iterator<?> keyIter = config.getKeys();
         while (keyIter.hasNext()) {
           String key = keyIter.next().toString();
-          if (filter.apply(key))
+          if (filter.test(key))
             props.put(key, config.getString(key));
         }
 
@@ -227,7 +226,7 @@
         // Automatically reconstruct the server property when converting a client config.
         if (props.containsKey(ClientProperty.KERBEROS_SERVER_PRIMARY.getKey())) {
           final String serverPrimary = props.remove(ClientProperty.KERBEROS_SERVER_PRIMARY.getKey());
-          if (filter.apply(Property.GENERAL_KERBEROS_PRINCIPAL.getKey())) {
+          if (filter.test(Property.GENERAL_KERBEROS_PRINCIPAL.getKey())) {
             // Use the _HOST expansion. It should be unnecessary in "client land".
             props.put(Property.GENERAL_KERBEROS_PRINCIPAL.getKey(), serverPrimary + "/_HOST@" + SaslConnectionParams.getDefaultRealm());
           }
@@ -242,7 +241,7 @@
                 continue;
               }
 
-              if (filter.apply(key)) {
+              if (filter.test(key)) {
                 char[] value = CredentialProviderFactoryShim.getValueFromCredentialProvider(hadoopConf, key);
                 if (null != value) {
                   props.put(key, new String(value));
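The hunks above swap Guava's `Predicate#apply` for the JDK's `java.util.function.Predicate#test`; for this filtering use the two interfaces are interchangeable. A minimal JDK-only illustration (the property keys are made up):

```java
import java.util.function.Predicate;

public class PredicateDemo {
  public static void main(String[] args) {
    // com.google.common.base.Predicate#apply(key) becomes
    // java.util.function.Predicate#test(key) after this change.
    Predicate<String> filter = key -> key.startsWith("instance.");
    System.out.println(filter.test("instance.zookeeper.host")); // true
    System.out.println(filter.test("tserver.port.client"));     // false
  }
}
```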
diff --git a/core/src/main/java/org/apache/accumulo/core/client/impl/ConnectorImpl.java b/core/src/main/java/org/apache/accumulo/core/client/impl/ConnectorImpl.java
index 443e548..7cab204 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/impl/ConnectorImpl.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/impl/ConnectorImpl.java
@@ -18,8 +18,6 @@
 
 import static com.google.common.base.Preconditions.checkArgument;
 
-import java.util.concurrent.TimeUnit;
-
 import org.apache.accumulo.core.client.AccumuloException;
 import org.apache.accumulo.core.client.AccumuloSecurityException;
 import org.apache.accumulo.core.client.BatchDeleter;
@@ -96,16 +94,6 @@
     return new TabletServerBatchReader(context, getTableId(tableName), authorizations, numQueryThreads);
   }
 
-  @Deprecated
-  @Override
-  public BatchDeleter createBatchDeleter(String tableName, Authorizations authorizations, int numQueryThreads, long maxMemory, long maxLatency,
-      int maxWriteThreads) throws TableNotFoundException {
-    checkArgument(tableName != null, "tableName is null");
-    checkArgument(authorizations != null, "authorizations is null");
-    return new TabletServerBatchDeleter(context, getTableId(tableName), authorizations, numQueryThreads, new BatchWriterConfig().setMaxMemory(maxMemory)
-        .setMaxLatency(maxLatency, TimeUnit.MILLISECONDS).setMaxWriteThreads(maxWriteThreads));
-  }
-
   @Override
   public BatchDeleter createBatchDeleter(String tableName, Authorizations authorizations, int numQueryThreads, BatchWriterConfig config)
       throws TableNotFoundException {
@@ -114,27 +102,12 @@
     return new TabletServerBatchDeleter(context, getTableId(tableName), authorizations, numQueryThreads, config);
   }
 
-  @Deprecated
-  @Override
-  public BatchWriter createBatchWriter(String tableName, long maxMemory, long maxLatency, int maxWriteThreads) throws TableNotFoundException {
-    checkArgument(tableName != null, "tableName is null");
-    return new BatchWriterImpl(context, getTableId(tableName), new BatchWriterConfig().setMaxMemory(maxMemory).setMaxLatency(maxLatency, TimeUnit.MILLISECONDS)
-        .setMaxWriteThreads(maxWriteThreads));
-  }
-
   @Override
   public BatchWriter createBatchWriter(String tableName, BatchWriterConfig config) throws TableNotFoundException {
     checkArgument(tableName != null, "tableName is null");
     return new BatchWriterImpl(context, getTableId(tableName), config);
   }
 
-  @Deprecated
-  @Override
-  public MultiTableBatchWriter createMultiTableBatchWriter(long maxMemory, long maxLatency, int maxWriteThreads) {
-    return new MultiTableBatchWriterImpl(context, new BatchWriterConfig().setMaxMemory(maxMemory).setMaxLatency(maxLatency, TimeUnit.MILLISECONDS)
-        .setMaxWriteThreads(maxWriteThreads));
-  }
-
   @Override
   public MultiTableBatchWriter createMultiTableBatchWriter(BatchWriterConfig config) {
     return new MultiTableBatchWriterImpl(context, config);
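The removed long-argument factory methods are replaced by `BatchWriterConfig`, which the retained overloads accept; a sketch mirroring the deleted parameters (table name and sizes are illustrative, `conn` is an existing `Connector`):

```java
// createBatchWriter(table, maxMemory, maxLatency, maxWriteThreads) becomes:
BatchWriterConfig bwConfig = new BatchWriterConfig()
    .setMaxMemory(10 * 1024 * 1024)             // bytes
    .setMaxLatency(1000, TimeUnit.MILLISECONDS)
    .setMaxWriteThreads(4);
BatchWriter writer = conn.createBatchWriter("trades", bwConfig);
```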
diff --git a/core/src/main/java/org/apache/accumulo/core/client/impl/Credentials.java b/core/src/main/java/org/apache/accumulo/core/client/impl/Credentials.java
index 28a704a..92d1bd3 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/impl/Credentials.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/impl/Credentials.java
@@ -19,6 +19,7 @@
 import static java.nio.charset.StandardCharsets.UTF_8;
 
 import java.nio.ByteBuffer;
+import java.util.Base64;
 
 import org.apache.accumulo.core.client.AccumuloSecurityException;
 import org.apache.accumulo.core.client.Connector;
@@ -27,7 +28,6 @@
 import org.apache.accumulo.core.client.security.tokens.AuthenticationToken;
 import org.apache.accumulo.core.client.security.tokens.AuthenticationToken.AuthenticationTokenSerializer;
 import org.apache.accumulo.core.security.thrift.TCredentials;
-import org.apache.accumulo.core.util.Base64;
 
 /**
  * A wrapper for internal use. This class carries the instance, principal, and authentication token for use in the public API, in a non-serialized form. This is
@@ -109,9 +109,9 @@
    * @return serialized form of these credentials
    */
   public final String serialize() {
-    return (getPrincipal() == null ? "-" : Base64.encodeBase64String(getPrincipal().getBytes(UTF_8))) + ":"
-        + (getToken() == null ? "-" : Base64.encodeBase64String(getToken().getClass().getName().getBytes(UTF_8))) + ":"
-        + (getToken() == null ? "-" : Base64.encodeBase64String(AuthenticationTokenSerializer.serialize(getToken())));
+    return (getPrincipal() == null ? "-" : Base64.getEncoder().encodeToString(getPrincipal().getBytes(UTF_8))) + ":"
+        + (getToken() == null ? "-" : Base64.getEncoder().encodeToString(getToken().getClass().getName().getBytes(UTF_8))) + ":"
+        + (getToken() == null ? "-" : Base64.getEncoder().encodeToString(AuthenticationTokenSerializer.serialize(getToken())));
   }
 
   /**
@@ -123,11 +123,11 @@
    */
   public static final Credentials deserialize(String serializedForm) {
     String[] split = serializedForm.split(":", 3);
-    String principal = split[0].equals("-") ? null : new String(Base64.decodeBase64(split[0]), UTF_8);
-    String tokenType = split[1].equals("-") ? null : new String(Base64.decodeBase64(split[1]), UTF_8);
+    String principal = split[0].equals("-") ? null : new String(Base64.getDecoder().decode(split[0]), UTF_8);
+    String tokenType = split[1].equals("-") ? null : new String(Base64.getDecoder().decode(split[1]), UTF_8);
     AuthenticationToken token = null;
     if (!split[2].equals("-")) {
-      byte[] tokenBytes = Base64.decodeBase64(split[2]);
+      byte[] tokenBytes = Base64.getDecoder().decode(split[2]);
       token = AuthenticationTokenSerializer.deserialize(tokenType, tokenBytes);
     }
     return new Credentials(principal, token);
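The serialization above now rides on `java.util.Base64` (JDK 8) instead of the project's commons-codec wrapper; for these inputs the output is identical. A round-trip check with an illustrative principal:

```java
import static java.nio.charset.StandardCharsets.UTF_8;

import java.util.Base64;

public class Base64Demo {
  public static void main(String[] args) {
    String principal = "root";
    // Base64.encodeBase64String(...) -> Base64.getEncoder().encodeToString(...)
    String encoded = Base64.getEncoder().encodeToString(principal.getBytes(UTF_8));
    // Base64.decodeBase64(...)       -> Base64.getDecoder().decode(...)
    String decoded = new String(Base64.getDecoder().decode(encoded), UTF_8);
    System.out.println(encoded);                    // cm9vdA==
    System.out.println(decoded.equals(principal));  // true
  }
}
```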
diff --git a/core/src/main/java/org/apache/accumulo/core/client/impl/Namespaces.java b/core/src/main/java/org/apache/accumulo/core/client/impl/Namespaces.java
index 39d5822..51c7b2d 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/impl/Namespaces.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/impl/Namespaces.java
@@ -37,7 +37,7 @@
   public static final String VALID_NAME_REGEX = "^\\w*$";
   public static final Validator<String> VALID_NAME = new Validator<String>() {
     @Override
-    public boolean apply(String namespace) {
+    public boolean test(String namespace) {
       return namespace != null && namespace.matches(VALID_NAME_REGEX);
     }
 
@@ -51,7 +51,7 @@
 
   public static final Validator<String> NOT_DEFAULT = new Validator<String>() {
     @Override
-    public boolean apply(String namespace) {
+    public boolean test(String namespace) {
       return !Namespaces.DEFAULT_NAMESPACE.equals(namespace);
     }
 
@@ -63,7 +63,7 @@
 
   public static final Validator<String> NOT_ACCUMULO = new Validator<String>() {
     @Override
-    public boolean apply(String namespace) {
+    public boolean test(String namespace) {
       return !Namespaces.ACCUMULO_NAMESPACE.equals(namespace);
     }
 
diff --git a/core/src/main/java/org/apache/accumulo/core/client/impl/OfflineScanner.java b/core/src/main/java/org/apache/accumulo/core/client/impl/OfflineScanner.java
index 427a7cc..176096a 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/impl/OfflineScanner.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/impl/OfflineScanner.java
@@ -33,7 +33,6 @@
 public class OfflineScanner extends ScannerOptions implements Scanner {
 
   private int batchSize;
-  private int timeOut;
   private Range range;
 
   private Instance instance;
@@ -57,18 +56,5 @@
-    this.timeOut = Integer.MAX_VALUE;
   }
 
-  @Deprecated
-  @Override
-  public void setTimeOut(int timeOut) {
-    this.timeOut = timeOut;
-  }
-
-  @Deprecated
-  @Override
-  public int getTimeOut() {
-    return timeOut;
-  }
-
   @Override
   public void setRange(Range range) {
     this.range = range;
diff --git a/core/src/main/java/org/apache/accumulo/core/client/impl/ScannerImpl.java b/core/src/main/java/org/apache/accumulo/core/client/impl/ScannerImpl.java
index 89406f4..b0a0fa3 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/impl/ScannerImpl.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/impl/ScannerImpl.java
@@ -92,7 +92,7 @@
 
   @Override
   public synchronized Iterator<Entry<Key,Value>> iterator() {
-    return new ScannerIterator(context, tableId, authorizations, range, size, getTimeOut(), this, isolated, readaheadThreshold);
+    return new ScannerIterator(context, tableId, authorizations, range, size, getTimeout(TimeUnit.SECONDS), this, isolated, readaheadThreshold);
   }
 
   @Override
@@ -110,24 +110,6 @@
     this.isolated = false;
   }
 
-  @Deprecated
-  @Override
-  public void setTimeOut(int timeOut) {
-    if (timeOut == Integer.MAX_VALUE)
-      setTimeout(Long.MAX_VALUE, TimeUnit.MILLISECONDS);
-    else
-      setTimeout(timeOut, TimeUnit.SECONDS);
-  }
-
-  @Deprecated
-  @Override
-  public int getTimeOut() {
-    long timeout = getTimeout(TimeUnit.SECONDS);
-    if (timeout >= Integer.MAX_VALUE)
-      return Integer.MAX_VALUE;
-    return (int) timeout;
-  }
-
   @Override
   public synchronized void setReadaheadThreshold(long batches) {
     if (0 > batches) {
diff --git a/core/src/main/java/org/apache/accumulo/core/client/impl/ScannerIterator.java b/core/src/main/java/org/apache/accumulo/core/client/impl/ScannerIterator.java
index ae55cc0..7d01895 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/impl/ScannerIterator.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/impl/ScannerIterator.java
@@ -48,7 +48,7 @@
   private static final Logger log = LoggerFactory.getLogger(ScannerIterator.class);
 
   // scanner options
-  private int timeOut;
+  private long timeOut;
 
   // scanner state
   private Iterator<KeyValue> iter;
@@ -104,7 +104,7 @@
 
   }
 
-  ScannerIterator(ClientContext context, String tableId, Authorizations authorizations, Range range, int size, int timeOut, ScannerOptions options,
+  ScannerIterator(ClientContext context, String tableId, Authorizations authorizations, Range range, int size, long timeOut, ScannerOptions options,
       boolean isolated, long readaheadThreshold) {
     this.timeOut = timeOut;
     this.readaheadThreshold = readaheadThreshold;
@@ -133,7 +133,6 @@
   }
 
   @Override
-  @SuppressWarnings("unchecked")
   public boolean hasNext() {
     if (finished)
       return false;
@@ -160,6 +159,7 @@
           throw new RuntimeException((Exception) obj);
       }
 
+      @SuppressWarnings("unchecked")
       List<KeyValue> currentBatch = (List<KeyValue>) obj;
 
       if (currentBatch.size() == 0) {
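Moving `@SuppressWarnings("unchecked")` from the method onto the local declaration, as the hunk above does, narrows the suppression to the one unchecked cast; a self-contained illustration:

```java
import java.util.ArrayList;
import java.util.List;

public class SuppressDemo {
  // Simulates a queue handing back an untyped batch.
  static Object takeBatch() {
    List<String> batch = new ArrayList<>();
    batch.add("k1");
    return batch;
  }

  public static void main(String[] args) {
    Object obj = takeBatch();
    // The annotation on the declaration covers only this cast, so any
    // other unchecked operation in the method still draws a warning.
    @SuppressWarnings("unchecked")
    List<String> currentBatch = (List<String>) obj;
    System.out.println(currentBatch.size()); // 1
  }
}
```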
diff --git a/core/src/main/java/org/apache/accumulo/core/client/impl/SecurityOperationsImpl.java b/core/src/main/java/org/apache/accumulo/core/client/impl/SecurityOperationsImpl.java
index 73f17a7..250254a 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/impl/SecurityOperationsImpl.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/impl/SecurityOperationsImpl.java
@@ -93,13 +93,6 @@
     this.context = context;
   }
 
-  @Deprecated
-  @Override
-  public void createUser(String user, byte[] password, final Authorizations authorizations) throws AccumuloException, AccumuloSecurityException {
-    createLocalUser(user, new PasswordToken(password));
-    changeUserAuthorizations(user, authorizations);
-  }
-
   @Override
   public void createLocalUser(final String principal, final PasswordToken password) throws AccumuloException, AccumuloSecurityException {
     checkArgument(principal != null, "principal is null");
@@ -118,12 +111,6 @@
     });
   }
 
-  @Deprecated
-  @Override
-  public void dropUser(final String user) throws AccumuloException, AccumuloSecurityException {
-    dropLocalUser(user);
-  }
-
   @Override
   public void dropLocalUser(final String principal) throws AccumuloException, AccumuloSecurityException {
     checkArgument(principal != null, "principal is null");
@@ -135,12 +122,6 @@
     });
   }
 
-  @Deprecated
-  @Override
-  public boolean authenticateUser(String user, byte[] password) throws AccumuloException, AccumuloSecurityException {
-    return authenticateUser(user, new PasswordToken(password));
-  }
-
   @Override
   public boolean authenticateUser(final String principal, final AuthenticationToken token) throws AccumuloException, AccumuloSecurityException {
     checkArgument(principal != null, "principal is null");
@@ -155,12 +136,6 @@
   }
 
   @Override
-  @Deprecated
-  public void changeUserPassword(String user, byte[] password) throws AccumuloException, AccumuloSecurityException {
-    changeLocalUserPassword(user, new PasswordToken(password));
-  }
-
-  @Override
   public void changeLocalUserPassword(final String principal, final PasswordToken token) throws AccumuloException, AccumuloSecurityException {
     checkArgument(principal != null, "principal is null");
     checkArgument(token != null, "token is null");
@@ -339,12 +314,6 @@
     });
   }
 
-  @Deprecated
-  @Override
-  public Set<String> listUsers() throws AccumuloException, AccumuloSecurityException {
-    return listLocalUsers();
-  }
-
   @Override
   public Set<String> listLocalUsers() throws AccumuloException, AccumuloSecurityException {
     return execute(new ClientExecReturn<Set<String>,ClientService.Client>() {
diff --git a/core/src/main/java/org/apache/accumulo/core/client/impl/TableOperationsImpl.java b/core/src/main/java/org/apache/accumulo/core/client/impl/TableOperationsImpl.java
index 3d17a85..931c7f7 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/impl/TableOperationsImpl.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/impl/TableOperationsImpl.java
@@ -70,7 +70,6 @@
 import org.apache.accumulo.core.client.admin.Locations;
 import org.apache.accumulo.core.client.admin.NewTableConfiguration;
 import org.apache.accumulo.core.client.admin.TableOperations;
-import org.apache.accumulo.core.client.admin.TimeType;
 import org.apache.accumulo.core.client.impl.TabletLocator.TabletLocation;
 import org.apache.accumulo.core.client.impl.thrift.ClientService;
 import org.apache.accumulo.core.client.impl.thrift.ClientService.Client;
@@ -188,26 +187,6 @@
   }
 
   @Override
-  @Deprecated
-  public void create(String tableName, boolean limitVersion) throws AccumuloException, AccumuloSecurityException, TableExistsException {
-    create(tableName, limitVersion, TimeType.MILLIS);
-  }
-
-  @Override
-  @Deprecated
-  public void create(String tableName, boolean limitVersion, TimeType timeType) throws AccumuloException, AccumuloSecurityException, TableExistsException {
-    checkArgument(tableName != null, "tableName is null");
-    checkArgument(timeType != null, "timeType is null");
-
-    NewTableConfiguration ntc = new NewTableConfiguration().setTimeType(timeType);
-
-    if (limitVersion)
-      create(tableName, ntc);
-    else
-      create(tableName, ntc.withoutDefaultIterators());
-  }
-
-  @Override
   public void create(String tableName, NewTableConfiguration ntc) throws AccumuloException, AccumuloSecurityException, TableExistsException {
     checkArgument(tableName != null, "tableName is null");
     checkArgument(ntc != null, "ntc is null");
@@ -594,16 +573,6 @@
     return endRows;
   }
 
-  @Deprecated
-  @Override
-  public Collection<Text> getSplits(String tableName) throws TableNotFoundException {
-    try {
-      return listSplits(tableName);
-    } catch (AccumuloSecurityException e) {
-      throw new RuntimeException(e);
-    }
-  }
-
   @Override
   public Collection<Text> listSplits(String tableName, int maxSplits) throws TableNotFoundException, AccumuloSecurityException {
     Collection<Text> endRows = listSplits(tableName);
@@ -629,16 +598,6 @@
     return subset;
   }
 
-  @Deprecated
-  @Override
-  public Collection<Text> getSplits(String tableName, int maxSplits) throws TableNotFoundException {
-    try {
-      return listSplits(tableName, maxSplits);
-    } catch (AccumuloSecurityException e) {
-      throw new RuntimeException(e);
-    }
-  }
-
   @Override
   public void delete(String tableName) throws AccumuloException, AccumuloSecurityException, TableNotFoundException {
     checkArgument(tableName != null, "tableName is null");
@@ -698,16 +657,6 @@
   }
 
   @Override
-  @Deprecated
-  public void flush(String tableName) throws AccumuloException, AccumuloSecurityException {
-    try {
-      flush(tableName, null, null, false);
-    } catch (TableNotFoundException e) {
-      throw new AccumuloException(e.getMessage(), e);
-    }
-  }
-
-  @Override
   public void flush(String tableName, Text start, Text end, boolean wait) throws AccumuloException, AccumuloSecurityException, TableNotFoundException {
     checkArgument(tableName != null, "tableName is null");
 
diff --git a/core/src/main/java/org/apache/accumulo/core/client/impl/ThriftScanner.java b/core/src/main/java/org/apache/accumulo/core/client/impl/ThriftScanner.java
index 91b2637..6e07b30 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/impl/ThriftScanner.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/impl/ThriftScanner.java
@@ -213,7 +213,7 @@
     return (long) (Math.max(millis * 2, 3000) * (.9 + Math.random() / 5));
   }
 
-  public static List<KeyValue> scan(ClientContext context, ScanState scanState, int timeOut) throws ScanTimedOutException, AccumuloException,
+  public static List<KeyValue> scan(ClientContext context, ScanState scanState, long timeOut) throws ScanTimedOutException, AccumuloException,
       AccumuloSecurityException, TableNotFoundException {
     TabletLocation loc = null;
     Instance instance = context.getInstance();
@@ -415,7 +415,7 @@
 
       if (scanState.scanID == null) {
         String msg = "Starting scan tserver=" + loc.tablet_location + " tablet=" + loc.tablet_extent + " range=" + scanState.range + " ssil="
-            + scanState.serverSideIteratorList + " ssio=" + scanState.serverSideIteratorOptions;
+            + scanState.serverSideIteratorList + " ssio=" + scanState.serverSideIteratorOptions + " context=" + scanState.classLoaderContext;
         Thread.currentThread().setName(msg);
 
         if (log.isTraceEnabled()) {
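For context on the `pauseWait` expression visible in the hunk above (`Math.max(millis * 2, 3000) * (.9 + Math.random() / 5)`): it doubles the previous wait, enforces a 3-second floor, then applies roughly ±10% jitter so that many retrying clients desynchronize. A standalone sketch of that calculation (class and method names are illustrative):

```java
import java.util.Random;

public class BackoffDemo {
  // Mirrors ThriftScanner's pause calculation: double the previous wait,
  // apply a 3000 ms floor, then multiply by a factor in [0.9, 1.1).
  static long nextPause(long millis, Random rnd) {
    return (long) (Math.max(millis * 2, 3000) * (.9 + rnd.nextDouble() / 5));
  }

  public static void main(String[] args) {
    Random rnd = new Random(42);
    long pause = 100;
    for (int i = 0; i < 4; i++) {
      pause = nextPause(pause, rnd);
      // Every value lies within 90%..110% of max(2 * previous, 3000),
      // so it can never drop below 2700 ms.
      System.out.println(pause >= 2700);
    }
  }
}
```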
diff --git a/core/src/main/java/org/apache/accumulo/core/client/impl/ThriftTransportPool.java b/core/src/main/java/org/apache/accumulo/core/client/impl/ThriftTransportPool.java
index 682ecbd..d9bd4e8 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/impl/ThriftTransportPool.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/impl/ThriftTransportPool.java
@@ -433,7 +433,7 @@
                 cachedConnection.setReserved(true);
                 final String serverAddr = ttk.getServer().toString();
                 log.trace("Using existing connection to {}", serverAddr);
-                return new Pair<String,TTransport>(serverAddr, cachedConnection.transport);
+                return new Pair<>(serverAddr, cachedConnection.transport);
               }
             }
           }
@@ -455,7 +455,7 @@
                 cachedConnection.setReserved(true);
                 final String serverAddr = ttk.getServer().toString();
                 log.trace("Using existing connection to {} timeout {}", serverAddr, ttk.getTimeout());
-                return new Pair<String,TTransport>(serverAddr, cachedConnection.transport);
+                return new Pair<>(serverAddr, cachedConnection.transport);
               }
             }
           }
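The `Pair` changes above swap explicit type arguments for the Java 7 diamond operator; the compiler infers the parameters from the declared target type, so behavior is unchanged. A small illustration using a cut-down stand-in for the `Pair` class (the real one lives in `org.apache.accumulo.core.util`):

```java
// Minimal stand-in for Accumulo's Pair, just to show the diamond operator.
class Pair<A, B> {
  private final A first;
  private final B second;

  Pair(A first, B second) {
    this.first = first;
    this.second = second;
  }

  A getFirst() {
    return first;
  }
}

public class DiamondDemo {
  public static void main(String[] args) {
    // Pre-Java-7 style: type arguments repeated on the right-hand side.
    Pair<String, Integer> verbose = new Pair<String, Integer>("host:9997", 30000);
    // Diamond operator: <String, Integer> is inferred from the target type.
    Pair<String, Integer> concise = new Pair<>("host:9997", 30000);
    System.out.println(verbose.getFirst().equals(concise.getFirst()));
  }
}
```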
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mapred/AbstractInputFormat.java b/core/src/main/java/org/apache/accumulo/core/client/mapred/AbstractInputFormat.java
index 6165346..9ca686c 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/mapred/AbstractInputFormat.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/mapred/AbstractInputFormat.java
@@ -16,6 +16,8 @@
  */
 package org.apache.accumulo.core.client.mapred;
 
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
+
 import java.io.IOException;
 import java.net.InetAddress;
 import java.util.ArrayList;
@@ -68,7 +70,6 @@
 import org.apache.accumulo.core.data.impl.KeyExtent;
 import org.apache.accumulo.core.master.state.tables.TableState;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.core.util.DeprecationUtil;
 import org.apache.accumulo.core.util.Pair;
 import org.apache.hadoop.io.Text;
 import org.apache.hadoop.mapred.InputFormat;
@@ -79,8 +80,6 @@
 import org.apache.log4j.Level;
 import org.apache.log4j.Logger;
 
-import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
-
 /**
  * An abstract input format to provide shared methods common to all other input format classes. At the very least, any classes inheriting from this class will
  * need to define their own {@link RecordReader}.
@@ -222,23 +221,6 @@
    *
    * @param job
    *          the Hadoop job instance to be configured
-   * @param instanceName
-   *          the Accumulo instance name
-   * @param zooKeepers
-   *          a comma-separated list of zookeeper servers
-   * @since 1.5.0
-   * @deprecated since 1.6.0; Use {@link #setZooKeeperInstance(JobConf, ClientConfiguration)} instead.
-   */
-  @Deprecated
-  public static void setZooKeeperInstance(JobConf job, String instanceName, String zooKeepers) {
-    setZooKeeperInstance(job, new ClientConfiguration().withInstance(instanceName).withZkHosts(zooKeepers));
-  }
-
-  /**
-   * Configures a {@link org.apache.accumulo.core.client.ZooKeeperInstance} for this job.
-   *
-   * @param job
-   *          the Hadoop job instance to be configured
    * @param clientConfig
    *          client configuration containing connection options
    * @since 1.6.0
@@ -248,21 +230,6 @@
   }
 
   /**
-   * Configures a {@link org.apache.accumulo.core.client.mock.MockInstance} for this job.
-   *
-   * @param job
-   *          the Hadoop job instance to be configured
-   * @param instanceName
-   *          the Accumulo instance name
-   * @since 1.5.0
-   * @deprecated since 1.8.0; use MiniAccumuloCluster or a standard mock framework
-   */
-  @Deprecated
-  public static void setMockInstance(JobConf job, String instanceName) {
-    InputConfigurator.setMockInstance(CLASS, job, instanceName);
-  }
-
-  /**
    * Initializes an Accumulo {@link org.apache.accumulo.core.client.Instance} based on the configuration.
    *
    * @param job
@@ -328,23 +295,6 @@
   }
 
   /**
-   * Initializes an Accumulo {@link org.apache.accumulo.core.client.impl.TabletLocator} based on the configuration.
-   *
-   * @param job
-   *          the Hadoop context for the configured job
-   * @return an Accumulo tablet locator
-   * @throws org.apache.accumulo.core.client.TableNotFoundException
-   *           if the table name set on the configuration doesn't exist
-   * @since 1.6.0
-   * @deprecated since 1.7.0 This method returns a type that is not part of the public API and is not guaranteed to be stable. The method was deprecated to
-   *             discourage its use.
-   */
-  @Deprecated
-  protected static TabletLocator getTabletLocator(JobConf job, String tableId) throws TableNotFoundException {
-    return InputConfigurator.getTabletLocator(CLASS, job, tableId);
-  }
-
-  /**
    * Fetch the client configuration from the job.
    *
    * @param job
@@ -467,23 +417,6 @@
     }
 
     /**
-     * Configures the iterators on a scanner for the given table name.
-     *
-     * @param job
-     *          the Hadoop job configuration
-     * @param scanner
-     *          the scanner for which to configure the iterators
-     * @param tableName
-     *          the table name for which the scanner is configured
-     * @since 1.6.0
-     * @deprecated since 1.7.0; Use {@link #jobIterators} instead.
-     */
-    @Deprecated
-    protected void setupIterators(JobConf job, Scanner scanner, String tableName, RangeInputSplit split) {
-      setupIterators(job, (ScannerBase) scanner, tableName, split);
-    }
-
-    /**
      * Initialize a scanner over the given input split using this task attempt configuration.
      */
     public void initialize(InputSplit inSplit, JobConf job) throws IOException {
@@ -561,8 +494,6 @@
         try {
           if (isOffline) {
             scanner = new OfflineScanner(instance, new Credentials(principal, token), baseSplit.getTableId(), authorizations);
-          } else if (DeprecationUtil.isMockInstance(instance)) {
-            scanner = instance.getConnector(principal, token).createScanner(baseSplit.getTableName(), authorizations);
           } else {
             ClientConfiguration clientConf = getClientConfiguration(job);
             ClientContext context = new ClientContext(instance, new Credentials(principal, token), clientConf);
@@ -671,14 +602,10 @@
       Instance instance = getInstance(job);
       String tableId;
       // resolve table name to id once, and use id from this point forward
-      if (DeprecationUtil.isMockInstance(instance)) {
-        tableId = "";
-      } else {
-        try {
-          tableId = Tables.getTableId(instance, tableName);
-        } catch (TableNotFoundException e) {
-          throw new IOException(e);
-        }
+      try {
+        tableId = Tables.getTableId(instance, tableName);
+      } catch (TableNotFoundException e) {
+        throw new IOException(e);
       }
 
       Authorizations auths = getScanAuthorizations(job);
@@ -720,12 +647,10 @@
           ClientContext context = new ClientContext(getInstance(job), new Credentials(getPrincipal(job), getAuthenticationToken(job)),
               getClientConfiguration(job));
           while (!tl.binRanges(context, ranges, binnedRanges).isEmpty()) {
-            if (!DeprecationUtil.isMockInstance(instance)) {
-              if (!Tables.exists(instance, tableId))
-                throw new TableDeletedException(tableId);
-              if (Tables.getTableState(instance, tableId) == TableState.OFFLINE)
-                throw new TableOfflineException(instance, tableId);
-            }
+            if (!Tables.exists(instance, tableId))
+              throw new TableDeletedException(tableId);
+            if (Tables.getTableState(instance, tableId) == TableState.OFFLINE)
+              throw new TableOfflineException(instance, tableId);
             binnedRanges.clear();
             log.warn("Unable to locate bins for specified ranges. Retrying.");
             // sleep randomly between 100 and 200 ms
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mapred/AccumuloFileOutputFormat.java b/core/src/main/java/org/apache/accumulo/core/client/mapred/AccumuloFileOutputFormat.java
index f2bc4cd..640a85d 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/mapred/AccumuloFileOutputFormat.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/mapred/AccumuloFileOutputFormat.java
@@ -53,21 +53,6 @@
   protected static final Logger log = Logger.getLogger(CLASS);
 
   /**
-   * This helper method provides an AccumuloConfiguration object constructed from the Accumulo defaults, and overridden with Accumulo properties that have been
-   * stored in the Job's configuration.
-   *
-   * @param job
-   *          the Hadoop context for the configured job
-   * @since 1.5.0
-   * @deprecated since 1.7.0 This method returns a type that is not part of the public API and is not guaranteed to be stable. The method was deprecated to
-   *             discourage its use.
-   */
-  @Deprecated
-  protected static AccumuloConfiguration getAccumuloConfiguration(JobConf job) {
-    return FileOutputConfigurator.getAccumuloConfiguration(CLASS, job);
-  }
-
-  /**
    * Sets the compression type to use for data blocks. Specifying a compression may require additional libraries to be available to your Job.
    *
    * @param job
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mapred/AccumuloOutputFormat.java b/core/src/main/java/org/apache/accumulo/core/client/mapred/AccumuloOutputFormat.java
index 5feadb8..9ac459e 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/mapred/AccumuloOutputFormat.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/mapred/AccumuloOutputFormat.java
@@ -42,7 +42,6 @@
 import org.apache.accumulo.core.client.mapreduce.lib.impl.OutputConfigurator;
 import org.apache.accumulo.core.client.security.SecurityErrorCode;
 import org.apache.accumulo.core.client.security.tokens.AuthenticationToken;
-import org.apache.accumulo.core.client.security.tokens.AuthenticationToken.AuthenticationTokenSerializer;
 import org.apache.accumulo.core.client.security.tokens.DelegationToken;
 import org.apache.accumulo.core.client.security.tokens.KerberosToken;
 import org.apache.accumulo.core.client.security.tokens.PasswordToken;
@@ -169,28 +168,6 @@
   }
 
   /**
-   * Gets the serialized token class from either the configuration or the token file.
-   *
-   * @since 1.5.0
-   * @deprecated since 1.6.0; Use {@link #getAuthenticationToken(JobConf)} instead.
-   */
-  @Deprecated
-  protected static String getTokenClass(JobConf job) {
-    return getAuthenticationToken(job).getClass().getName();
-  }
-
-  /**
-   * Gets the serialized token from either the configuration or the token file.
-   *
-   * @since 1.5.0
-   * @deprecated since 1.6.0; Use {@link #getAuthenticationToken(JobConf)} instead.
-   */
-  @Deprecated
-  protected static byte[] getToken(JobConf job) {
-    return AuthenticationTokenSerializer.serialize(getAuthenticationToken(job));
-  }
-
-  /**
    * Gets the authenticated token from either the specified token file or directly from the configuration, whichever was used when the job was configured.
    *
    * @param job
@@ -210,24 +187,6 @@
    *
    * @param job
    *          the Hadoop job instance to be configured
-   * @param instanceName
-   *          the Accumulo instance name
-   * @param zooKeepers
-   *          a comma-separated list of zookeeper servers
-   * @since 1.5.0
-   * @deprecated since 1.6.0; Use {@link #setZooKeeperInstance(JobConf, ClientConfiguration)} instead.
-   */
-
-  @Deprecated
-  public static void setZooKeeperInstance(JobConf job, String instanceName, String zooKeepers) {
-    setZooKeeperInstance(job, new ClientConfiguration().withInstance(instanceName).withZkHosts(zooKeepers));
-  }
-
-  /**
-   * Configures a {@link ZooKeeperInstance} for this job.
-   *
-   * @param job
-   *          the Hadoop job instance to be configured
    *
    * @param clientConfig
    *          client configuration for specifying connection timeouts, SSL connection options, etc.
@@ -238,21 +197,6 @@
   }
 
   /**
-   * Configures a {@link org.apache.accumulo.core.client.mock.MockInstance} for this job.
-   *
-   * @param job
-   *          the Hadoop job instance to be configured
-   * @param instanceName
-   *          the Accumulo instance name
-   * @since 1.5.0
-   * @deprecated since 1.8.0; use MiniAccumuloCluster or a standard mock framework
-   */
-  @Deprecated
-  public static void setMockInstance(JobConf job, String instanceName) {
-    OutputConfigurator.setMockInstance(CLASS, job, instanceName);
-  }
-
-  /**
    * Initializes an Accumulo {@link Instance} based on the configuration.
    *
    * @param job
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mapred/InputFormatBase.java b/core/src/main/java/org/apache/accumulo/core/client/mapred/InputFormatBase.java
index 0cf57d2..2523819 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/mapred/InputFormatBase.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/mapred/InputFormatBase.java
@@ -24,10 +24,7 @@
 import org.apache.accumulo.core.client.ClientSideIteratorScanner;
 import org.apache.accumulo.core.client.IsolatedScanner;
 import org.apache.accumulo.core.client.IteratorSetting;
-import org.apache.accumulo.core.client.Scanner;
 import org.apache.accumulo.core.client.ScannerBase;
-import org.apache.accumulo.core.client.TableNotFoundException;
-import org.apache.accumulo.core.client.impl.TabletLocator;
 import org.apache.accumulo.core.client.mapreduce.lib.impl.InputConfigurator;
 import org.apache.accumulo.core.client.sample.SamplerConfiguration;
 import org.apache.accumulo.core.data.Key;
@@ -355,22 +352,6 @@
     InputConfigurator.setSamplerConfiguration(CLASS, job, samplerConfig);
   }
 
-  /**
-   * Initializes an Accumulo {@link org.apache.accumulo.core.client.impl.TabletLocator} based on the configuration.
-   *
-   * @param job
-   *          the Hadoop job for the configured job
-   * @return an Accumulo tablet locator
-   * @throws org.apache.accumulo.core.client.TableNotFoundException
-   *           if the table name set on the job doesn't exist
-   * @since 1.5.0
-   * @deprecated since 1.6.0
-   */
-  @Deprecated
-  protected static TabletLocator getTabletLocator(JobConf job) throws TableNotFoundException {
-    return InputConfigurator.getTabletLocator(CLASS, job, InputConfigurator.getInputTableName(CLASS, job));
-  }
-
   protected abstract static class RecordReaderBase<K,V> extends AbstractRecordReader<K,V> {
 
     @Override
@@ -378,56 +359,6 @@
       return getIterators(job);
     }
 
-    /**
-     * Apply the configured iterators to the scanner.
-     *
-     * @param iterators
-     *          the iterators to set
-     * @param scanner
-     *          the scanner to configure
-     * @deprecated since 1.7.0; Use {@link #jobIterators} instead.
-     */
-    @Deprecated
-    protected void setupIterators(List<IteratorSetting> iterators, Scanner scanner) {
-      for (IteratorSetting iterator : iterators) {
-        scanner.addScanIterator(iterator);
-      }
-    }
-
-    /**
-     * Apply the configured iterators from the configuration to the scanner.
-     *
-     * @param job
-     *          the job configuration
-     * @param scanner
-     *          the scanner to configure
-     */
-    @Deprecated
-    protected void setupIterators(JobConf job, Scanner scanner) {
-      setupIterators(getIterators(job), scanner);
-    }
   }
 
-  /**
-   * @deprecated since 1.5.2; Use {@link org.apache.accumulo.core.client.mapred.RangeInputSplit} instead.
-   * @see org.apache.accumulo.core.client.mapred.RangeInputSplit
-   */
-  @Deprecated
-  public static class RangeInputSplit extends org.apache.accumulo.core.client.mapred.RangeInputSplit {
-    public RangeInputSplit() {
-      super();
-    }
-
-    public RangeInputSplit(RangeInputSplit other) throws IOException {
-      super(other);
-    }
-
-    public RangeInputSplit(String table, String tableId, Range range, String[] locations) {
-      super(table, tableId, range, locations);
-    }
-
-    protected RangeInputSplit(String table, Range range, String[] locations) {
-      super(table, "", range, locations);
-    }
-  }
 }
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/AbstractInputFormat.java b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/AbstractInputFormat.java
index 9ccf78a..e0da0f8 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/AbstractInputFormat.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/AbstractInputFormat.java
@@ -16,6 +16,8 @@
  */
 package org.apache.accumulo.core.client.mapreduce;
 
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
+
 import java.io.IOException;
 import java.net.InetAddress;
 import java.util.ArrayList;
@@ -67,7 +69,6 @@
 import org.apache.accumulo.core.data.impl.KeyExtent;
 import org.apache.accumulo.core.master.state.tables.TableState;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.core.util.DeprecationUtil;
 import org.apache.accumulo.core.util.Pair;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.io.Text;
@@ -81,8 +82,6 @@
 import org.apache.log4j.Level;
 import org.apache.log4j.Logger;
 
-import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
-
 /**
  * An abstract input format to provide shared methods common to all other input format classes. At the very least, any classes inheriting from this class will
  * need to define their own {@link RecordReader}.
@@ -206,28 +205,6 @@
   }
 
   /**
-   * Gets the serialized token class from either the configuration or the token file.
-   *
-   * @since 1.5.0
-   * @deprecated since 1.6.0; Use {@link #getAuthenticationToken(JobContext)} instead.
-   */
-  @Deprecated
-  protected static String getTokenClass(JobContext context) {
-    return getAuthenticationToken(context).getClass().getName();
-  }
-
-  /**
-   * Gets the serialized token from either the configuration or the token file.
-   *
-   * @since 1.5.0
-   * @deprecated since 1.6.0; Use {@link #getAuthenticationToken(JobContext)} instead.
-   */
-  @Deprecated
-  protected static byte[] getToken(JobContext context) {
-    return AuthenticationToken.AuthenticationTokenSerializer.serialize(getAuthenticationToken(context));
-  }
-
-  /**
    * Gets the authenticated token from either the specified token file or directly from the configuration, whichever was used when the job was configured.
    *
    * @param context
@@ -247,23 +224,6 @@
    *
    * @param job
    *          the Hadoop job instance to be configured
-   * @param instanceName
-   *          the Accumulo instance name
-   * @param zooKeepers
-   *          a comma-separated list of zookeeper servers
-   * @since 1.5.0
-   * @deprecated since 1.6.0; Use {@link #setZooKeeperInstance(Job, ClientConfiguration)} instead.
-   */
-  @Deprecated
-  public static void setZooKeeperInstance(Job job, String instanceName, String zooKeepers) {
-    setZooKeeperInstance(job, new ClientConfiguration().withInstance(instanceName).withZkHosts(zooKeepers));
-  }
-
-  /**
-   * Configures a {@link org.apache.accumulo.core.client.ZooKeeperInstance} for this job.
-   *
-   * @param job
-   *          the Hadoop job instance to be configured
    *
    * @param clientConfig
    *          client configuration containing connection options
@@ -274,21 +234,6 @@
   }
 
   /**
-   * Configures a {@link org.apache.accumulo.core.client.mock.MockInstance} for this job.
-   *
-   * @param job
-   *          the Hadoop job instance to be configured
-   * @param instanceName
-   *          the Accumulo instance name
-   * @since 1.5.0
-   * @deprecated since 1.8.0; use MiniAccumuloCluster or a standard mock framework
-   */
-  @Deprecated
-  public static void setMockInstance(Job job, String instanceName) {
-    InputConfigurator.setMockInstance(CLASS, job.getConfiguration(), instanceName);
-  }
-
-  /**
    * Initializes an Accumulo {@link org.apache.accumulo.core.client.Instance} based on the configuration.
    *
    * @param context
@@ -381,25 +326,6 @@
     return InputConfigurator.getInputTableConfig(CLASS, context.getConfiguration(), tableName);
   }
 
-  /**
-   * Initializes an Accumulo {@link org.apache.accumulo.core.client.impl.TabletLocator} based on the configuration.
-   *
-   * @param context
-   *          the Hadoop context for the configured job
-   * @param table
-   *          the table for which to initialize the locator
-   * @return an Accumulo tablet locator
-   * @throws org.apache.accumulo.core.client.TableNotFoundException
-   *           if the table name set on the configuration doesn't exist
-   * @since 1.6.0
-   * @deprecated since 1.7.0 This method returns a type that is not part of the public API and is not guaranteed to be stable. The method was deprecated to
-   *             discourage its use.
-   */
-  @Deprecated
-  protected static TabletLocator getTabletLocator(JobContext context, String table) throws TableNotFoundException {
-    return InputConfigurator.getTabletLocator(CLASS, context.getConfiguration(), table);
-  }
-
   // InputFormat doesn't have the equivalent of OutputFormat's checkOutputSpecs(JobContext job)
   /**
    * Check whether a configuration is fully configured to be used with an Accumulo {@link org.apache.hadoop.mapreduce.InputFormat}.
@@ -498,23 +424,6 @@
         scanner.addScanIterator(iterator);
     }
 
-    /**
-     * Configures the iterators on a scanner for the given table name.
-     *
-     * @param context
-     *          the Hadoop context for the configured job
-     * @param scanner
-     *          the scanner for which to configure the iterators
-     * @param tableName
-     *          the table name for which the scanner is configured
-     * @since 1.6.0
-     * @deprecated since 1.7.0; Use {@link #contextIterators} instead.
-     */
-    @Deprecated
-    protected void setupIterators(TaskAttemptContext context, Scanner scanner, String tableName, RangeInputSplit split) {
-      setupIterators(context, (ScannerBase) scanner, tableName, split);
-    }
-
     @Override
     public void initialize(InputSplit inSplit, TaskAttemptContext attempt) throws IOException {
 
@@ -591,8 +500,6 @@
         try {
           if (isOffline) {
             scanner = new OfflineScanner(instance, new Credentials(principal, token), split.getTableId(), authorizations);
-          } else if (DeprecationUtil.isMockInstance(instance)) {
-            scanner = instance.getConnector(principal, token).createScanner(split.getTableName(), authorizations);
           } else {
             ClientConfiguration clientConf = getClientConfiguration(attempt);
             ClientContext context = new ClientContext(instance, new Credentials(principal, token), clientConf);
@@ -718,14 +625,10 @@
       Instance instance = getInstance(context);
       String tableId;
       // resolve table name to id once, and use id from this point forward
-      if (DeprecationUtil.isMockInstance(instance)) {
-        tableId = "";
-      } else {
-        try {
-          tableId = Tables.getTableId(instance, tableName);
-        } catch (TableNotFoundException e) {
-          throw new IOException(e);
-        }
+      try {
+        tableId = Tables.getTableId(instance, tableName);
+      } catch (TableNotFoundException e) {
+        throw new IOException(e);
       }
 
       Authorizations auths = getScanAuthorizations(context);
@@ -768,12 +671,10 @@
           ClientContext clientContext = new ClientContext(getInstance(context), new Credentials(getPrincipal(context), getAuthenticationToken(context)),
               getClientConfiguration(context));
           while (!tl.binRanges(clientContext, ranges, binnedRanges).isEmpty()) {
-            if (!DeprecationUtil.isMockInstance(instance)) {
-              if (!Tables.exists(instance, tableId))
-                throw new TableDeletedException(tableId);
-              if (Tables.getTableState(instance, tableId) == TableState.OFFLINE)
-                throw new TableOfflineException(instance, tableId);
-            }
+            if (!Tables.exists(instance, tableId))
+              throw new TableDeletedException(tableId);
+            if (Tables.getTableState(instance, tableId) == TableState.OFFLINE)
+              throw new TableOfflineException(instance, tableId);
             binnedRanges.clear();
             log.warn("Unable to locate bins for specified ranges. Retrying.");
             // sleep randomly between 100 and 200 ms
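With the mock-instance guard removed in the hunk above, the table-deleted and table-offline checks now run on every pass of the range-binning retry loop, which then pauses for a random 100-200 ms before retrying so that many mapper tasks do not hammer the locator in lockstep. A skeletal sketch of that loop shape (`binRangesFailed` is a hypothetical stand-in for `tl.binRanges(...)`):

```java
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;

public class RetryLoopDemo {
  static int attempts = 0;

  // Hypothetical stand-in for the binRanges call: fails twice, succeeds third time.
  static boolean binRangesFailed() {
    return ++attempts < 3;
  }

  public static void main(String[] args) throws InterruptedException {
    while (binRangesFailed()) {
      // In the real code, the table's existence and online state are verified
      // here on every pass, and partial results are cleared before retrying.
      // Sleep randomly between 100 and 200 ms to spread out retries across tasks.
      TimeUnit.MILLISECONDS.sleep(ThreadLocalRandom.current().nextLong(100, 200));
    }
    System.out.println(attempts);
  }
}
```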
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/AccumuloFileOutputFormat.java b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/AccumuloFileOutputFormat.java
index 75afe2b..656dba7 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/AccumuloFileOutputFormat.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/AccumuloFileOutputFormat.java
@@ -30,7 +30,6 @@
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.mapreduce.Job;
-import org.apache.hadoop.mapreduce.JobContext;
 import org.apache.hadoop.mapreduce.RecordWriter;
 import org.apache.hadoop.mapreduce.TaskAttemptContext;
 import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
@@ -52,21 +51,6 @@
   protected static final Logger log = Logger.getLogger(CLASS);
 
   /**
-   * This helper method provides an AccumuloConfiguration object constructed from the Accumulo defaults, and overridden with Accumulo properties that have been
-   * stored in the Job's configuration.
-   *
-   * @param context
-   *          the Hadoop context for the configured job
-   * @since 1.5.0
-   * @deprecated since 1.7.0 This method returns a type that is not part of the public API and is not guaranteed to be stable. The method was deprecated to
-   *             discourage its use.
-   */
-  @Deprecated
-  protected static AccumuloConfiguration getAccumuloConfiguration(JobContext context) {
-    return FileOutputConfigurator.getAccumuloConfiguration(CLASS, context.getConfiguration());
-  }
-
-  /**
    * Sets the compression type to use for data blocks. Specifying a compression may require additional libraries to be available to your Job.
    *
    * @param job
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/AccumuloOutputFormat.java b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/AccumuloOutputFormat.java
index 1e06ca3..9969b30 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/AccumuloOutputFormat.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/AccumuloOutputFormat.java
@@ -42,7 +42,6 @@
 import org.apache.accumulo.core.client.mapreduce.lib.impl.OutputConfigurator;
 import org.apache.accumulo.core.client.security.SecurityErrorCode;
 import org.apache.accumulo.core.client.security.tokens.AuthenticationToken;
-import org.apache.accumulo.core.client.security.tokens.AuthenticationToken.AuthenticationTokenSerializer;
 import org.apache.accumulo.core.client.security.tokens.DelegationToken;
 import org.apache.accumulo.core.client.security.tokens.KerberosToken;
 import org.apache.accumulo.core.client.security.tokens.PasswordToken;
@@ -170,28 +169,6 @@
   }
 
   /**
-   * Gets the serialized token class from either the configuration or the token file.
-   *
-   * @since 1.5.0
-   * @deprecated since 1.6.0; Use {@link #getAuthenticationToken(JobContext)} instead.
-   */
-  @Deprecated
-  protected static String getTokenClass(JobContext context) {
-    return getAuthenticationToken(context).getClass().getName();
-  }
-
-  /**
-   * Gets the serialized token from either the configuration or the token file.
-   *
-   * @since 1.5.0
-   * @deprecated since 1.6.0; Use {@link #getAuthenticationToken(JobContext)} instead.
-   */
-  @Deprecated
-  protected static byte[] getToken(JobContext context) {
-    return AuthenticationTokenSerializer.serialize(getAuthenticationToken(context));
-  }
-
-  /**
    * Gets the authenticated token from either the specified token file or directly from the configuration, whichever was used when the job was configured.
    *
    * @param context
@@ -211,23 +188,6 @@
    *
    * @param job
    *          the Hadoop job instance to be configured
-   * @param instanceName
-   *          the Accumulo instance name
-   * @param zooKeepers
-   *          a comma-separated list of zookeeper servers
-   * @since 1.5.0
-   * @deprecated since 1.6.0; Use {@link #setZooKeeperInstance(Job, ClientConfiguration)} instead.
-   */
-  @Deprecated
-  public static void setZooKeeperInstance(Job job, String instanceName, String zooKeepers) {
-    setZooKeeperInstance(job, new ClientConfiguration().withInstance(instanceName).withZkHosts(zooKeepers));
-  }
-
-  /**
-   * Configures a {@link ZooKeeperInstance} for this job.
-   *
-   * @param job
-   *          the Hadoop job instance to be configured
    *
    * @param clientConfig
    *          client configuration for specifying connection timeouts, SSL connection options, etc.
@@ -238,20 +198,6 @@
   }
 
   /**
-   * Configures a {@link org.apache.accumulo.core.client.mock.MockInstance} for this job.
-   *
-   * @param job
-   *          the Hadoop job instance to be configured
-   * @param instanceName
-   *          the Accumulo instance name
-   * @since 1.5.0
-   */
-  @Deprecated
-  public static void setMockInstance(Job job, String instanceName) {
-    OutputConfigurator.setMockInstance(CLASS, job.getConfiguration(), instanceName);
-  }
-
-  /**
    * Initializes an Accumulo {@link Instance} based on the configuration.
    *
    * @param context
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/InputFormatBase.java b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/InputFormatBase.java
index 324d5c7..c6ae5a2 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/InputFormatBase.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/InputFormatBase.java
@@ -24,10 +24,7 @@
 import org.apache.accumulo.core.client.ClientSideIteratorScanner;
 import org.apache.accumulo.core.client.IsolatedScanner;
 import org.apache.accumulo.core.client.IteratorSetting;
-import org.apache.accumulo.core.client.Scanner;
 import org.apache.accumulo.core.client.ScannerBase;
-import org.apache.accumulo.core.client.TableNotFoundException;
-import org.apache.accumulo.core.client.impl.TabletLocator;
 import org.apache.accumulo.core.client.mapreduce.lib.impl.InputConfigurator;
 import org.apache.accumulo.core.client.sample.SamplerConfiguration;
 import org.apache.accumulo.core.data.Key;
@@ -354,22 +351,6 @@
     InputConfigurator.setSamplerConfiguration(CLASS, job.getConfiguration(), samplerConfig);
   }
 
-  /**
-   * Initializes an Accumulo {@link org.apache.accumulo.core.client.impl.TabletLocator} based on the configuration.
-   *
-   * @param context
-   *          the Hadoop context for the configured job
-   * @return an Accumulo tablet locator
-   * @throws org.apache.accumulo.core.client.TableNotFoundException
-   *           if the table name set on the configuration doesn't exist
-   * @since 1.5.0
-   * @deprecated since 1.6.0
-   */
-  @Deprecated
-  protected static TabletLocator getTabletLocator(JobContext context) throws TableNotFoundException {
-    return InputConfigurator.getTabletLocator(CLASS, context.getConfiguration(), InputConfigurator.getInputTableName(CLASS, context.getConfiguration()));
-  }
-
   protected abstract static class RecordReaderBase<K,V> extends AbstractRecordReader<K,V> {
 
     @Override
@@ -377,53 +358,6 @@
       return getIterators(context);
     }
 
-    /**
-     * Apply the configured iterators from the configuration to the scanner.
-     *
-     * @param context
-     *          the Hadoop context for the configured job
-     * @param scanner
-     *          the scanner to configure
-     * @deprecated since 1.7.0; Use {@link #contextIterators} instead.
-     */
-    @Deprecated
-    protected void setupIterators(TaskAttemptContext context, Scanner scanner) {
-      // tableName is given as null as it will be ignored in eventual call to #contextIterators
-      setupIterators(context, scanner, null, null);
-    }
-
-    /**
-     * Initialize a scanner over the given input split using this task attempt configuration.
-     *
-     * @deprecated since 1.7.0; Use {@link #contextIterators} instead.
-     */
-    @Deprecated
-    protected void setupIterators(TaskAttemptContext context, Scanner scanner, org.apache.accumulo.core.client.mapreduce.RangeInputSplit split) {
-      setupIterators(context, scanner, null, split);
-    }
   }
 
-  /**
-   * @deprecated since 1.5.2; Use {@link org.apache.accumulo.core.client.mapreduce.RangeInputSplit} instead.
-   * @see org.apache.accumulo.core.client.mapreduce.RangeInputSplit
-   */
-  @Deprecated
-  public static class RangeInputSplit extends org.apache.accumulo.core.client.mapreduce.RangeInputSplit {
-
-    public RangeInputSplit() {
-      super();
-    }
-
-    public RangeInputSplit(RangeInputSplit other) throws IOException {
-      super(other);
-    }
-
-    protected RangeInputSplit(String table, Range range, String[] locations) {
-      super(table, "", range, locations);
-    }
-
-    public RangeInputSplit(String table, String tableId, Range range, String[] locations) {
-      super(table, tableId, range, locations);
-    }
-  }
 }
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/InputTableConfig.java b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/InputTableConfig.java
index a8724c2..b892ac1 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/InputTableConfig.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/InputTableConfig.java
@@ -75,7 +75,7 @@
    * Returns the ranges to be queried in the configuration
    */
   public List<Range> getRanges() {
-    return ranges != null ? ranges : new ArrayList<Range>();
+    return ranges != null ? ranges : new ArrayList<>();
   }
 
   /**
@@ -95,7 +95,7 @@
    * Returns the columns to be fetched for this configuration
    */
   public Collection<Pair<Text,Text>> getFetchedColumns() {
-    return columns != null ? columns : new HashSet<Pair<Text,Text>>();
+    return columns != null ? columns : new HashSet<>();
   }
 
   /**
@@ -114,7 +114,7 @@
    * Returns the iterators to be set on this configuration
    */
   public List<IteratorSetting> getIterators() {
-    return iterators != null ? iterators : new ArrayList<IteratorSetting>();
+    return iterators != null ? iterators : new ArrayList<>();
   }
 
   /**
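The `InputTableConfig` hunks above replace explicit type arguments like `new ArrayList<Range>()` with the Java 7 diamond operator, which infers the type argument from the target type. A minimal JDK-only sketch of the same null-guarded accessor pattern (the `names` field and class name here are hypothetical, for illustration only):

```java
import java.util.ArrayList;
import java.util.List;

public class DiamondExample {
    // May be null until configured, as with ranges/columns/iterators
    // in InputTableConfig.
    private static List<String> names;

    // Diamond operator: the compiler infers <String> from the return
    // type, so new ArrayList<>() replaces new ArrayList<String>().
    public static List<String> getNames() {
        return names != null ? names : new ArrayList<>();
    }

    public static void main(String[] args) {
        System.out.println(getNames().size()); // prints 0 when names is unset
    }
}
```

The behavior is unchanged; the diamond form just drops the redundant type argument.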
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/RangeInputSplit.java b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/RangeInputSplit.java
index 1e89500..081054d 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/RangeInputSplit.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/RangeInputSplit.java
@@ -23,6 +23,7 @@
 import java.io.IOException;
 import java.util.ArrayList;
 import java.util.Arrays;
+import java.util.Base64;
 import java.util.Collection;
 import java.util.HashSet;
 import java.util.List;
@@ -44,8 +45,6 @@
 import org.apache.accumulo.core.data.Range;
 import org.apache.accumulo.core.sample.impl.SamplerConfigurationImpl;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.core.util.Base64;
-import org.apache.accumulo.core.util.DeprecationUtil;
 import org.apache.accumulo.core.util.Pair;
 import org.apache.hadoop.io.Text;
 import org.apache.hadoop.io.Writable;
@@ -62,7 +61,7 @@
   private TokenSource tokenSource;
   private String tokenFile;
   private AuthenticationToken token;
-  private Boolean offline, mockInstance, isolatedScan, localIterators;
+  private Boolean offline, isolatedScan, localIterators;
   private Authorizations auths;
   private Set<Pair<Text,Text>> fetchedColumns;
   private List<IteratorSetting> iterators;
@@ -155,10 +154,6 @@
     }
 
     if (in.readBoolean()) {
-      mockInstance = in.readBoolean();
-    }
-
-    if (in.readBoolean()) {
       int numColumns = in.readInt();
       List<String> columns = new ArrayList<>(numColumns);
       for (int i = 0; i < numColumns; i++) {
@@ -184,8 +179,7 @@
       switch (this.tokenSource) {
         case INLINE:
           String tokenClass = in.readUTF();
-          byte[] base64TokenBytes = in.readUTF().getBytes(UTF_8);
-          byte[] tokenBytes = Base64.decodeBase64(base64TokenBytes);
+          byte[] tokenBytes = Base64.getDecoder().decode(in.readUTF());
 
           this.token = AuthenticationTokenSerializer.deserialize(tokenClass, tokenBytes);
           break;
@@ -248,11 +242,6 @@
       out.writeBoolean(localIterators);
     }
 
-    out.writeBoolean(null != mockInstance);
-    if (null != mockInstance) {
-      out.writeBoolean(mockInstance);
-    }
-
     out.writeBoolean(null != fetchedColumns);
     if (null != fetchedColumns) {
       String[] cols = InputConfigurator.serializeColumns(fetchedColumns);
@@ -280,7 +269,7 @@
         throw new IOException("Cannot use both inline AuthenticationToken and file-based AuthenticationToken");
       } else if (null != token) {
         out.writeUTF(token.getClass().getName());
-        out.writeUTF(Base64.encodeBase64String(AuthenticationTokenSerializer.serialize(token)));
+        out.writeUTF(Base64.getEncoder().encodeToString(AuthenticationTokenSerializer.serialize(token)));
       } else {
         out.writeUTF(tokenFile);
       }
@@ -315,30 +304,10 @@
     }
   }
 
-  /**
-   * Use {@link #getTableName}
-   *
-   * @deprecated since 1.6.1, use getTableName() instead.
-   */
-  @Deprecated
-  public String getTable() {
-    return getTableName();
-  }
-
   public String getTableName() {
     return tableName;
   }
 
-  /**
-   * Use {@link #setTableName}
-   *
-   * @deprecated since 1.6.1, use setTableName() instead.
-   */
-  @Deprecated
-  public void setTable(String table) {
-    setTableName(table);
-  }
-
   public void setTableName(String table) {
     this.tableName = table;
   }
@@ -351,24 +320,11 @@
     return tableId;
   }
 
-  /**
-   * @see #getInstance(ClientConfiguration)
-   * @deprecated since 1.7.0, use getInstance(ClientConfiguration) instead.
-   */
-  @Deprecated
-  public Instance getInstance() {
-    return getInstance(ClientConfiguration.loadDefault());
-  }
-
   public Instance getInstance(ClientConfiguration base) {
     if (null == instanceName) {
       return null;
     }
 
-    if (isMockInstance()) {
-      return DeprecationUtil.makeMockInstance(getInstanceName());
-    }
-
     if (null == zooKeepers) {
       return null;
     }
@@ -426,22 +382,6 @@
     this.locations = Arrays.copyOf(locations, locations.length);
   }
 
-  /**
-   * @deprecated since 1.8.0; use MiniAccumuloCluster or a standard mock framework
-   */
-  @Deprecated
-  public Boolean isMockInstance() {
-    return mockInstance;
-  }
-
-  /**
-   * @deprecated since 1.8.0; use MiniAccumuloCluster or a standard mock framework
-   */
-  @Deprecated
-  public void setMockInstance(Boolean mockInstance) {
-    this.mockInstance = mockInstance;
-  }
-
   public Boolean isIsolatedScan() {
     return isolatedScan;
   }
@@ -516,7 +456,6 @@
     sb.append(" authenticationTokenFile: ").append(tokenFile);
     sb.append(" Authorizations: ").append(auths);
     sb.append(" offlineScan: ").append(offline);
-    sb.append(" mockInstance: ").append(mockInstance);
     sb.append(" isolatedScan: ").append(isolatedScan);
     sb.append(" localIterators: ").append(localIterators);
     sb.append(" fetchColumns: ").append(fetchedColumns);
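The `RangeInputSplit` hunks above swap Accumulo's internal `org.apache.accumulo.core.util.Base64` wrapper for the JDK's `java.util.Base64` (available since Java 8). This also removes the intermediate `in.readUTF().getBytes(UTF_8)` step, because `Base64.getDecoder().decode(String)` accepts the string directly. A minimal round-trip sketch of the new-style calls (the token value is a made-up placeholder):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class Base64RoundTrip {
    public static void main(String[] args) {
        byte[] tokenBytes = "secret-token".getBytes(StandardCharsets.UTF_8);

        // Replaces commons-codec-backed Base64.encodeBase64String(...)
        String encoded = Base64.getEncoder().encodeToString(tokenBytes);

        // The JDK decoder takes a String, so no .getBytes(UTF_8) is needed
        byte[] decoded = Base64.getDecoder().decode(encoded);

        System.out.println(new String(decoded, StandardCharsets.UTF_8)); // prints secret-token
    }
}
```

The encode/decode pair is a drop-in replacement for the removed utility, which is why the same mechanical substitution appears across `ConfiguratorBase`, `InputConfigurator`, and `RangePartitioner` below.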
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/impl/SplitUtils.java b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/impl/SplitUtils.java
index b81b064..315e40c 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/impl/SplitUtils.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/impl/SplitUtils.java
@@ -27,7 +27,6 @@
 import org.apache.accumulo.core.data.ByteSequence;
 import org.apache.accumulo.core.data.Range;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.core.util.DeprecationUtil;
 import org.apache.hadoop.io.Text;
 import org.apache.log4j.Level;
 
@@ -41,7 +40,6 @@
       Authorizations auths, Level logLevel) {
     split.setInstanceName(instance.getInstanceName());
     split.setZooKeepers(instance.getZooKeepers());
-    DeprecationUtil.setMockInstance(split, DeprecationUtil.isMockInstance(instance));
 
     split.setPrincipal(principal);
     split.setToken(token);
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/impl/ConfiguratorBase.java b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/impl/ConfiguratorBase.java
index 67fe2f4..2911c77 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/impl/ConfiguratorBase.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/impl/ConfiguratorBase.java
@@ -17,7 +17,6 @@
 package org.apache.accumulo.core.client.mapreduce.lib.impl;
 
 import static com.google.common.base.Preconditions.checkArgument;
-import static java.nio.charset.StandardCharsets.UTF_8;
 import static java.util.Objects.requireNonNull;
 
 import java.io.ByteArrayInputStream;
@@ -25,6 +24,7 @@
 import java.io.IOException;
 import java.net.URI;
 import java.net.URISyntaxException;
+import java.util.Base64;
 
 import org.apache.accumulo.core.Constants;
 import org.apache.accumulo.core.client.AccumuloSecurityException;
@@ -37,8 +37,6 @@
 import org.apache.accumulo.core.client.mapreduce.impl.DelegationTokenStub;
 import org.apache.accumulo.core.client.security.tokens.AuthenticationToken;
 import org.apache.accumulo.core.client.security.tokens.AuthenticationToken.AuthenticationTokenSerializer;
-import org.apache.accumulo.core.util.Base64;
-import org.apache.accumulo.core.util.DeprecationUtil;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FSDataInputStream;
 import org.apache.hadoop.fs.FileSystem;
@@ -155,8 +153,8 @@
       conf.set(enumToConfKey(implementingClass, ConnectorInfo.TOKEN), TokenSource.JOB.prefix() + token.getClass().getName() + ":"
           + delToken.getServiceName().toString());
     } else {
-      conf.set(enumToConfKey(implementingClass, ConnectorInfo.TOKEN),
-          TokenSource.INLINE.prefix() + token.getClass().getName() + ":" + Base64.encodeBase64String(AuthenticationTokenSerializer.serialize(token)));
+      conf.set(enumToConfKey(implementingClass, ConnectorInfo.TOKEN), TokenSource.INLINE.prefix() + token.getClass().getName() + ":"
+          + Base64.getEncoder().encodeToString(AuthenticationTokenSerializer.serialize(token)));
     }
   }
 
@@ -244,7 +242,7 @@
     if (token.startsWith(TokenSource.INLINE.prefix())) {
       String[] args = token.substring(TokenSource.INLINE.prefix().length()).split(":", 2);
       if (args.length == 2)
-        return AuthenticationTokenSerializer.deserialize(args[0], Base64.decodeBase64(args[1].getBytes(UTF_8)));
+        return AuthenticationTokenSerializer.deserialize(args[0], Base64.getDecoder().decode(args[1]));
     } else if (token.startsWith(TokenSource.FILE.prefix())) {
       String tokenFileName = token.substring(TokenSource.FILE.prefix().length());
       return getTokenFromFile(conf, getPrincipal(implementingClass, conf), tokenFileName);
@@ -321,29 +319,6 @@
   }
 
   /**
-   * Configures a {@link org.apache.accumulo.core.client.mock.MockInstance} for this job.
-   *
-   * @param implementingClass
-   *          the class whose name will be used as a prefix for the property configuration key
-   * @param conf
-   *          the Hadoop configuration object to configure
-   * @param instanceName
-   *          the Accumulo instance name
-   * @since 1.6.0
-   * @deprecated since 1.8.0; use MiniAccumuloCluster or a standard mock framework
-   */
-  @Deprecated
-  public static void setMockInstance(Class<?> implementingClass, Configuration conf, String instanceName) {
-    String key = enumToConfKey(implementingClass, InstanceOpts.TYPE);
-    if (!conf.get(key, "").isEmpty())
-      throw new IllegalStateException("Instance info can only be set once per job; it has already been configured with " + conf.get(key));
-    conf.set(key, "MockInstance");
-
-    checkArgument(instanceName != null, "instanceName is null");
-    conf.set(enumToConfKey(implementingClass, InstanceOpts.NAME), instanceName);
-  }
-
-  /**
    * Initializes an Accumulo {@link Instance} based on the configuration.
    *
    * @param implementingClass
@@ -356,9 +331,7 @@
    */
   public static Instance getInstance(Class<?> implementingClass, Configuration conf) {
     String instanceType = conf.get(enumToConfKey(implementingClass, InstanceOpts.TYPE), "");
-    if ("MockInstance".equals(instanceType))
-      return DeprecationUtil.makeMockInstance(conf.get(enumToConfKey(implementingClass, InstanceOpts.NAME)));
-    else if ("ZooKeeperInstance".equals(instanceType)) {
+    if ("ZooKeeperInstance".equals(instanceType)) {
       return new ZooKeeperInstance(getClientConfiguration(implementingClass, conf));
     } else if (instanceType.isEmpty())
       throw new IllegalStateException("Instance has not been configured for " + implementingClass.getSimpleName());
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/impl/InputConfigurator.java b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/impl/InputConfigurator.java
index 986e071..57158a6 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/impl/InputConfigurator.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/impl/InputConfigurator.java
@@ -26,6 +26,7 @@
 import java.io.DataOutputStream;
 import java.io.IOException;
 import java.util.ArrayList;
+import java.util.Base64;
 import java.util.Collection;
 import java.util.HashMap;
 import java.util.HashSet;
@@ -49,12 +50,10 @@
 import org.apache.accumulo.core.client.TableNotFoundException;
 import org.apache.accumulo.core.client.impl.ClientContext;
 import org.apache.accumulo.core.client.impl.Credentials;
-import org.apache.accumulo.core.client.impl.DelegationTokenImpl;
 import org.apache.accumulo.core.client.impl.Tables;
 import org.apache.accumulo.core.client.impl.TabletLocator;
 import org.apache.accumulo.core.client.mapreduce.InputTableConfig;
 import org.apache.accumulo.core.client.sample.SamplerConfiguration;
-import org.apache.accumulo.core.client.security.tokens.AuthenticationToken;
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.PartialKey;
 import org.apache.accumulo.core.data.Range;
@@ -67,8 +66,6 @@
 import org.apache.accumulo.core.sample.impl.SamplerConfigurationImpl;
 import org.apache.accumulo.core.security.Authorizations;
 import org.apache.accumulo.core.security.TablePermission;
-import org.apache.accumulo.core.util.Base64;
-import org.apache.accumulo.core.util.DeprecationUtil;
 import org.apache.accumulo.core.util.Pair;
 import org.apache.accumulo.core.util.TextUtil;
 import org.apache.hadoop.conf.Configuration;
@@ -214,7 +211,7 @@
       for (Range r : ranges) {
         ByteArrayOutputStream baos = new ByteArrayOutputStream();
         r.write(new DataOutputStream(baos));
-        rangeStrings.add(Base64.encodeBase64String(baos.toByteArray()));
+        rangeStrings.add(Base64.getEncoder().encodeToString(baos.toByteArray()));
       }
       conf.setStrings(enumToConfKey(implementingClass, ScanOpts.RANGES), rangeStrings.toArray(new String[0]));
     } catch (IOException ex) {
@@ -240,7 +237,7 @@
     Collection<String> encodedRanges = conf.getStringCollection(enumToConfKey(implementingClass, ScanOpts.RANGES));
     List<Range> ranges = new ArrayList<>();
     for (String rangeString : encodedRanges) {
-      ByteArrayInputStream bais = new ByteArrayInputStream(Base64.decodeBase64(rangeString.getBytes(UTF_8)));
+      ByteArrayInputStream bais = new ByteArrayInputStream(Base64.getDecoder().decode(rangeString));
       Range range = new Range();
       range.readFields(new DataInputStream(bais));
       ranges.add(range);
@@ -272,7 +269,7 @@
     try {
       while (tokens.hasMoreTokens()) {
         String itstring = tokens.nextToken();
-        ByteArrayInputStream bais = new ByteArrayInputStream(Base64.decodeBase64(itstring.getBytes(UTF_8)));
+        ByteArrayInputStream bais = new ByteArrayInputStream(Base64.getDecoder().decode(itstring));
         list.add(new IteratorSetting(new DataInputStream(bais)));
         bais.close();
       }
@@ -310,9 +307,9 @@
       if (column.getFirst() == null)
         throw new IllegalArgumentException("Column family can not be null");
 
-      String col = Base64.encodeBase64String(TextUtil.getBytes(column.getFirst()));
+      String col = Base64.getEncoder().encodeToString(TextUtil.getBytes(column.getFirst()));
       if (column.getSecond() != null)
-        col += ":" + Base64.encodeBase64String(TextUtil.getBytes(column.getSecond()));
+        col += ":" + Base64.getEncoder().encodeToString(TextUtil.getBytes(column.getSecond()));
       columnStrings.add(col);
     }
 
@@ -352,8 +349,8 @@
 
     for (String col : serialized) {
       int idx = col.indexOf(":");
-      Text cf = new Text(idx < 0 ? Base64.decodeBase64(col.getBytes(UTF_8)) : Base64.decodeBase64(col.substring(0, idx).getBytes(UTF_8)));
-      Text cq = idx < 0 ? null : new Text(Base64.decodeBase64(col.substring(idx + 1).getBytes(UTF_8)));
+      Text cf = new Text(idx < 0 ? Base64.getDecoder().decode(col) : Base64.getDecoder().decode(col.substring(0, idx)));
+      Text cq = idx < 0 ? null : new Text(Base64.getDecoder().decode(col.substring(idx + 1)));
       columns.add(new Pair<>(cf, cq));
     }
     return columns;
@@ -377,7 +374,7 @@
     String newIter;
     try {
       cfg.write(new DataOutputStream(baos));
-      newIter = Base64.encodeBase64String(baos.toByteArray());
+      newIter = Base64.getEncoder().encodeToString(baos.toByteArray());
       baos.close();
     } catch (IOException e) {
       throw new IllegalArgumentException("unable to serialize IteratorSetting");
@@ -607,7 +604,7 @@
     }
 
     String confKey = enumToConfKey(implementingClass, ScanOpts.TABLE_CONFIGS);
-    conf.set(confKey, Base64.encodeBase64String(baos.toByteArray()));
+    conf.set(confKey, Base64.getEncoder().encodeToString(baos.toByteArray()));
   }
 
   /**
@@ -629,7 +626,7 @@
     MapWritable mapWritable = new MapWritable();
     if (configString != null) {
       try {
-        byte[] bytes = Base64.decodeBase64(configString.getBytes(UTF_8));
+        byte[] bytes = Base64.getDecoder().decode(configString);
         ByteArrayInputStream bais = new ByteArrayInputStream(bytes);
         mapWritable.readFields(new DataInputStream(bais));
         bais.close();
@@ -675,9 +672,6 @@
    * @since 1.6.0
    */
   public static TabletLocator getTabletLocator(Class<?> implementingClass, Configuration conf, String tableId) throws TableNotFoundException {
-    String instanceType = conf.get(enumToConfKey(implementingClass, InstanceOpts.TYPE));
-    if ("MockInstance".equals(instanceType))
-      return DeprecationUtil.makeMockLocator();
     Instance instance = getInstance(implementingClass, conf);
     ClientConfiguration clientConf = getClientConfiguration(implementingClass, conf);
     ClientContext context = new ClientContext(instance,
@@ -744,69 +738,6 @@
     }
   }
 
-  // InputFormat doesn't have the equivalent of OutputFormat's checkOutputSpecs(JobContext job)
-  /**
-   * Check whether a configuration is fully configured to be used with an Accumulo {@link org.apache.hadoop.mapreduce.InputFormat}.
-   *
-   * <p>
-   * The implementation (JobContext or JobConf which created the Configuration) needs to be used to extract the proper {@link AuthenticationToken} for
-   * {@link DelegationTokenImpl} support.
-   *
-   * @param implementingClass
-   *          the class whose name will be used as a prefix for the property configuration key
-   * @param conf
-   *          the Hadoop configuration object to configure
-   * @throws IOException
-   *           if the context is improperly configured
-   * @since 1.6.0
-   *
-   * @see #validateInstance(Class, Configuration)
-   * @see #validatePermissions(Class, Configuration, Connector)
-   */
-  @Deprecated
-  public static void validateOptions(Class<?> implementingClass, Configuration conf) throws IOException {
-
-    Map<String,InputTableConfig> inputTableConfigs = getInputTableConfigs(implementingClass, conf);
-    if (!isConnectorInfoSet(implementingClass, conf))
-      throw new IOException("Input info has not been set.");
-    String instanceKey = conf.get(enumToConfKey(implementingClass, InstanceOpts.TYPE));
-    if (!"MockInstance".equals(instanceKey) && !"ZooKeeperInstance".equals(instanceKey))
-      throw new IOException("Instance info has not been set.");
-    // validate that we can connect as configured
-    try {
-      String principal = getPrincipal(implementingClass, conf);
-      AuthenticationToken token = getAuthenticationToken(implementingClass, conf);
-      Connector c = getInstance(implementingClass, conf).getConnector(principal, token);
-      if (!c.securityOperations().authenticateUser(principal, token))
-        throw new IOException("Unable to authenticate user");
-
-      if (getInputTableConfigs(implementingClass, conf).size() == 0)
-        throw new IOException("No table set.");
-
-      for (Map.Entry<String,InputTableConfig> tableConfig : inputTableConfigs.entrySet()) {
-        if (!c.securityOperations().hasTablePermission(getPrincipal(implementingClass, conf), tableConfig.getKey(), TablePermission.READ))
-          throw new IOException("Unable to access table");
-      }
-      for (Map.Entry<String,InputTableConfig> tableConfigEntry : inputTableConfigs.entrySet()) {
-        InputTableConfig tableConfig = tableConfigEntry.getValue();
-        if (!tableConfig.shouldUseLocalIterators()) {
-          if (tableConfig.getIterators() != null) {
-            for (IteratorSetting iter : tableConfig.getIterators()) {
-              if (!c.tableOperations().testClassLoad(tableConfigEntry.getKey(), iter.getIteratorClass(), SortedKeyValueIterator.class.getName()))
-                throw new AccumuloException("Servers are unable to load " + iter.getIteratorClass() + " as a " + SortedKeyValueIterator.class.getName());
-            }
-          }
-        }
-      }
-    } catch (AccumuloException e) {
-      throw new IOException(e);
-    } catch (AccumuloSecurityException e) {
-      throw new IOException(e);
-    } catch (TableNotFoundException e) {
-      throw new IOException(e);
-    }
-  }
-
   /**
    * Returns the {@link org.apache.accumulo.core.client.mapreduce.InputTableConfig} for the configuration based on the properties set using the single-table
    * input methods.
@@ -949,11 +880,11 @@
       throw new RuntimeException(e);
     }
 
-    return Base64.encodeBase64String(baos.toByteArray());
+    return Base64.getEncoder().encodeToString(baos.toByteArray());
   }
 
   private static <T extends Writable> T fromBase64(T writable, String enc) {
-    ByteArrayInputStream bais = new ByteArrayInputStream(Base64.decodeBase64(enc));
+    ByteArrayInputStream bais = new ByteArrayInputStream(Base64.getDecoder().decode(enc));
     DataInputStream dis = new DataInputStream(bais);
     try {
       writable.readFields(dis);
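The private `toBase64`/`fromBase64` helpers in the hunk above serialize a Hadoop `Writable` through an in-memory byte stream and Base64-encode the bytes so they can live in a text-only Hadoop `Configuration` value. A JDK-only sketch of the same pattern, simplified to a plain `int` in place of a `Writable` so it runs without Hadoop on the classpath:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.Base64;

public class Base64Serde {
    // Write binary fields into an in-memory stream, then Base64-encode
    // the bytes so they survive storage as a configuration string.
    static String toBase64(int value) throws IOException {
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        new DataOutputStream(baos).writeInt(value);
        return Base64.getEncoder().encodeToString(baos.toByteArray());
    }

    // Reverse: decode the Base64 string and read the fields back,
    // mirroring fromBase64(T writable, String enc) above.
    static int fromBase64(String enc) throws IOException {
        ByteArrayInputStream bais =
            new ByteArrayInputStream(Base64.getDecoder().decode(enc));
        return new DataInputStream(bais).readInt();
    }

    public static void main(String[] args) throws IOException {
        System.out.println(fromBase64(toBase64(42))); // prints 42
    }
}
```

In the real code the `write(DataOutput)`/`readFields(DataInput)` calls come from the `Writable` contract; only the Base64 layer changed in this patch.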
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/partition/RangePartitioner.java b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/partition/RangePartitioner.java
index fa80831..c6fab6f 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/partition/RangePartitioner.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/partition/RangePartitioner.java
@@ -25,11 +25,11 @@
 import java.io.InputStreamReader;
 import java.net.URI;
 import java.util.Arrays;
+import java.util.Base64;
 import java.util.Scanner;
 import java.util.TreeSet;
 
 import org.apache.accumulo.core.client.mapreduce.lib.impl.DistributedCacheHelper;
-import org.apache.accumulo.core.util.Base64;
 import org.apache.hadoop.conf.Configurable;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.Path;
@@ -93,7 +93,7 @@
             Scanner in = new Scanner(new BufferedReader(new InputStreamReader(new FileInputStream(path.toString()), UTF_8)));
             try {
               while (in.hasNextLine())
-                cutPoints.add(new Text(Base64.decodeBase64(in.nextLine().getBytes(UTF_8))));
+                cutPoints.add(new Text(Base64.getDecoder().decode(in.nextLine())));
             } finally {
               in.close();
             }
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/util/ConfiguratorBase.java b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/util/ConfiguratorBase.java
deleted file mode 100644
index 6914071..0000000
--- a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/util/ConfiguratorBase.java
+++ /dev/null
@@ -1,275 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.core.client.mapreduce.lib.util;
-
-import org.apache.accumulo.core.client.AccumuloSecurityException;
-import org.apache.accumulo.core.client.ClientConfiguration;
-import org.apache.accumulo.core.client.Instance;
-import org.apache.accumulo.core.client.ZooKeeperInstance;
-import org.apache.accumulo.core.client.security.tokens.AuthenticationToken;
-import org.apache.accumulo.core.client.security.tokens.AuthenticationToken.AuthenticationTokenSerializer;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.util.StringUtils;
-import org.apache.log4j.Level;
-
-/**
- * @deprecated since 1.6.0; Configure your job with the appropriate InputFormat or OutputFormat.
- * @since 1.5.0
- */
-@Deprecated
-public class ConfiguratorBase {
-
-  /**
-   * Configuration keys for {@link Instance#getConnector(String, AuthenticationToken)}.
-   *
-   * @deprecated since 1.6.0; Configure your job with the appropriate InputFormat or OutputFormat.
-   * @since 1.5.0
-   */
-  @Deprecated
-  public static enum ConnectorInfo {
-    IS_CONFIGURED, PRINCIPAL, TOKEN, TOKEN_CLASS
-  }
-
-  /**
-   * Configuration keys for {@link Instance}, {@link ZooKeeperInstance}, and {@link org.apache.accumulo.core.client.mock.MockInstance}.
-   *
-   * @deprecated since 1.6.0; Configure your job with the appropriate InputFormat or OutputFormat.
-   * @since 1.5.0
-   */
-  @Deprecated
-  protected static enum InstanceOpts {
-    TYPE, NAME, ZOO_KEEPERS;
-  }
-
-  /**
-   * Configuration keys for general configuration options.
-   *
-   * @deprecated since 1.6.0; Configure your job with the appropriate InputFormat or OutputFormat.
-   * @since 1.5.0
-   */
-  @Deprecated
-  protected static enum GeneralOpts {
-    LOG_LEVEL
-  }
-
-  /**
-   * Provides a configuration key for a given feature enum, prefixed by the implementingClass
-   *
-   * @param implementingClass
-   *          the class whose name will be used as a prefix for the property configuration key
-   * @param e
-   *          the enum used to provide the unique part of the configuration key
-   * @return the configuration key
-   * @deprecated since 1.6.0; Configure your job with the appropriate InputFormat or OutputFormat.
-   * @since 1.5.0
-   */
-  @Deprecated
-  protected static String enumToConfKey(Class<?> implementingClass, Enum<?> e) {
-    return implementingClass.getSimpleName() + "." + e.getDeclaringClass().getSimpleName() + "." + StringUtils.camelize(e.name().toLowerCase());
-  }
-
-  /**
-   * Sets the connector information needed to communicate with Accumulo in this job.
-   *
-   * <p>
-   * <b>WARNING:</b> The serialized token is stored in the configuration and shared with all MapReduce tasks. It is BASE64 encoded to provide a charset safe
-   * conversion to a string, and is not intended to be secure.
-   *
-   * @param implementingClass
-   *          the class whose name will be used as a prefix for the property configuration key
-   * @param conf
-   *          the Hadoop configuration object to configure
-   * @param principal
-   *          a valid Accumulo user name
-   * @param token
-   *          the user's authentication token
-   * @deprecated since 1.6.0; Configure your job with the appropriate InputFormat or OutputFormat.
-   * @since 1.5.0
-   */
-  @Deprecated
-  public static void setConnectorInfo(Class<?> implementingClass, Configuration conf, String principal, AuthenticationToken token)
-      throws AccumuloSecurityException {
-    org.apache.accumulo.core.client.mapreduce.lib.impl.ConfiguratorBase.setConnectorInfo(implementingClass, conf, principal, token);
-  }
-
-  /**
-   * Determines if the connector info has already been set for this instance.
-   *
-   * @param implementingClass
-   *          the class whose name will be used as a prefix for the property configuration key
-   * @param conf
-   *          the Hadoop configuration object to configure
-   * @return true if the connector info has already been set, false otherwise
-   * @deprecated since 1.6.0; Configure your job with the appropriate InputFormat or OutputFormat.
-   * @since 1.5.0
-   * @see #setConnectorInfo(Class, Configuration, String, AuthenticationToken)
-   */
-  @Deprecated
-  public static Boolean isConnectorInfoSet(Class<?> implementingClass, Configuration conf) {
-    return org.apache.accumulo.core.client.mapreduce.lib.impl.ConfiguratorBase.isConnectorInfoSet(implementingClass, conf);
-  }
-
-  /**
-   * Gets the user name from the configuration.
-   *
-   * @param implementingClass
-   *          the class whose name will be used as a prefix for the property configuration key
-   * @param conf
-   *          the Hadoop configuration object to configure
-   * @return the principal
-   * @deprecated since 1.6.0; Configure your job with the appropriate InputFormat or OutputFormat.
-   * @since 1.5.0
-   * @see #setConnectorInfo(Class, Configuration, String, AuthenticationToken)
-   */
-  @Deprecated
-  public static String getPrincipal(Class<?> implementingClass, Configuration conf) {
-    return org.apache.accumulo.core.client.mapreduce.lib.impl.ConfiguratorBase.getPrincipal(implementingClass, conf);
-  }
-
-  /**
-   * DON'T USE THIS. No, really, don't use this. You already have an {@link AuthenticationToken} with
-   * {@link org.apache.accumulo.core.client.mapreduce.lib.impl.ConfiguratorBase#getAuthenticationToken(Class, Configuration)}. You don't need to construct it
-   * yourself.
-   * <p>
-   * Gets the serialized token class from the configuration.
-   *
-   * @param implementingClass
-   *          the class whose name will be used as a prefix for the property configuration key
-   * @param conf
-   *          the Hadoop configuration object to configure
-   * @return the name of the serialized token class
-   * @deprecated since 1.6.0; Configure your job with the appropriate InputFormat or OutputFormat.
-   * @since 1.5.0
-   * @see #setConnectorInfo(Class, Configuration, String, AuthenticationToken)
-   */
-  @Deprecated
-  public static String getTokenClass(Class<?> implementingClass, Configuration conf) {
-    return org.apache.accumulo.core.client.mapreduce.lib.impl.ConfiguratorBase.getAuthenticationToken(implementingClass, conf).getClass().getName();
-  }
-
-  /**
-   * DON'T USE THIS. No, really, don't use this. You already have an {@link AuthenticationToken} with
-   * {@link org.apache.accumulo.core.client.mapreduce.lib.impl.ConfiguratorBase#getAuthenticationToken(Class, Configuration)}. You don't need to construct it
-   * yourself.
-   * <p>
-   * Gets the serialized token from the configuration. WARNING: The token is stored in the Configuration and shared with all MapReduce tasks; it is BASE64
-   * encoded to provide a charset safe conversion to a string, and is not intended to be secure.
-   *
-   * @param implementingClass
-   *          the class whose name will be used as a prefix for the property configuration key
-   * @param conf
-   *          the Hadoop configuration object to configure
-   * @return the principal's authentication token, serialized to bytes
-   * @deprecated since 1.6.0; Configure your job with the appropriate InputFormat or OutputFormat.
-   * @since 1.5.0
-   * @see #setConnectorInfo(Class, Configuration, String, AuthenticationToken)
-   */
-  @Deprecated
-  public static byte[] getToken(Class<?> implementingClass, Configuration conf) {
-    return AuthenticationTokenSerializer.serialize(org.apache.accumulo.core.client.mapreduce.lib.impl.ConfiguratorBase.getAuthenticationToken(
-        implementingClass, conf));
-  }
-
-  /**
-   * Configures a {@link ZooKeeperInstance} for this job.
-   *
-   * @param implementingClass
-   *          the class whose name will be used as a prefix for the property configuration key
-   * @param conf
-   *          the Hadoop configuration object to configure
-   * @param instanceName
-   *          the Accumulo instance name
-   * @param zooKeepers
-   *          a comma-separated list of zookeeper servers
-   * @deprecated since 1.6.0; Configure your job with the appropriate InputFormat or OutputFormat.
-   * @since 1.5.0
-   */
-  @Deprecated
-  public static void setZooKeeperInstance(Class<?> implementingClass, Configuration conf, String instanceName, String zooKeepers) {
-    org.apache.accumulo.core.client.mapreduce.lib.impl.ConfiguratorBase.setZooKeeperInstance(implementingClass, conf,
-        new ClientConfiguration().withInstance(instanceName).withZkHosts(zooKeepers));
-  }
-
-  /**
-   * Configures a {@link org.apache.accumulo.core.client.mock.MockInstance} for this job.
-   *
-   * @param implementingClass
-   *          the class whose name will be used as a prefix for the property configuration key
-   * @param conf
-   *          the Hadoop configuration object to configure
-   * @param instanceName
-   *          the Accumulo instance name
-   * @deprecated since 1.6.0; Configure your job with the appropriate InputFormat or OutputFormat.
-   * @since 1.5.0
-   */
-  @Deprecated
-  public static void setMockInstance(Class<?> implementingClass, Configuration conf, String instanceName) {
-    org.apache.accumulo.core.client.mapreduce.lib.impl.ConfiguratorBase.setMockInstance(implementingClass, conf, instanceName);
-  }
-
-  /**
-   * Initializes an Accumulo {@link Instance} based on the configuration.
-   *
-   * @param implementingClass
-   *          the class whose name will be used as a prefix for the property configuration key
-   * @param conf
-   *          the Hadoop configuration object to configure
-   * @return an Accumulo instance
-   * @deprecated since 1.6.0; Configure your job with the appropriate InputFormat or OutputFormat.
-   * @since 1.5.0
-   * @see #setZooKeeperInstance(Class, Configuration, String, String)
-   */
-  @Deprecated
-  public static Instance getInstance(Class<?> implementingClass, Configuration conf) {
-    return org.apache.accumulo.core.client.mapreduce.lib.impl.ConfiguratorBase.getInstance(implementingClass, conf);
-  }
-
-  /**
-   * Sets the log level for this job.
-   *
-   * @param implementingClass
-   *          the class whose name will be used as a prefix for the property configuration key
-   * @param conf
-   *          the Hadoop configuration object to configure
-   * @param level
-   *          the logging level
-   * @deprecated since 1.6.0; Configure your job with the appropriate InputFormat or OutputFormat.
-   * @since 1.5.0
-   */
-  @Deprecated
-  public static void setLogLevel(Class<?> implementingClass, Configuration conf, Level level) {
-    org.apache.accumulo.core.client.mapreduce.lib.impl.ConfiguratorBase.setLogLevel(implementingClass, conf, level);
-  }
-
-  /**
-   * Gets the log level from this configuration.
-   *
-   * @param implementingClass
-   *          the class whose name will be used as a prefix for the property configuration key
-   * @param conf
-   *          the Hadoop configuration object to configure
-   * @return the log level
-   * @deprecated since 1.6.0; Configure your job with the appropriate InputFormat or OutputFormat.
-   * @since 1.5.0
-   * @see #setLogLevel(Class, Configuration, Level)
-   */
-  @Deprecated
-  public static Level getLogLevel(Class<?> implementingClass, Configuration conf) {
-    return org.apache.accumulo.core.client.mapreduce.lib.impl.ConfiguratorBase.getLogLevel(implementingClass, conf);
-  }
-
-}
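The deleted `ConfiguratorBase` keyed Hadoop configuration properties as `SimpleClassName.EnumClassName.CamelizedEnumName` (see its `enumToConfKey`). A stdlib-only sketch of that key scheme, with a hypothetical `camelize` standing in for Hadoop's `StringUtils.camelize`:

```java
public class ConfKeys {
  // Stand-in for the removed ConnectorInfo enum.
  enum ConnectorInfo { IS_CONFIGURED, PRINCIPAL, TOKEN, TOKEN_CLASS }

  // Hypothetical equivalent of Hadoop's StringUtils.camelize:
  // "is_configured" -> "IsConfigured".
  static String camelize(String s) {
    StringBuilder sb = new StringBuilder();
    for (String word : s.split("_")) {
      if (word.isEmpty())
        continue;
      sb.append(Character.toUpperCase(word.charAt(0))).append(word.substring(1));
    }
    return sb.toString();
  }

  // Mirrors the removed enumToConfKey: prefix with the implementing class's
  // simple name, then the enum's declaring class, then the camelized name.
  static String enumToConfKey(Class<?> implementingClass, Enum<?> e) {
    return implementingClass.getSimpleName() + "." + e.getDeclaringClass().getSimpleName()
        + "." + camelize(e.name().toLowerCase());
  }

  public static void main(String[] args) {
    // Prints ConfKeys.ConnectorInfo.IsConfigured
    System.out.println(enumToConfKey(ConfKeys.class, ConnectorInfo.IS_CONFIGURED));
  }
}
```

This namespacing lets multiple InputFormats/OutputFormats share one Hadoop `Configuration` without colliding, which is why the replacement classes under `lib.impl` keep the same scheme.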
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/util/FileOutputConfigurator.java b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/util/FileOutputConfigurator.java
deleted file mode 100644
index b4f6b8a..0000000
--- a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/util/FileOutputConfigurator.java
+++ /dev/null
@@ -1,170 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.core.client.mapreduce.lib.util;
-
-import org.apache.accumulo.core.conf.AccumuloConfiguration;
-import org.apache.accumulo.core.conf.Property;
-import org.apache.hadoop.conf.Configuration;
-
-/**
- * @deprecated since 1.6.0; Configure your job with the appropriate InputFormat or OutputFormat.
- * @since 1.5.0
- */
-@Deprecated
-public class FileOutputConfigurator extends ConfiguratorBase {
-
-  /**
-   * Configuration keys for {@link AccumuloConfiguration}.
-   *
-   * @deprecated since 1.6.0; Configure your job with the appropriate InputFormat or OutputFormat.
-   * @since 1.5.0
-   */
-  @Deprecated
-  public static enum Opts {
-    ACCUMULO_PROPERTIES;
-  }
-
-  /**
-   * The supported Accumulo properties we set in this OutputFormat that change the behavior of the RecordWriter.<br>
-   * These properties correspond to the supported public static setter methods available to this class.
-   *
-   * @param property
-   *          the Accumulo property to check
-   * @deprecated since 1.6.0; Configure your job with the appropriate InputFormat or OutputFormat.
-   * @since 1.5.0
-   */
-  @Deprecated
-  protected static Boolean isSupportedAccumuloProperty(Property property) {
-    switch (property) {
-      case TABLE_FILE_COMPRESSION_TYPE:
-      case TABLE_FILE_COMPRESSED_BLOCK_SIZE:
-      case TABLE_FILE_BLOCK_SIZE:
-      case TABLE_FILE_COMPRESSED_BLOCK_SIZE_INDEX:
-      case TABLE_FILE_REPLICATION:
-        return true;
-      default:
-        return false;
-    }
-  }
-
-  /**
-   * This helper method provides an AccumuloConfiguration object constructed from the Accumulo defaults, and overridden with Accumulo properties that have been
-   * stored in the Job's configuration.
-   *
-   * @param implementingClass
-   *          the class whose name will be used as a prefix for the property configuration key
-   * @param conf
-   *          the Hadoop configuration object to configure
-   * @deprecated since 1.6.0; Configure your job with the appropriate InputFormat or OutputFormat.
-   * @since 1.5.0
-   */
-  @Deprecated
-  public static AccumuloConfiguration getAccumuloConfiguration(Class<?> implementingClass, Configuration conf) {
-    return org.apache.accumulo.core.client.mapreduce.lib.impl.FileOutputConfigurator.getAccumuloConfiguration(implementingClass, conf);
-  }
-
-  /**
-   * Sets the compression type to use for data blocks. Specifying a compression may require additional libraries to be available to your Job.
-   *
-   * @param implementingClass
-   *          the class whose name will be used as a prefix for the property configuration key
-   * @param conf
-   *          the Hadoop configuration object to configure
-   * @param compressionType
-   *          one of "none", "gz", "lzo", or "snappy"
-   * @deprecated since 1.6.0; Configure your job with the appropriate InputFormat or OutputFormat.
-   * @since 1.5.0
-   */
-  @Deprecated
-  public static void setCompressionType(Class<?> implementingClass, Configuration conf, String compressionType) {
-    org.apache.accumulo.core.client.mapreduce.lib.impl.FileOutputConfigurator.setCompressionType(implementingClass, conf, compressionType);
-  }
-
-  /**
-   * Sets the size for data blocks within each file.<br>
-   * Data blocks are a span of key/value pairs stored in the file that are compressed and indexed as a group.
-   *
-   * <p>
-   * Making this value smaller may increase seek performance, but at the cost of increasing the size of the indexes (which can also affect seek performance).
-   *
-   * @param implementingClass
-   *          the class whose name will be used as a prefix for the property configuration key
-   * @param conf
-   *          the Hadoop configuration object to configure
-   * @param dataBlockSize
-   *          the block size, in bytes
-   * @deprecated since 1.6.0; Configure your job with the appropriate InputFormat or OutputFormat.
-   * @since 1.5.0
-   */
-  @Deprecated
-  public static void setDataBlockSize(Class<?> implementingClass, Configuration conf, long dataBlockSize) {
-    org.apache.accumulo.core.client.mapreduce.lib.impl.FileOutputConfigurator.setDataBlockSize(implementingClass, conf, dataBlockSize);
-  }
-
-  /**
-   * Sets the size for file blocks in the file system; file blocks are managed, and replicated, by the underlying file system.
-   *
-   * @param implementingClass
-   *          the class whose name will be used as a prefix for the property configuration key
-   * @param conf
-   *          the Hadoop configuration object to configure
-   * @param fileBlockSize
-   *          the block size, in bytes
-   * @deprecated since 1.6.0; Configure your job with the appropriate InputFormat or OutputFormat.
-   * @since 1.5.0
-   */
-  @Deprecated
-  public static void setFileBlockSize(Class<?> implementingClass, Configuration conf, long fileBlockSize) {
-    org.apache.accumulo.core.client.mapreduce.lib.impl.FileOutputConfigurator.setFileBlockSize(implementingClass, conf, fileBlockSize);
-  }
-
-  /**
-   * Sets the size for index blocks within each file; smaller blocks mean a deeper index hierarchy within the file, while larger blocks mean a shallower
-   * index hierarchy within the file. This can affect the performance of queries.
-   *
-   * @param implementingClass
-   *          the class whose name will be used as a prefix for the property configuration key
-   * @param conf
-   *          the Hadoop configuration object to configure
-   * @param indexBlockSize
-   *          the block size, in bytes
-   * @deprecated since 1.6.0; Configure your job with the appropriate InputFormat or OutputFormat.
-   * @since 1.5.0
-   */
-  @Deprecated
-  public static void setIndexBlockSize(Class<?> implementingClass, Configuration conf, long indexBlockSize) {
-    org.apache.accumulo.core.client.mapreduce.lib.impl.FileOutputConfigurator.setIndexBlockSize(implementingClass, conf, indexBlockSize);
-  }
-
-  /**
-   * Sets the file system replication factor for the resulting file, overriding the file system default.
-   *
-   * @param implementingClass
-   *          the class whose name will be used as a prefix for the property configuration key
-   * @param conf
-   *          the Hadoop configuration object to configure
-   * @param replication
-   *          the number of replicas for produced files
-   * @deprecated since 1.6.0; Configure your job with the appropriate InputFormat or OutputFormat.
-   * @since 1.5.0
-   */
-  @Deprecated
-  public static void setReplication(Class<?> implementingClass, Configuration conf, int replication) {
-    org.apache.accumulo.core.client.mapreduce.lib.impl.FileOutputConfigurator.setReplication(implementingClass, conf, replication);
-  }
-
-}
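The deleted `FileOutputConfigurator` whitelisted which Accumulo table properties the OutputFormat honors via a `switch`. The same check can be expressed as an `EnumSet` membership test; the enum here is a hypothetical stand-in for `org.apache.accumulo.core.conf.Property`, trimmed to the constants the switch named:

```java
import java.util.EnumSet;
import java.util.Set;

public class SupportedProps {
  // Hypothetical stand-ins for the Accumulo Property constants checked above.
  enum Property {
    TABLE_FILE_COMPRESSION_TYPE, TABLE_FILE_COMPRESSED_BLOCK_SIZE,
    TABLE_FILE_BLOCK_SIZE, TABLE_FILE_COMPRESSED_BLOCK_SIZE_INDEX,
    TABLE_FILE_REPLICATION, TABLE_FILE_MAX
  }

  // Same whitelist as the removed isSupportedAccumuloProperty, expressed as
  // set membership instead of a switch over enum constants.
  static final Set<Property> SUPPORTED = EnumSet.of(
      Property.TABLE_FILE_COMPRESSION_TYPE,
      Property.TABLE_FILE_COMPRESSED_BLOCK_SIZE,
      Property.TABLE_FILE_BLOCK_SIZE,
      Property.TABLE_FILE_COMPRESSED_BLOCK_SIZE_INDEX,
      Property.TABLE_FILE_REPLICATION);

  static boolean isSupported(Property p) {
    return SUPPORTED.contains(p);
  }

  public static void main(String[] args) {
    System.out.println(isSupported(Property.TABLE_FILE_REPLICATION)); // true
    System.out.println(isSupported(Property.TABLE_FILE_MAX));         // false
  }
}
```

Either form works; the `EnumSet` variant makes the whitelist a single data structure that setters and validators can share.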
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/util/InputConfigurator.java b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/util/InputConfigurator.java
deleted file mode 100644
index b85253c..0000000
--- a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/util/InputConfigurator.java
+++ /dev/null
@@ -1,461 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.core.client.mapreduce.lib.util;
-
-import java.io.IOException;
-import java.util.Collection;
-import java.util.List;
-import java.util.Set;
-
-import org.apache.accumulo.core.client.ClientSideIteratorScanner;
-import org.apache.accumulo.core.client.IsolatedScanner;
-import org.apache.accumulo.core.client.IteratorSetting;
-import org.apache.accumulo.core.client.Scanner;
-import org.apache.accumulo.core.client.TableNotFoundException;
-import org.apache.accumulo.core.client.impl.Tables;
-import org.apache.accumulo.core.client.impl.TabletLocator;
-import org.apache.accumulo.core.data.Range;
-import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.core.util.Pair;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.io.Text;
-
-/**
- * @deprecated since 1.6.0; Configure your job with the appropriate InputFormat or OutputFormat.
- * @since 1.5.0
- */
-@Deprecated
-public class InputConfigurator extends ConfiguratorBase {
-
-  /**
-   * Configuration keys for {@link Scanner}.
-   *
-   * @deprecated since 1.6.0; Configure your job with the appropriate InputFormat or OutputFormat.
-   * @since 1.5.0
-   */
-  @Deprecated
-  public static enum ScanOpts {
-    TABLE_NAME, AUTHORIZATIONS, RANGES, COLUMNS, ITERATORS
-  }
-
-  /**
-   * Configuration keys for various features.
-   *
-   * @deprecated since 1.6.0; Configure your job with the appropriate InputFormat or OutputFormat.
-   * @since 1.5.0
-   */
-  @Deprecated
-  public static enum Features {
-    AUTO_ADJUST_RANGES, SCAN_ISOLATION, USE_LOCAL_ITERATORS, SCAN_OFFLINE
-  }
-
-  /**
-   * Sets the name of the input table, over which this job will scan.
-   *
-   * @param implementingClass
-   *          the class whose name will be used as a prefix for the property configuration key
-   * @param conf
-   *          the Hadoop configuration object to configure
-   * @param tableName
-   *          the name of the input table to scan
-   * @deprecated since 1.6.0; Configure your job with the appropriate InputFormat or OutputFormat.
-   * @since 1.5.0
-   */
-  @Deprecated
-  public static void setInputTableName(Class<?> implementingClass, Configuration conf, String tableName) {
-    org.apache.accumulo.core.client.mapreduce.lib.impl.InputConfigurator.setInputTableName(implementingClass, conf, tableName);
-  }
-
-  /**
-   * Gets the table name from the configuration.
-   *
-   * @param implementingClass
-   *          the class whose name will be used as a prefix for the property configuration key
-   * @param conf
-   *          the Hadoop configuration object to configure
-   * @return the table name
-   * @deprecated since 1.6.0; Configure your job with the appropriate InputFormat or OutputFormat.
-   * @since 1.5.0
-   * @see #setInputTableName(Class, Configuration, String)
-   */
-  @Deprecated
-  public static String getInputTableName(Class<?> implementingClass, Configuration conf) {
-    return org.apache.accumulo.core.client.mapreduce.lib.impl.InputConfigurator.getInputTableName(implementingClass, conf);
-  }
-
-  /**
-   * Sets the {@link Authorizations} used to scan. Must be a subset of the user's authorizations. Defaults to the empty set.
-   *
-   * @param implementingClass
-   *          the class whose name will be used as a prefix for the property configuration key
-   * @param conf
-   *          the Hadoop configuration object to configure
-   * @param auths
-   *          the user's authorizations
-   * @deprecated since 1.6.0; Configure your job with the appropriate InputFormat or OutputFormat.
-   * @since 1.5.0
-   */
-  @Deprecated
-  public static void setScanAuthorizations(Class<?> implementingClass, Configuration conf, Authorizations auths) {
-    org.apache.accumulo.core.client.mapreduce.lib.impl.InputConfigurator.setScanAuthorizations(implementingClass, conf, auths);
-  }
-
-  /**
-   * Gets the authorizations to set for the scans from the configuration.
-   *
-   * @param implementingClass
-   *          the class whose name will be used as a prefix for the property configuration key
-   * @param conf
-   *          the Hadoop configuration object to configure
-   * @return the Accumulo scan authorizations
-   * @deprecated since 1.6.0; Configure your job with the appropriate InputFormat or OutputFormat.
-   * @since 1.5.0
-   * @see #setScanAuthorizations(Class, Configuration, Authorizations)
-   */
-  @Deprecated
-  public static Authorizations getScanAuthorizations(Class<?> implementingClass, Configuration conf) {
-    return org.apache.accumulo.core.client.mapreduce.lib.impl.InputConfigurator.getScanAuthorizations(implementingClass, conf);
-  }
-
-  /**
-   * Sets the input ranges to scan for this job. If not set, the entire table will be scanned.
-   *
-   * @param implementingClass
-   *          the class whose name will be used as a prefix for the property configuration key
-   * @param conf
-   *          the Hadoop configuration object to configure
-   * @param ranges
-   *          the ranges that will be mapped over
-   * @deprecated since 1.6.0; Configure your job with the appropriate InputFormat or OutputFormat.
-   * @since 1.5.0
-   */
-  @Deprecated
-  public static void setRanges(Class<?> implementingClass, Configuration conf, Collection<Range> ranges) {
-    org.apache.accumulo.core.client.mapreduce.lib.impl.InputConfigurator.setRanges(implementingClass, conf, ranges);
-  }
-
-  /**
-   * Gets the ranges to scan over from a job.
-   *
-   * @param implementingClass
-   *          the class whose name will be used as a prefix for the property configuration key
-   * @param conf
-   *          the Hadoop configuration object to configure
-   * @return the ranges
-   * @throws IOException
-   *           if the ranges have been encoded improperly
-   * @deprecated since 1.6.0; Configure your job with the appropriate InputFormat or OutputFormat.
-   * @since 1.5.0
-   * @see #setRanges(Class, Configuration, Collection)
-   */
-  @Deprecated
-  public static List<Range> getRanges(Class<?> implementingClass, Configuration conf) throws IOException {
-    return org.apache.accumulo.core.client.mapreduce.lib.impl.InputConfigurator.getRanges(implementingClass, conf);
-  }
-
-  /**
-   * Restricts the columns that will be mapped over for this job.
-   *
-   * @param implementingClass
-   *          the class whose name will be used as a prefix for the property configuration key
-   * @param conf
-   *          the Hadoop configuration object to configure
-   * @param columnFamilyColumnQualifierPairs
-   *          a pair of {@link Text} objects corresponding to column family and column qualifier. If the column qualifier is null, the entire column family is
-   *          selected. An empty set is the default and is equivalent to scanning all columns.
-   * @deprecated since 1.6.0; Configure your job with the appropriate InputFormat or OutputFormat.
-   * @since 1.5.0
-   */
-  @Deprecated
-  public static void fetchColumns(Class<?> implementingClass, Configuration conf, Collection<Pair<Text,Text>> columnFamilyColumnQualifierPairs) {
-    org.apache.accumulo.core.client.mapreduce.lib.impl.InputConfigurator.fetchColumns(implementingClass, conf, columnFamilyColumnQualifierPairs);
-  }
-
-  /**
-   * @deprecated since 1.6.0; Configure your job with the appropriate InputFormat or OutputFormat.
-   */
-  @Deprecated
-  public static String[] serializeColumns(Collection<Pair<Text,Text>> columnFamilyColumnQualifierPairs) {
-    return org.apache.accumulo.core.client.mapreduce.lib.impl.InputConfigurator.serializeColumns(columnFamilyColumnQualifierPairs);
-  }
-
-  /**
-   * Gets the columns to be mapped over from this job.
-   *
-   * @param implementingClass
-   *          the class whose name will be used as a prefix for the property configuration key
-   * @param conf
-   *          the Hadoop configuration object to configure
-   * @return a set of columns
-   * @deprecated since 1.6.0; Configure your job with the appropriate InputFormat or OutputFormat.
-   * @since 1.5.0
-   * @see #fetchColumns(Class, Configuration, Collection)
-   */
-  @Deprecated
-  public static Set<Pair<Text,Text>> getFetchedColumns(Class<?> implementingClass, Configuration conf) {
-    return org.apache.accumulo.core.client.mapreduce.lib.impl.InputConfigurator.getFetchedColumns(implementingClass, conf);
-  }
-
-  /**
-   * @deprecated since 1.6.0; Configure your job with the appropriate InputFormat or OutputFormat.
-   */
-  @Deprecated
-  public static Set<Pair<Text,Text>> deserializeFetchedColumns(Collection<String> serialized) {
-    return org.apache.accumulo.core.client.mapreduce.lib.impl.InputConfigurator.deserializeFetchedColumns(serialized);
-  }
-
-  /**
-   * Encode an iterator on the input for this job.
-   *
-   * @param implementingClass
-   *          the class whose name will be used as a prefix for the property configuration key
-   * @param conf
-   *          the Hadoop configuration object to configure
-   * @param cfg
-   *          the configuration of the iterator
-   * @deprecated since 1.6.0; Configure your job with the appropriate InputFormat or OutputFormat.
-   * @since 1.5.0
-   */
-  @Deprecated
-  public static void addIterator(Class<?> implementingClass, Configuration conf, IteratorSetting cfg) {
-    org.apache.accumulo.core.client.mapreduce.lib.impl.InputConfigurator.addIterator(implementingClass, conf, cfg);
-  }
-
-  /**
-   * Gets a list of the iterator settings (for iterators to apply to a scanner) from this configuration.
-   *
-   * @param implementingClass
-   *          the class whose name will be used as a prefix for the property configuration key
-   * @param conf
-   *          the Hadoop configuration object to configure
-   * @return a list of iterators
-   * @deprecated since 1.6.0; Configure your job with the appropriate InputFormat or OutputFormat.
-   * @since 1.5.0
-   * @see #addIterator(Class, Configuration, IteratorSetting)
-   */
-  @Deprecated
-  public static List<IteratorSetting> getIterators(Class<?> implementingClass, Configuration conf) {
-    return org.apache.accumulo.core.client.mapreduce.lib.impl.InputConfigurator.getIterators(implementingClass, conf);
-  }
-
-  /**
-   * Controls the automatic adjustment of ranges for this job. This feature merges overlapping ranges, then splits them to align with tablet boundaries.
-   * Disabling this feature will cause exactly one Map task to be created for each specified range.
-   *
-   * <p>
-   * By default, this feature is <b>enabled</b>.
-   *
-   * @param implementingClass
-   *          the class whose name will be used as a prefix for the property configuration key
-   * @param conf
-   *          the Hadoop configuration object to configure
-   * @param enableFeature
-   *          the feature is enabled if true, disabled otherwise
-   * @see #setRanges(Class, Configuration, Collection)
-   * @deprecated since 1.6.0; Configure your job with the appropriate InputFormat or OutputFormat.
-   * @since 1.5.0
-   */
-  @Deprecated
-  public static void setAutoAdjustRanges(Class<?> implementingClass, Configuration conf, boolean enableFeature) {
-    org.apache.accumulo.core.client.mapreduce.lib.impl.InputConfigurator.setAutoAdjustRanges(implementingClass, conf, enableFeature);
-  }
-
-  /**
-   * Determines whether a configuration has auto-adjust ranges enabled.
-   *
-   * @param implementingClass
-   *          the class whose name will be used as a prefix for the property configuration key
-   * @param conf
-   *          the Hadoop configuration object to configure
-   * @return true if the feature is enabled, false otherwise
-   * @deprecated since 1.6.0; Configure your job with the appropriate InputFormat or OutputFormat.
-   * @since 1.5.0
-   * @see #setAutoAdjustRanges(Class, Configuration, boolean)
-   */
-  @Deprecated
-  public static Boolean getAutoAdjustRanges(Class<?> implementingClass, Configuration conf) {
-    return org.apache.accumulo.core.client.mapreduce.lib.impl.InputConfigurator.getAutoAdjustRanges(implementingClass, conf);
-  }
-
-  /**
-   * Controls the use of the {@link IsolatedScanner} in this job.
-   *
-   * <p>
-   * By default, this feature is <b>disabled</b>.
-   *
-   * @param implementingClass
-   *          the class whose name will be used as a prefix for the property configuration key
-   * @param conf
-   *          the Hadoop configuration object to configure
-   * @param enableFeature
-   *          the feature is enabled if true, disabled otherwise
-   * @deprecated since 1.6.0; Configure your job with the appropriate InputFormat or OutputFormat.
-   * @since 1.5.0
-   */
-  @Deprecated
-  public static void setScanIsolation(Class<?> implementingClass, Configuration conf, boolean enableFeature) {
-    org.apache.accumulo.core.client.mapreduce.lib.impl.InputConfigurator.setScanIsolation(implementingClass, conf, enableFeature);
-  }
-
-  /**
-   * Determines whether a configuration has isolation enabled.
-   *
-   * @param implementingClass
-   *          the class whose name will be used as a prefix for the property configuration key
-   * @param conf
-   *          the Hadoop configuration object to configure
-   * @return true if the feature is enabled, false otherwise
-   * @deprecated since 1.6.0; Configure your job with the appropriate InputFormat or OutputFormat.
-   * @since 1.5.0
-   * @see #setScanIsolation(Class, Configuration, boolean)
-   */
-  @Deprecated
-  public static Boolean isIsolated(Class<?> implementingClass, Configuration conf) {
-    return org.apache.accumulo.core.client.mapreduce.lib.impl.InputConfigurator.isIsolated(implementingClass, conf);
-  }
-
-  /**
-   * Controls the use of the {@link ClientSideIteratorScanner} in this job. Enabling this feature will cause the iterator stack to be constructed within the Map
-   * task, rather than within the Accumulo TServer. To use this feature, all classes needed for those iterators must be available on the classpath for the task.
-   *
-   * <p>
-   * By default, this feature is <b>disabled</b>.
-   *
-   * @param implementingClass
-   *          the class whose name will be used as a prefix for the property configuration key
-   * @param conf
-   *          the Hadoop configuration object to configure
-   * @param enableFeature
-   *          the feature is enabled if true, disabled otherwise
-   * @deprecated since 1.6.0; Configure your job with the appropriate InputFormat or OutputFormat.
-   * @since 1.5.0
-   */
-  @Deprecated
-  public static void setLocalIterators(Class<?> implementingClass, Configuration conf, boolean enableFeature) {
-    org.apache.accumulo.core.client.mapreduce.lib.impl.InputConfigurator.setLocalIterators(implementingClass, conf, enableFeature);
-  }
-
-  /**
-   * Determines whether a configuration uses local iterators.
-   *
-   * @param implementingClass
-   *          the class whose name will be used as a prefix for the property configuration key
-   * @param conf
-   *          the Hadoop configuration object to configure
-   * @return true if the feature is enabled, false otherwise
-   * @deprecated since 1.6.0; Configure your job with the appropriate InputFormat or OutputFormat.
-   * @since 1.5.0
-   * @see #setLocalIterators(Class, Configuration, boolean)
-   */
-  @Deprecated
-  public static Boolean usesLocalIterators(Class<?> implementingClass, Configuration conf) {
-    return org.apache.accumulo.core.client.mapreduce.lib.impl.InputConfigurator.usesLocalIterators(implementingClass, conf);
-  }
-
-  /**
-   * Enable reading offline tables. By default, this feature is disabled and only online tables are scanned. This will make the map reduce job directly read the
-   * table's files. If the table is not offline, then the job will fail. If the table comes online during the map reduce job, it is likely that the job will
-   * fail.
-   *
-   * <p>
-   * To use this option, the map reduce user will need access to read the Accumulo directory in HDFS.
-   *
-   * <p>
-   * Reading the offline table will create the scan time iterator stack in the map process. So any iterators that are configured for the table will need to be
-   * on the mapper's classpath. The accumulo-site.xml may need to be on the mapper's classpath if HDFS or the Accumulo directory in HDFS are non-standard.
-   *
-   * <p>
-   * One way to use this feature is to clone a table, take the clone offline, and use the clone as the input table for a map reduce job. If you plan to map
-   * reduce over the data many times, it may be better to compact the table, clone it, take it offline, and use the clone for all map reduce jobs. The
-   * reason to do this is that compaction will reduce each tablet in the table to one file, and it is faster to read from one file.
-   *
-   * <p>
-   * There are two possible advantages to reading a table's files directly out of HDFS. First, you may see better read performance. Second, it will support
-   * speculative execution better. When reading an online table speculative execution can put more load on an already slow tablet server.
-   *
-   * <p>
-   * By default, this feature is <b>disabled</b>.
-   *
-   * @param implementingClass
-   *          the class whose name will be used as a prefix for the property configuration key
-   * @param conf
-   *          the Hadoop configuration object to configure
-   * @param enableFeature
-   *          the feature is enabled if true, disabled otherwise
-   * @deprecated since 1.6.0; Configure your job with the appropriate InputFormat or OutputFormat.
-   * @since 1.5.0
-   */
-  @Deprecated
-  public static void setOfflineTableScan(Class<?> implementingClass, Configuration conf, boolean enableFeature) {
-    org.apache.accumulo.core.client.mapreduce.lib.impl.InputConfigurator.setOfflineTableScan(implementingClass, conf, enableFeature);
-  }
-
-  /**
-   * Determines whether a configuration has the offline table scan feature enabled.
-   *
-   * @param implementingClass
-   *          the class whose name will be used as a prefix for the property configuration key
-   * @param conf
-   *          the Hadoop configuration object to configure
-   * @return true if the feature is enabled, false otherwise
-   * @deprecated since 1.6.0; Configure your job with the appropriate InputFormat or OutputFormat.
-   * @since 1.5.0
-   * @see #setOfflineTableScan(Class, Configuration, boolean)
-   */
-  @Deprecated
-  public static Boolean isOfflineScan(Class<?> implementingClass, Configuration conf) {
-    return org.apache.accumulo.core.client.mapreduce.lib.impl.InputConfigurator.isOfflineScan(implementingClass, conf);
-  }
-
-  /**
-   * Initializes an Accumulo {@link TabletLocator} based on the configuration.
-   *
-   * @param implementingClass
-   *          the class whose name will be used as a prefix for the property configuration key
-   * @param conf
-   *          the Hadoop configuration object to configure
-   * @return an Accumulo tablet locator
-   * @throws TableNotFoundException
-   *           if the table name set on the configuration doesn't exist
-   * @deprecated since 1.6.0; Configure your job with the appropriate InputFormat or OutputFormat.
-   * @since 1.5.0
-   */
-  @Deprecated
-  public static TabletLocator getTabletLocator(Class<?> implementingClass, Configuration conf) throws TableNotFoundException {
-    return org.apache.accumulo.core.client.mapreduce.lib.impl.InputConfigurator.getTabletLocator(implementingClass, conf,
-        Tables.getTableId(getInstance(implementingClass, conf), getInputTableName(implementingClass, conf)));
-  }
-
-  // InputFormat doesn't have the equivalent of OutputFormat's checkOutputSpecs(JobContext job)
-  /**
-   * Check whether a configuration is fully configured to be used with an Accumulo {@link org.apache.hadoop.mapreduce.InputFormat}.
-   *
-   * @param implementingClass
-   *          the class whose name will be used as a prefix for the property configuration key
-   * @param conf
-   *          the Hadoop configuration object to configure
-   * @throws IOException
-   *           if the context is improperly configured
-   * @deprecated since 1.6.0; Configure your job with the appropriate InputFormat or OutputFormat.
-   * @since 1.5.0
-   */
-  @Deprecated
-  public static void validateOptions(Class<?> implementingClass, Configuration conf) throws IOException {
-    org.apache.accumulo.core.client.mapreduce.lib.impl.InputConfigurator.validateOptions(implementingClass, conf);
-  }
-
-}
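Every setter and getter in the deleted configurators above takes an `implementingClass` whose name, per the Javadoc, "will be used as a prefix for the property configuration key" in the Hadoop `Configuration`. A minimal, self-contained sketch of that key-prefix scheme follows; the class and method names here are illustrative stand-ins, not the actual Accumulo `ConfiguratorBase` implementation, and a plain map stands in for the Hadoop `Configuration`:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the "class name as property-key prefix" scheme described by the
// deleted configurator Javadoc. Hypothetical names; not the real Accumulo code.
public class PrefixedConfig {
  private final Map<String,String> props = new HashMap<>();

  // Builds "<SimpleClassName>.<option>", so two InputFormats sharing one
  // Configuration cannot clobber each other's settings.
  static String key(Class<?> implementingClass, String option) {
    return implementingClass.getSimpleName() + "." + option;
  }

  void setBoolean(Class<?> implementingClass, String option, boolean value) {
    props.put(key(implementingClass, option), Boolean.toString(value));
  }

  boolean getBoolean(Class<?> implementingClass, String option, boolean deflt) {
    String v = props.get(key(implementingClass, option));
    return v == null ? deflt : Boolean.parseBoolean(v);
  }
}
```

This is why deprecated callers and their `lib.impl` replacements can interoperate on the same `Configuration`: both resolve to the same prefixed keys.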
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/util/OutputConfigurator.java b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/util/OutputConfigurator.java
deleted file mode 100644
index 39163a6..0000000
--- a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/util/OutputConfigurator.java
+++ /dev/null
@@ -1,196 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.core.client.mapreduce.lib.util;
-
-import org.apache.accumulo.core.client.BatchWriter;
-import org.apache.accumulo.core.client.BatchWriterConfig;
-import org.apache.hadoop.conf.Configuration;
-
-/**
- * @deprecated since 1.6.0; Configure your job with the appropriate InputFormat or OutputFormat.
- * @since 1.5.0
- */
-@Deprecated
-public class OutputConfigurator extends ConfiguratorBase {
-
-  /**
-   * Configuration keys for {@link BatchWriter}.
-   *
-   * @deprecated since 1.6.0; Configure your job with the appropriate InputFormat or OutputFormat.
-   * @since 1.5.0
-   */
-  @Deprecated
-  public static enum WriteOpts {
-    DEFAULT_TABLE_NAME, BATCH_WRITER_CONFIG
-  }
-
-  /**
-   * Configuration keys for various features.
-   *
-   * @deprecated since 1.6.0; Configure your job with the appropriate InputFormat or OutputFormat.
-   * @since 1.5.0
-   */
-  @Deprecated
-  public static enum Features {
-    CAN_CREATE_TABLES, SIMULATION_MODE
-  }
-
-  /**
-   * Sets the default table name to use if one emits a null in place of a table name for a given mutation. Table names can only contain alphanumeric
-   * characters and underscores.
-   *
-   * @param implementingClass
-   *          the class whose name will be used as a prefix for the property configuration key
-   * @param conf
-   *          the Hadoop configuration object to configure
-   * @param tableName
-   *          the table to use when the table name is null in the write call
-   * @deprecated since 1.6.0; Configure your job with the appropriate InputFormat or OutputFormat.
-   * @since 1.5.0
-   */
-  @Deprecated
-  public static void setDefaultTableName(Class<?> implementingClass, Configuration conf, String tableName) {
-    org.apache.accumulo.core.client.mapreduce.lib.impl.OutputConfigurator.setDefaultTableName(implementingClass, conf, tableName);
-  }
-
-  /**
-   * Gets the default table name from the configuration.
-   *
-   * @param implementingClass
-   *          the class whose name will be used as a prefix for the property configuration key
-   * @param conf
-   *          the Hadoop configuration object to configure
-   * @return the default table name
-   * @deprecated since 1.6.0; Configure your job with the appropriate InputFormat or OutputFormat.
-   * @since 1.5.0
-   * @see #setDefaultTableName(Class, Configuration, String)
-   */
-  @Deprecated
-  public static String getDefaultTableName(Class<?> implementingClass, Configuration conf) {
-    return org.apache.accumulo.core.client.mapreduce.lib.impl.OutputConfigurator.getDefaultTableName(implementingClass, conf);
-  }
-
-  /**
-   * Sets the configuration for the job's {@link BatchWriter} instances. If not set, a new {@link BatchWriterConfig} with sensible built-in defaults is
-   * used. Setting the configuration multiple times overwrites any previous configuration.
-   *
-   * @param implementingClass
-   *          the class whose name will be used as a prefix for the property configuration key
-   * @param conf
-   *          the Hadoop configuration object to configure
-   * @param bwConfig
-   *          the configuration for the {@link BatchWriter}
-   * @deprecated since 1.6.0; Configure your job with the appropriate InputFormat or OutputFormat.
-   * @since 1.5.0
-   */
-  @Deprecated
-  public static void setBatchWriterOptions(Class<?> implementingClass, Configuration conf, BatchWriterConfig bwConfig) {
-    org.apache.accumulo.core.client.mapreduce.lib.impl.OutputConfigurator.setBatchWriterOptions(implementingClass, conf, bwConfig);
-  }
-
-  /**
-   * Gets the {@link BatchWriterConfig} settings.
-   *
-   * @param implementingClass
-   *          the class whose name will be used as a prefix for the property configuration key
-   * @param conf
-   *          the Hadoop configuration object to configure
-   * @return the configuration object
-   * @deprecated since 1.6.0; Configure your job with the appropriate InputFormat or OutputFormat.
-   * @since 1.5.0
-   * @see #setBatchWriterOptions(Class, Configuration, BatchWriterConfig)
-   */
-  @Deprecated
-  public static BatchWriterConfig getBatchWriterOptions(Class<?> implementingClass, Configuration conf) {
-    return org.apache.accumulo.core.client.mapreduce.lib.impl.OutputConfigurator.getBatchWriterOptions(implementingClass, conf);
-  }
-
-  /**
-   * Sets the directive to create new tables as necessary. Table names can only contain alphanumeric characters and underscores.
-   *
-   * <p>
-   * By default, this feature is <b>disabled</b>.
-   *
-   * @param implementingClass
-   *          the class whose name will be used as a prefix for the property configuration key
-   * @param conf
-   *          the Hadoop configuration object to configure
-   * @param enableFeature
-   *          the feature is enabled if true, disabled otherwise
-   * @deprecated since 1.6.0; Configure your job with the appropriate InputFormat or OutputFormat.
-   * @since 1.5.0
-   */
-  @Deprecated
-  public static void setCreateTables(Class<?> implementingClass, Configuration conf, boolean enableFeature) {
-    org.apache.accumulo.core.client.mapreduce.lib.impl.OutputConfigurator.setCreateTables(implementingClass, conf, enableFeature);
-  }
-
-  /**
-   * Determines whether tables are permitted to be created as needed.
-   *
-   * @param implementingClass
-   *          the class whose name will be used as a prefix for the property configuration key
-   * @param conf
-   *          the Hadoop configuration object to configure
-   * @return true if the feature is enabled, false otherwise
-   * @deprecated since 1.6.0; Configure your job with the appropriate InputFormat or OutputFormat.
-   * @since 1.5.0
-   * @see #setCreateTables(Class, Configuration, boolean)
-   */
-  @Deprecated
-  public static Boolean canCreateTables(Class<?> implementingClass, Configuration conf) {
-    return org.apache.accumulo.core.client.mapreduce.lib.impl.OutputConfigurator.canCreateTables(implementingClass, conf);
-  }
-
-  /**
-   * Sets the directive to use simulation mode for this job. In simulation mode, no output is produced. This is useful for testing.
-   *
-   * <p>
-   * By default, this feature is <b>disabled</b>.
-   *
-   * @param implementingClass
-   *          the class whose name will be used as a prefix for the property configuration key
-   * @param conf
-   *          the Hadoop configuration object to configure
-   * @param enableFeature
-   *          the feature is enabled if true, disabled otherwise
-   * @deprecated since 1.6.0; Configure your job with the appropriate InputFormat or OutputFormat.
-   * @since 1.5.0
-   */
-  @Deprecated
-  public static void setSimulationMode(Class<?> implementingClass, Configuration conf, boolean enableFeature) {
-    org.apache.accumulo.core.client.mapreduce.lib.impl.OutputConfigurator.setSimulationMode(implementingClass, conf, enableFeature);
-  }
-
-  /**
-   * Determines whether simulation mode is enabled.
-   *
-   * @param implementingClass
-   *          the class whose name will be used as a prefix for the property configuration key
-   * @param conf
-   *          the Hadoop configuration object to configure
-   * @return true if the feature is enabled, false otherwise
-   * @deprecated since 1.6.0; Configure your job with the appropriate InputFormat or OutputFormat.
-   * @since 1.5.0
-   * @see #setSimulationMode(Class, Configuration, boolean)
-   */
-  @Deprecated
-  public static Boolean getSimulationMode(Class<?> implementingClass, Configuration conf) {
-    return org.apache.accumulo.core.client.mapreduce.lib.impl.OutputConfigurator.getSimulationMode(implementingClass, conf);
-  }
-
-}
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/util/package-info.java b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/util/package-info.java
deleted file mode 100644
index 269ffea..0000000
--- a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/util/package-info.java
+++ /dev/null
@@ -1,22 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-/**
- * @deprecated since 1.6.0; This package was moved out of the public API.
- * @since 1.5.0
- */
-package org.apache.accumulo.core.client.mapreduce.lib.util;
-
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mock/IteratorAdapter.java b/core/src/main/java/org/apache/accumulo/core/client/mock/IteratorAdapter.java
deleted file mode 100644
index d88dac9..0000000
--- a/core/src/main/java/org/apache/accumulo/core/client/mock/IteratorAdapter.java
+++ /dev/null
@@ -1,33 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.core.client.mock;
-
-import org.apache.accumulo.core.data.Key;
-import org.apache.accumulo.core.data.Value;
-import org.apache.accumulo.core.iterators.SortedKeyValueIterator;
-
-/**
- * @deprecated since 1.8.0; use {@link org.apache.accumulo.core.iterators.IteratorAdapter} instead.
- */
-@Deprecated
-public class IteratorAdapter extends org.apache.accumulo.core.iterators.IteratorAdapter {
-
-  public IteratorAdapter(SortedKeyValueIterator<Key,Value> inner) {
-    super(inner);
-  }
-
-}
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mock/MockAccumulo.java b/core/src/main/java/org/apache/accumulo/core/client/mock/MockAccumulo.java
deleted file mode 100644
index f362add..0000000
--- a/core/src/main/java/org/apache/accumulo/core/client/mock/MockAccumulo.java
+++ /dev/null
@@ -1,148 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.core.client.mock;
-
-import java.util.Collection;
-import java.util.Collections;
-import java.util.EnumSet;
-import java.util.HashMap;
-import java.util.Map;
-import java.util.SortedSet;
-import java.util.concurrent.atomic.AtomicInteger;
-
-import org.apache.accumulo.core.client.BatchScanner;
-import org.apache.accumulo.core.client.admin.TimeType;
-import org.apache.accumulo.core.client.impl.Namespaces;
-import org.apache.accumulo.core.client.impl.Tables;
-import org.apache.accumulo.core.client.security.tokens.PasswordToken;
-import org.apache.accumulo.core.data.Mutation;
-import org.apache.accumulo.core.metadata.MetadataTable;
-import org.apache.accumulo.core.metadata.RootTable;
-import org.apache.accumulo.core.replication.ReplicationTable;
-import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.core.security.NamespacePermission;
-import org.apache.accumulo.core.security.SystemPermission;
-import org.apache.accumulo.core.security.TablePermission;
-import org.apache.hadoop.fs.FileSystem;
-import org.apache.hadoop.io.Text;
-
-/**
- * @deprecated since 1.8.0; use MiniAccumuloCluster or a standard mock framework instead.
- */
-@Deprecated
-public class MockAccumulo {
-  final Map<String,MockTable> tables = new HashMap<>();
-  final Map<String,MockNamespace> namespaces = new HashMap<>();
-  final Map<String,String> systemProperties = new HashMap<>();
-  Map<String,MockUser> users = new HashMap<>();
-  final FileSystem fs;
-  final AtomicInteger tableIdCounter = new AtomicInteger(0);
-
-  @Deprecated
-  MockAccumulo(FileSystem fs) {
-    MockUser root = new MockUser("root", new PasswordToken(new byte[0]), Authorizations.EMPTY);
-    root.permissions.add(SystemPermission.SYSTEM);
-    users.put(root.name, root);
-    namespaces.put(Namespaces.DEFAULT_NAMESPACE, new MockNamespace());
-    namespaces.put(Namespaces.ACCUMULO_NAMESPACE, new MockNamespace());
-    createTable("root", RootTable.NAME, true, TimeType.LOGICAL);
-    createTable("root", MetadataTable.NAME, true, TimeType.LOGICAL);
-    createTable("root", ReplicationTable.NAME, true, TimeType.LOGICAL);
-    this.fs = fs;
-  }
-
-  public FileSystem getFileSystem() {
-    return fs;
-  }
-
-  void setProperty(String key, String value) {
-    systemProperties.put(key, value);
-  }
-
-  String removeProperty(String key) {
-    return systemProperties.remove(key);
-  }
-
-  public void addMutation(String table, Mutation m) {
-    MockTable t = tables.get(table);
-    t.addMutation(m);
-  }
-
-  public BatchScanner createBatchScanner(String tableName, Authorizations authorizations) {
-    return new MockBatchScanner(tables.get(tableName), authorizations);
-  }
-
-  public void createTable(String username, String tableName, boolean useVersions, TimeType timeType) {
-    Map<String,String> opts = Collections.emptyMap();
-    createTable(username, tableName, useVersions, timeType, opts);
-  }
-
-  public void createTable(String username, String tableName, boolean useVersions, TimeType timeType, Map<String,String> properties) {
-    String namespace = Tables.qualify(tableName).getFirst();
-
-    if (!namespaceExists(namespace)) {
-      return;
-    }
-
-    MockNamespace n = namespaces.get(namespace);
-    MockTable t = new MockTable(n, useVersions, timeType, Integer.toString(tableIdCounter.incrementAndGet()), properties);
-    t.userPermissions.put(username, EnumSet.allOf(TablePermission.class));
-    t.setNamespaceName(namespace);
-    t.setNamespace(n);
-    tables.put(tableName, t);
-  }
-
-  public void createTable(String username, String tableName, TimeType timeType, Map<String,String> properties) {
-    String namespace = Tables.qualify(tableName).getFirst();
-    HashMap<String,String> props = new HashMap<>(properties);
-
-    if (!namespaceExists(namespace)) {
-      return;
-    }
-
-    MockNamespace n = namespaces.get(namespace);
-    MockTable t = new MockTable(n, timeType, Integer.toString(tableIdCounter.incrementAndGet()), props);
-    t.userPermissions.put(username, EnumSet.allOf(TablePermission.class));
-    t.setNamespaceName(namespace);
-    t.setNamespace(n);
-    tables.put(tableName, t);
-  }
-
-  public void createNamespace(String username, String namespace) {
-    if (!namespaceExists(namespace)) {
-      MockNamespace n = new MockNamespace();
-      n.userPermissions.put(username, EnumSet.allOf(NamespacePermission.class));
-      namespaces.put(namespace, n);
-    }
-  }
-
-  public void addSplits(String tableName, SortedSet<Text> partitionKeys) {
-    tables.get(tableName).addSplits(partitionKeys);
-  }
-
-  public Collection<Text> getSplits(String tableName) {
-    return tables.get(tableName).getSplits();
-  }
-
-  public void merge(String tableName, Text start, Text end) {
-    tables.get(tableName).merge(start, end);
-  }
-
-  private boolean namespaceExists(String namespace) {
-    return namespaces.containsKey(namespace);
-  }
-}
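The deleted `MockAccumulo.createTable` resolves a possibly qualified table name into a namespace plus simple name via `Tables.qualify`, then refuses to create the table if that namespace does not exist. A self-contained sketch of the qualification step, assuming the convention that a name like `ns1.mytable` splits at the first dot and an unqualified name falls back to the default namespace (the helper below is hypothetical, not the real `Tables.qualify`):

```java
// Sketch of table-name qualification as used by MockAccumulo.createTable.
// Hypothetical helper; the real logic lives in Accumulo's Tables.qualify.
public class QualifiedName {
  static final String DEFAULT_NAMESPACE = "";

  // Returns {namespace, simpleName} for a possibly namespace-qualified name.
  static String[] qualify(String tableName) {
    int dot = tableName.indexOf('.');
    if (dot < 0)
      return new String[] {DEFAULT_NAMESPACE, tableName};
    return new String[] {tableName.substring(0, dot), tableName.substring(dot + 1)};
  }
}
```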
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mock/MockBatchDeleter.java b/core/src/main/java/org/apache/accumulo/core/client/mock/MockBatchDeleter.java
deleted file mode 100644
index bacd844..0000000
--- a/core/src/main/java/org/apache/accumulo/core/client/mock/MockBatchDeleter.java
+++ /dev/null
@@ -1,76 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.core.client.mock;
-
-import java.util.Iterator;
-import java.util.Map.Entry;
-
-import org.apache.accumulo.core.client.BatchDeleter;
-import org.apache.accumulo.core.client.BatchWriter;
-import org.apache.accumulo.core.client.MutationsRejectedException;
-import org.apache.accumulo.core.client.TableNotFoundException;
-import org.apache.accumulo.core.data.Key;
-import org.apache.accumulo.core.data.Mutation;
-import org.apache.accumulo.core.data.Value;
-import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.core.security.ColumnVisibility;
-
-/**
- * {@link BatchDeleter} for a {@link MockAccumulo} instance. Behaves similarly to a regular {@link BatchDeleter}, with a few exceptions:
- * <ol>
- * <li>There is no waiting for memory to fill before flushing</li>
- * <li>Only one thread is used for writing</li>
- * </ol>
- *
- * Otherwise, it behaves as expected.
- *
- * @deprecated since 1.8.0; use MiniAccumuloCluster or a standard mock framework instead.
- */
-@Deprecated
-public class MockBatchDeleter extends MockBatchScanner implements BatchDeleter {
-
-  private final MockAccumulo acc;
-  private final String tableName;
-
-  /**
-   * Create a {@link BatchDeleter} for the specified instance on the specified table where the writer uses the specified {@link Authorizations}.
-   */
-  public MockBatchDeleter(MockAccumulo acc, String tableName, Authorizations auths) {
-    super(acc.tables.get(tableName), auths);
-    this.acc = acc;
-    this.tableName = tableName;
-  }
-
-  @Override
-  public void delete() throws MutationsRejectedException, TableNotFoundException {
-
-    BatchWriter writer = new MockBatchWriter(acc, tableName);
-    try {
-      Iterator<Entry<Key,Value>> iter = super.iterator();
-      while (iter.hasNext()) {
-        Entry<Key,Value> next = iter.next();
-        Key k = next.getKey();
-        Mutation m = new Mutation(k.getRow());
-        m.putDelete(k.getColumnFamily(), k.getColumnQualifier(), new ColumnVisibility(k.getColumnVisibility()), k.getTimestamp());
-        writer.addMutation(m);
-      }
-    } finally {
-      writer.close();
-    }
-  }
-
-}
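`MockBatchDeleter.delete()` above implements a scan-then-delete pattern: iterate the keys the configured scan returns, and write one delete marker per key back through a `BatchWriter`. A simplified, self-contained sketch of the same pattern, using a `TreeMap` in place of the sorted table and plain strings in place of `Key`/`Mutation` (all names here are illustrative):

```java
import java.util.Map;
import java.util.TreeMap;

// Sketch of the scan-then-delete pattern from MockBatchDeleter.delete():
// first collect the keys in range (the "scan"), then remove each one
// (the "writer" applying delete mutations). Simplified types throughout.
public class ScanThenDelete {
  static int deleteRange(TreeMap<String,String> table, String startRow, String endRow) {
    // copy first so the "scan" is not disturbed by the deletes it triggers
    Map<String,String> toDelete = new TreeMap<>(table.subMap(startRow, true, endRow, true));
    for (String row : toDelete.keySet())
      table.remove(row);
    return toDelete.size();
  }
}
```

Collecting before deleting mirrors the original code's structure, where the scanner iterator and the batch writer are separate objects with independent lifecycles.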
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mock/MockBatchScanner.java b/core/src/main/java/org/apache/accumulo/core/client/mock/MockBatchScanner.java
deleted file mode 100644
index 1ea27b5..0000000
--- a/core/src/main/java/org/apache/accumulo/core/client/mock/MockBatchScanner.java
+++ /dev/null
@@ -1,79 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.core.client.mock;
-
-import java.io.IOException;
-import java.util.ArrayList;
-import java.util.Collection;
-import java.util.Iterator;
-import java.util.List;
-import java.util.Map.Entry;
-
-import org.apache.accumulo.core.client.BatchScanner;
-import org.apache.accumulo.core.data.Key;
-import org.apache.accumulo.core.data.Range;
-import org.apache.accumulo.core.data.Value;
-import org.apache.accumulo.core.iterators.SortedKeyValueIterator;
-import org.apache.accumulo.core.iterators.SortedMapIterator;
-import org.apache.accumulo.core.security.Authorizations;
-import org.apache.commons.collections.iterators.IteratorChain;
-
-/**
- * @deprecated since 1.8.0; use MiniAccumuloCluster or a standard mock framework instead.
- */
-@Deprecated
-public class MockBatchScanner extends MockScannerBase implements BatchScanner {
-
-  List<Range> ranges = null;
-
-  public MockBatchScanner(MockTable mockTable, Authorizations authorizations) {
-    super(mockTable, authorizations);
-  }
-
-  @Override
-  public void setRanges(Collection<Range> ranges) {
-    if (ranges == null || ranges.size() == 0) {
-      throw new IllegalArgumentException("ranges must be non null and contain at least 1 range");
-    }
-
-    this.ranges = new ArrayList<>(ranges);
-  }
-
-  @SuppressWarnings("unchecked")
-  @Override
-  public Iterator<Entry<Key,Value>> iterator() {
-    if (ranges == null) {
-      throw new IllegalStateException("ranges not set");
-    }
-
-    IteratorChain chain = new IteratorChain();
-    for (Range range : ranges) {
-      SortedKeyValueIterator<Key,Value> i = new SortedMapIterator(table.table);
-      try {
-        i = createFilter(i);
-        i.seek(range, createColumnBSS(fetchedColumns), !fetchedColumns.isEmpty());
-        chain.addIterator(new IteratorAdapter(i));
-      } catch (IOException e) {
-        throw new RuntimeException(e);
-      }
-    }
-    return chain;
-  }
-
-  @Override
-  public void close() {}
-}
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mock/MockBatchWriter.java b/core/src/main/java/org/apache/accumulo/core/client/mock/MockBatchWriter.java
deleted file mode 100644
index 53a0ddc..0000000
--- a/core/src/main/java/org/apache/accumulo/core/client/mock/MockBatchWriter.java
+++ /dev/null
@@ -1,59 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.core.client.mock;
-
-import static com.google.common.base.Preconditions.checkArgument;
-
-import org.apache.accumulo.core.client.BatchWriter;
-import org.apache.accumulo.core.client.MutationsRejectedException;
-import org.apache.accumulo.core.data.Mutation;
-
-/**
- * @deprecated since 1.8.0; use MiniAccumuloCluster or a standard mock framework instead.
- */
-@Deprecated
-public class MockBatchWriter implements BatchWriter {
-
-  final String tablename;
-  final MockAccumulo acu;
-
-  MockBatchWriter(MockAccumulo acu, String tablename) {
-    this.acu = acu;
-    this.tablename = tablename;
-  }
-
-  @Override
-  public void addMutation(Mutation m) throws MutationsRejectedException {
-    checkArgument(m != null, "m is null");
-    acu.addMutation(tablename, m);
-  }
-
-  @Override
-  public void addMutations(Iterable<Mutation> iterable) throws MutationsRejectedException {
-    checkArgument(iterable != null, "iterable is null");
-    for (Mutation m : iterable) {
-      acu.addMutation(tablename, m);
-    }
-  }
-
-  @Override
-  public void flush() throws MutationsRejectedException {}
-
-  @Override
-  public void close() throws MutationsRejectedException {}
-
-}
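Per the deprecation note, code that used `MockBatchWriter` to capture writes can instead stub `BatchWriter` with a standard mock framework. A minimal sketch using Mockito (shown as one option, not the only one):

```java
// Stub BatchWriter and verify the mutation the code under test produces.
BatchWriter writer = Mockito.mock(BatchWriter.class);

Mutation m = new Mutation("row1");
m.put("cf", "cq", new Value("value".getBytes()));
writer.addMutation(m);                  // the code under test would do this

Mockito.verify(writer).addMutation(m);  // assert the interaction happened
```

This checks interactions rather than state; for end-to-end behavior, MiniAccumuloCluster is the closer substitute.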
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mock/MockConfiguration.java b/core/src/main/java/org/apache/accumulo/core/client/mock/MockConfiguration.java
deleted file mode 100644
index 244d6f8..0000000
--- a/core/src/main/java/org/apache/accumulo/core/client/mock/MockConfiguration.java
+++ /dev/null
@@ -1,72 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.core.client.mock;
-
-import java.util.Map;
-import java.util.Map.Entry;
-
-import org.apache.accumulo.core.conf.AccumuloConfiguration;
-import org.apache.accumulo.core.conf.Property;
-
-import com.google.common.base.Predicate;
-
-/**
- * @deprecated since 1.8.0; use MiniAccumuloCluster or a standard mock framework instead.
- */
-@Deprecated
-class MockConfiguration extends AccumuloConfiguration {
-  Map<String,String> map;
-
-  MockConfiguration(Map<String,String> settings) {
-    map = settings;
-  }
-
-  public void put(String k, String v) {
-    map.put(k, v);
-  }
-
-  @Override
-  public String get(Property property) {
-    return map.get(property.getKey());
-  }
-
-  /**
-   * Don't use this method. It has been deprecated. Its parameters are not public API and are subject to change.
-   *
-   * @deprecated since 1.7.0; use {@link #getProperties(Map, Predicate)} instead.
-   */
-  @Deprecated
-  public void getProperties(Map<String,String> props, final PropertyFilter filter) {
-    // convert PropertyFilter to Predicate
-    getProperties(props, new Predicate<String>() {
-
-      @Override
-      public boolean apply(String input) {
-        return filter.accept(input);
-      }
-    });
-  }
-
-  @Override
-  public void getProperties(Map<String,String> props, Predicate<String> filter) {
-    for (Entry<String,String> entry : map.entrySet()) {
-      if (filter.apply(entry.getKey())) {
-        props.put(entry.getKey(), entry.getValue());
-      }
-    }
-  }
-}
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mock/MockConnector.java b/core/src/main/java/org/apache/accumulo/core/client/mock/MockConnector.java
deleted file mode 100644
index 9b5601b..0000000
--- a/core/src/main/java/org/apache/accumulo/core/client/mock/MockConnector.java
+++ /dev/null
@@ -1,162 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.core.client.mock;
-
-import java.util.concurrent.TimeUnit;
-
-import org.apache.accumulo.core.client.AccumuloSecurityException;
-import org.apache.accumulo.core.client.BatchDeleter;
-import org.apache.accumulo.core.client.BatchScanner;
-import org.apache.accumulo.core.client.BatchWriter;
-import org.apache.accumulo.core.client.BatchWriterConfig;
-import org.apache.accumulo.core.client.ConditionalWriter;
-import org.apache.accumulo.core.client.ConditionalWriterConfig;
-import org.apache.accumulo.core.client.Connector;
-import org.apache.accumulo.core.client.Instance;
-import org.apache.accumulo.core.client.MultiTableBatchWriter;
-import org.apache.accumulo.core.client.Scanner;
-import org.apache.accumulo.core.client.TableNotFoundException;
-import org.apache.accumulo.core.client.admin.InstanceOperations;
-import org.apache.accumulo.core.client.admin.NamespaceOperations;
-import org.apache.accumulo.core.client.admin.ReplicationOperations;
-import org.apache.accumulo.core.client.admin.SecurityOperations;
-import org.apache.accumulo.core.client.admin.TableOperations;
-import org.apache.accumulo.core.client.impl.Credentials;
-import org.apache.accumulo.core.client.impl.thrift.SecurityErrorCode;
-import org.apache.accumulo.core.client.security.tokens.NullToken;
-import org.apache.accumulo.core.security.Authorizations;
-
-/**
- * @deprecated since 1.8.0; use MiniAccumuloCluster or a standard mock framework instead.
- */
-@Deprecated
-public class MockConnector extends Connector {
-
-  String username;
-  private final MockAccumulo acu;
-  private final Instance instance;
-
-  MockConnector(String username, MockInstance instance) throws AccumuloSecurityException {
-    this(new Credentials(username, new NullToken()), new MockAccumulo(MockInstance.getDefaultFileSystem()), instance);
-  }
-
-  MockConnector(Credentials credentials, MockAccumulo acu, MockInstance instance) throws AccumuloSecurityException {
-    if (credentials.getToken().isDestroyed())
-      throw new AccumuloSecurityException(credentials.getPrincipal(), SecurityErrorCode.TOKEN_EXPIRED);
-    this.username = credentials.getPrincipal();
-    this.acu = acu;
-    this.instance = instance;
-  }
-
-  @Override
-  public BatchScanner createBatchScanner(String tableName, Authorizations authorizations, int numQueryThreads) throws TableNotFoundException {
-    if (acu.tables.get(tableName) == null)
-      throw new TableNotFoundException(tableName, tableName, "no such table");
-    return acu.createBatchScanner(tableName, authorizations);
-  }
-
-  @Deprecated
-  @Override
-  public BatchDeleter createBatchDeleter(String tableName, Authorizations authorizations, int numQueryThreads, long maxMemory, long maxLatency,
-      int maxWriteThreads) throws TableNotFoundException {
-    if (acu.tables.get(tableName) == null)
-      throw new TableNotFoundException(tableName, tableName, "no such table");
-    return new MockBatchDeleter(acu, tableName, authorizations);
-  }
-
-  @Override
-  public BatchDeleter createBatchDeleter(String tableName, Authorizations authorizations, int numQueryThreads, BatchWriterConfig config)
-      throws TableNotFoundException {
-    return createBatchDeleter(tableName, authorizations, numQueryThreads, config.getMaxMemory(), config.getMaxLatency(TimeUnit.MILLISECONDS),
-        config.getMaxWriteThreads());
-  }
-
-  @Deprecated
-  @Override
-  public BatchWriter createBatchWriter(String tableName, long maxMemory, long maxLatency, int maxWriteThreads) throws TableNotFoundException {
-    if (acu.tables.get(tableName) == null)
-      throw new TableNotFoundException(tableName, tableName, "no such table");
-    return new MockBatchWriter(acu, tableName);
-  }
-
-  @Override
-  public BatchWriter createBatchWriter(String tableName, BatchWriterConfig config) throws TableNotFoundException {
-    return createBatchWriter(tableName, config.getMaxMemory(), config.getMaxLatency(TimeUnit.MILLISECONDS), config.getMaxWriteThreads());
-  }
-
-  @Deprecated
-  @Override
-  public MultiTableBatchWriter createMultiTableBatchWriter(long maxMemory, long maxLatency, int maxWriteThreads) {
-    return new MockMultiTableBatchWriter(acu);
-  }
-
-  @Override
-  public MultiTableBatchWriter createMultiTableBatchWriter(BatchWriterConfig config) {
-    return createMultiTableBatchWriter(config.getMaxMemory(), config.getMaxLatency(TimeUnit.MILLISECONDS), config.getMaxWriteThreads());
-  }
-
-  @Override
-  public Scanner createScanner(String tableName, Authorizations authorizations) throws TableNotFoundException {
-    MockTable table = acu.tables.get(tableName);
-    if (table == null)
-      throw new TableNotFoundException(tableName, tableName, "no such table");
-    return new MockScanner(table, authorizations);
-  }
-
-  @Override
-  public Instance getInstance() {
-    return instance;
-  }
-
-  @Override
-  public String whoami() {
-    return username;
-  }
-
-  @Override
-  public TableOperations tableOperations() {
-    return new MockTableOperations(acu, username);
-  }
-
-  @Override
-  public SecurityOperations securityOperations() {
-    return new MockSecurityOperations(acu);
-  }
-
-  @Override
-  public InstanceOperations instanceOperations() {
-    return new MockInstanceOperations(acu);
-  }
-
-  @Override
-  public NamespaceOperations namespaceOperations() {
-    return new MockNamespaceOperations(acu, username);
-  }
-
-  @Override
-  public ConditionalWriter createConditionalWriter(String tableName, ConditionalWriterConfig config) throws TableNotFoundException {
-    // TODO add implementation
-    throw new UnsupportedOperationException();
-  }
-
-  @Override
-  public ReplicationOperations replicationOperations() {
-    // TODO add implementation
-    throw new UnsupportedOperationException();
-  }
-
-}
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mock/MockInstance.java b/core/src/main/java/org/apache/accumulo/core/client/mock/MockInstance.java
deleted file mode 100644
index 50d212f..0000000
--- a/core/src/main/java/org/apache/accumulo/core/client/mock/MockInstance.java
+++ /dev/null
@@ -1,163 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.core.client.mock;
-
-import java.io.IOException;
-import java.nio.ByteBuffer;
-import java.util.Collections;
-import java.util.HashMap;
-import java.util.List;
-import java.util.Map;
-
-import org.apache.accumulo.core.client.AccumuloException;
-import org.apache.accumulo.core.client.AccumuloSecurityException;
-import org.apache.accumulo.core.client.Connector;
-import org.apache.accumulo.core.client.Instance;
-import org.apache.accumulo.core.client.impl.Credentials;
-import org.apache.accumulo.core.client.impl.thrift.SecurityErrorCode;
-import org.apache.accumulo.core.client.security.tokens.AuthenticationToken;
-import org.apache.accumulo.core.client.security.tokens.PasswordToken;
-import org.apache.accumulo.core.conf.AccumuloConfiguration;
-import org.apache.accumulo.core.conf.DefaultConfiguration;
-import org.apache.accumulo.core.util.ByteBufferUtil;
-import org.apache.accumulo.core.util.CachedConfiguration;
-import org.apache.accumulo.core.util.TextUtil;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.fs.FileSystem;
-import org.apache.hadoop.io.Text;
-
-/**
- * Mock Accumulo provides an in memory implementation of the Accumulo client API. It is possible that the behavior of this implementation may differ subtly from
- * the behavior of Accumulo. This could result in unit tests that pass on Mock Accumulo and fail on Accumulo or vice versa. Documenting the differences would be
- * difficult and is not done.
- *
- * <p>
- * An alternative to Mock Accumulo called MiniAccumuloCluster was introduced in Accumulo 1.5. MiniAccumuloCluster spins up actual Accumulo server processes, can
- * be used for unit testing, and its behavior should match Accumulo. The drawback of MiniAccumuloCluster is that it starts more slowly than Mock Accumulo.
- *
- * @deprecated since 1.8.0; use MiniAccumuloCluster or a standard mock framework instead.
- */
-@Deprecated
-public class MockInstance implements Instance {
-
-  static final String genericAddress = "localhost:1234";
-  static final Map<String,MockAccumulo> instances = new HashMap<>();
-  MockAccumulo acu;
-  String instanceName;
-
-  public MockInstance() {
-    acu = new MockAccumulo(getDefaultFileSystem());
-    instanceName = "mock-instance";
-  }
-
-  static FileSystem getDefaultFileSystem() {
-    try {
-      Configuration conf = CachedConfiguration.getInstance();
-      conf.set("fs.file.impl", "org.apache.hadoop.fs.LocalFileSystem");
-      conf.set("fs.default.name", "file:///");
-      return FileSystem.get(CachedConfiguration.getInstance());
-    } catch (IOException ex) {
-      throw new RuntimeException(ex);
-    }
-  }
-
-  public MockInstance(String instanceName) {
-    this(instanceName, getDefaultFileSystem());
-  }
-
-  public MockInstance(String instanceName, FileSystem fs) {
-    synchronized (instances) {
-      if (instances.containsKey(instanceName))
-        acu = instances.get(instanceName);
-      else
-        instances.put(instanceName, acu = new MockAccumulo(fs));
-    }
-    this.instanceName = instanceName;
-  }
-
-  @Override
-  public String getRootTabletLocation() {
-    return genericAddress;
-  }
-
-  @Override
-  public List<String> getMasterLocations() {
-    return Collections.singletonList(genericAddress);
-  }
-
-  @Override
-  public String getInstanceID() {
-    return "mock-instance-id";
-  }
-
-  @Override
-  public String getInstanceName() {
-    return instanceName;
-  }
-
-  @Override
-  public String getZooKeepers() {
-    return "localhost";
-  }
-
-  @Override
-  public int getZooKeepersSessionTimeOut() {
-    return 30 * 1000;
-  }
-
-  @Override
-  @Deprecated
-  public Connector getConnector(String user, byte[] pass) throws AccumuloException, AccumuloSecurityException {
-    return getConnector(user, new PasswordToken(pass));
-  }
-
-  @Override
-  @Deprecated
-  public Connector getConnector(String user, ByteBuffer pass) throws AccumuloException, AccumuloSecurityException {
-    return getConnector(user, ByteBufferUtil.toBytes(pass));
-  }
-
-  @Override
-  @Deprecated
-  public Connector getConnector(String user, CharSequence pass) throws AccumuloException, AccumuloSecurityException {
-    return getConnector(user, TextUtil.getBytes(new Text(pass.toString())));
-  }
-
-  AccumuloConfiguration conf = null;
-
-  @Deprecated
-  @Override
-  public AccumuloConfiguration getConfiguration() {
-    return conf == null ? DefaultConfiguration.getInstance() : conf;
-  }
-
-  @Override
-  @Deprecated
-  public void setConfiguration(AccumuloConfiguration conf) {
-    this.conf = conf;
-  }
-
-  @Override
-  public Connector getConnector(String principal, AuthenticationToken token) throws AccumuloException, AccumuloSecurityException {
-    Connector conn = new MockConnector(new Credentials(principal, token), acu, this);
-    if (!acu.users.containsKey(principal))
-      conn.securityOperations().createLocalUser(principal, (PasswordToken) token);
-    else if (!acu.users.get(principal).token.equals(token))
-      throw new AccumuloSecurityException(principal, SecurityErrorCode.BAD_CREDENTIALS);
-    return conn;
-  }
-}
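The `MockInstance` javadoc above names MiniAccumuloCluster as the replacement. A minimal sketch of that substitution; the directory and passwords are arbitrary test values, and real server processes are started, so setup is slower than the mock but behaves like Accumulo:

```java
// Stand up a throwaway real cluster in place of MockInstance.
File tmpDir = Files.createTempDirectory("mini-accumulo").toFile();
MiniAccumuloCluster cluster = new MiniAccumuloCluster(tmpDir, "rootPass");
cluster.start();
try {
  Connector conn = cluster.getConnector("root", "rootPass");
  conn.tableOperations().create("test");
  // ... exercise the code under test against a real instance ...
} finally {
  cluster.stop();
}
```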
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mock/MockInstanceOperations.java b/core/src/main/java/org/apache/accumulo/core/client/mock/MockInstanceOperations.java
deleted file mode 100644
index e264104..0000000
--- a/core/src/main/java/org/apache/accumulo/core/client/mock/MockInstanceOperations.java
+++ /dev/null
@@ -1,97 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.core.client.mock;
-
-import java.util.ArrayList;
-import java.util.List;
-import java.util.Map;
-
-import org.apache.accumulo.core.client.AccumuloException;
-import org.apache.accumulo.core.client.AccumuloSecurityException;
-import org.apache.accumulo.core.client.admin.ActiveCompaction;
-import org.apache.accumulo.core.client.admin.ActiveScan;
-import org.apache.accumulo.core.client.admin.InstanceOperations;
-import org.apache.accumulo.start.classloader.vfs.AccumuloVFSClassLoader;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-/**
- * @deprecated since 1.8.0; use MiniAccumuloCluster or a standard mock framework instead.
- */
-@Deprecated
-class MockInstanceOperations implements InstanceOperations {
-  private static final Logger log = LoggerFactory.getLogger(MockInstanceOperations.class);
-  MockAccumulo acu;
-
-  public MockInstanceOperations(MockAccumulo acu) {
-    this.acu = acu;
-  }
-
-  @Override
-  public void setProperty(String property, String value) throws AccumuloException, AccumuloSecurityException {
-    acu.setProperty(property, value);
-  }
-
-  @Override
-  public void removeProperty(String property) throws AccumuloException, AccumuloSecurityException {
-    acu.removeProperty(property);
-  }
-
-  @Override
-  public Map<String,String> getSystemConfiguration() throws AccumuloException, AccumuloSecurityException {
-    return acu.systemProperties;
-  }
-
-  @Override
-  public Map<String,String> getSiteConfiguration() throws AccumuloException, AccumuloSecurityException {
-    return acu.systemProperties;
-  }
-
-  @Override
-  public List<String> getTabletServers() {
-    return new ArrayList<>();
-  }
-
-  @Override
-  public List<ActiveScan> getActiveScans(String tserver) throws AccumuloException, AccumuloSecurityException {
-    return new ArrayList<>();
-  }
-
-  @Override
-  public boolean testClassLoad(String className, String asTypeName) throws AccumuloException, AccumuloSecurityException {
-    try {
-      AccumuloVFSClassLoader.loadClass(className, Class.forName(asTypeName));
-    } catch (ClassNotFoundException e) {
-      log.warn("Could not find class named '" + className + "' in testClassLoad.", e);
-      return false;
-    }
-    return true;
-  }
-
-  @Override
-  public List<ActiveCompaction> getActiveCompactions(String tserver) throws AccumuloException, AccumuloSecurityException {
-    return new ArrayList<>();
-  }
-
-  @Override
-  public void ping(String tserver) throws AccumuloException {
-
-  }
-
-  @Override
-  public void waitForBalance() throws AccumuloException {}
-}
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mock/MockMultiTableBatchWriter.java b/core/src/main/java/org/apache/accumulo/core/client/mock/MockMultiTableBatchWriter.java
deleted file mode 100644
index 5b9bc2b..0000000
--- a/core/src/main/java/org/apache/accumulo/core/client/mock/MockMultiTableBatchWriter.java
+++ /dev/null
@@ -1,60 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.core.client.mock;
-
-import java.util.HashMap;
-import java.util.Map;
-
-import org.apache.accumulo.core.client.AccumuloException;
-import org.apache.accumulo.core.client.AccumuloSecurityException;
-import org.apache.accumulo.core.client.BatchWriter;
-import org.apache.accumulo.core.client.MultiTableBatchWriter;
-import org.apache.accumulo.core.client.MutationsRejectedException;
-import org.apache.accumulo.core.client.TableNotFoundException;
-
-/**
- * @deprecated since 1.8.0; use MiniAccumuloCluster or a standard mock framework instead.
- */
-@Deprecated
-public class MockMultiTableBatchWriter implements MultiTableBatchWriter {
-  MockAccumulo acu = null;
-  Map<String,MockBatchWriter> bws = null;
-
-  public MockMultiTableBatchWriter(MockAccumulo acu) {
-    this.acu = acu;
-    bws = new HashMap<>();
-  }
-
-  @Override
-  public BatchWriter getBatchWriter(String table) throws AccumuloException, AccumuloSecurityException, TableNotFoundException {
-    if (!bws.containsKey(table)) {
-      bws.put(table, new MockBatchWriter(acu, table));
-    }
-    return bws.get(table);
-  }
-
-  @Override
-  public void flush() throws MutationsRejectedException {}
-
-  @Override
-  public void close() throws MutationsRejectedException {}
-
-  @Override
-  public boolean isClosed() {
-    throw new UnsupportedOperationException();
-  }
-}
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mock/MockNamespace.java b/core/src/main/java/org/apache/accumulo/core/client/mock/MockNamespace.java
deleted file mode 100644
index 456580b..0000000
--- a/core/src/main/java/org/apache/accumulo/core/client/mock/MockNamespace.java
+++ /dev/null
@@ -1,58 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.core.client.mock;
-
-import java.util.EnumSet;
-import java.util.HashMap;
-import java.util.LinkedList;
-import java.util.List;
-import java.util.Map;
-import java.util.Map.Entry;
-
-import org.apache.accumulo.core.conf.AccumuloConfiguration;
-import org.apache.accumulo.core.conf.Property;
-import org.apache.accumulo.core.security.NamespacePermission;
-
-/**
- * @deprecated since 1.8.0; use MiniAccumuloCluster or a standard mock framework instead.
- */
-@Deprecated
-public class MockNamespace {
-
-  final HashMap<String,String> settings;
-  Map<String,EnumSet<NamespacePermission>> userPermissions = new HashMap<>();
-
-  public MockNamespace() {
-    settings = new HashMap<>();
-    for (Entry<String,String> entry : AccumuloConfiguration.getDefaultConfiguration()) {
-      String key = entry.getKey();
-      if (key.startsWith(Property.TABLE_PREFIX.getKey())) {
-        settings.put(key, entry.getValue());
-      }
-    }
-  }
-
-  public List<String> getTables(MockAccumulo acu) {
-    List<String> l = new LinkedList<>();
-    for (String t : acu.tables.keySet()) {
-      if (acu.tables.get(t).getNamespace().equals(this)) {
-        l.add(t);
-      }
-    }
-    return l;
-  }
-}
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mock/MockNamespaceOperations.java b/core/src/main/java/org/apache/accumulo/core/client/mock/MockNamespaceOperations.java
deleted file mode 100644
index b1cb980..0000000
--- a/core/src/main/java/org/apache/accumulo/core/client/mock/MockNamespaceOperations.java
+++ /dev/null
@@ -1,138 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.core.client.mock;
-
-import java.util.HashMap;
-import java.util.Map;
-import java.util.Map.Entry;
-import java.util.SortedSet;
-import java.util.TreeSet;
-
-import org.apache.accumulo.core.client.AccumuloException;
-import org.apache.accumulo.core.client.AccumuloSecurityException;
-import org.apache.accumulo.core.client.NamespaceExistsException;
-import org.apache.accumulo.core.client.NamespaceNotEmptyException;
-import org.apache.accumulo.core.client.NamespaceNotFoundException;
-import org.apache.accumulo.core.client.impl.NamespaceOperationsHelper;
-import org.apache.accumulo.core.client.impl.Namespaces;
-import org.apache.accumulo.core.client.impl.Tables;
-import org.apache.accumulo.start.classloader.vfs.AccumuloVFSClassLoader;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-/**
- * @deprecated since 1.8.0; use MiniAccumuloCluster or a standard mock framework instead.
- */
-@Deprecated
-class MockNamespaceOperations extends NamespaceOperationsHelper {
-
-  private static final Logger log = LoggerFactory.getLogger(MockNamespaceOperations.class);
-
-  final private MockAccumulo acu;
-  final private String username;
-
-  MockNamespaceOperations(MockAccumulo acu, String username) {
-    this.acu = acu;
-    this.username = username;
-  }
-
-  @Override
-  public SortedSet<String> list() {
-    return new TreeSet<>(acu.namespaces.keySet());
-  }
-
-  @Override
-  public boolean exists(String namespace) {
-    return acu.namespaces.containsKey(namespace);
-  }
-
-  @Override
-  public void create(String namespace) throws AccumuloException, AccumuloSecurityException, NamespaceExistsException {
-    if (!namespace.matches(Namespaces.VALID_NAME_REGEX))
-      throw new IllegalArgumentException();
-
-    if (exists(namespace))
-      throw new NamespaceExistsException(namespace, namespace, "");
-    else
-      acu.createNamespace(username, namespace);
-  }
-
-  @Override
-  public void delete(String namespace) throws AccumuloException, AccumuloSecurityException, NamespaceNotFoundException, NamespaceNotEmptyException {
-    if (acu.namespaces.get(namespace).getTables(acu).size() > 0) {
-      throw new NamespaceNotEmptyException(null, namespace, null);
-    }
-    acu.namespaces.remove(namespace);
-  }
-
-  @Override
-  public void rename(String oldNamespaceName, String newNamespaceName) throws AccumuloSecurityException, NamespaceNotFoundException, AccumuloException,
-      NamespaceExistsException {
-    if (!exists(oldNamespaceName))
-      throw new NamespaceNotFoundException(oldNamespaceName, oldNamespaceName, "");
-    if (exists(newNamespaceName))
-      throw new NamespaceExistsException(newNamespaceName, newNamespaceName, "");
-
-    MockNamespace n = acu.namespaces.get(oldNamespaceName);
-    for (String t : n.getTables(acu)) {
-      String tt = newNamespaceName + "." + Tables.qualify(t).getSecond();
-      acu.tables.put(tt, acu.tables.remove(t));
-    }
-    acu.namespaces.put(newNamespaceName, acu.namespaces.remove(oldNamespaceName));
-  }
-
-  @Override
-  public void setProperty(String namespace, String property, String value) throws AccumuloException, AccumuloSecurityException {
-    acu.namespaces.get(namespace).settings.put(property, value);
-  }
-
-  @Override
-  public void removeProperty(String namespace, String property) throws AccumuloException, AccumuloSecurityException {
-    acu.namespaces.get(namespace).settings.remove(property);
-  }
-
-  @Override
-  public Iterable<Entry<String,String>> getProperties(String namespace) throws NamespaceNotFoundException {
-    if (!exists(namespace)) {
-      throw new NamespaceNotFoundException(namespace, namespace, "");
-    }
-
-    return acu.namespaces.get(namespace).settings.entrySet();
-  }
-
-  @Override
-  public Map<String,String> namespaceIdMap() {
-    Map<String,String> result = new HashMap<>();
-    for (String table : acu.tables.keySet()) {
-      result.put(table, table);
-    }
-    return result;
-  }
-
-  @Override
-  public boolean testClassLoad(String namespace, String className, String asTypeName) throws AccumuloException, AccumuloSecurityException,
-      NamespaceNotFoundException {
-
-    try {
-      AccumuloVFSClassLoader.loadClass(className, Class.forName(asTypeName));
-    } catch (ClassNotFoundException e) {
-      log.warn("Could not load class '" + className + "' with type name '" + asTypeName + "' in testClassLoad()", e);
-      return false;
-    }
-    return true;
-  }
-}
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mock/MockScanner.java b/core/src/main/java/org/apache/accumulo/core/client/mock/MockScanner.java
deleted file mode 100644
index 1e36964..0000000
--- a/core/src/main/java/org/apache/accumulo/core/client/mock/MockScanner.java
+++ /dev/null
@@ -1,127 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.core.client.mock;
-
-import java.io.IOException;
-import java.util.Iterator;
-import java.util.Map.Entry;
-import java.util.concurrent.TimeUnit;
-
-import org.apache.accumulo.core.client.Scanner;
-import org.apache.accumulo.core.data.Key;
-import org.apache.accumulo.core.data.Range;
-import org.apache.accumulo.core.data.Value;
-import org.apache.accumulo.core.iterators.Filter;
-import org.apache.accumulo.core.iterators.SortedKeyValueIterator;
-import org.apache.accumulo.core.iterators.SortedMapIterator;
-import org.apache.accumulo.core.security.Authorizations;
-
-/**
- * @deprecated since 1.8.0; use MiniAccumuloCluster or a standard mock framework instead.
- */
-@Deprecated
-public class MockScanner extends MockScannerBase implements Scanner {
-
-  int batchSize = 0;
-  Range range = new Range();
-
-  MockScanner(MockTable table, Authorizations auths) {
-    super(table, auths);
-  }
-
-  @Deprecated
-  @Override
-  public void setTimeOut(int timeOut) {
-    if (timeOut == Integer.MAX_VALUE)
-      setTimeout(Long.MAX_VALUE, TimeUnit.MILLISECONDS);
-    else
-      setTimeout(timeOut, TimeUnit.SECONDS);
-  }
-
-  @Deprecated
-  @Override
-  public int getTimeOut() {
-    long timeout = getTimeout(TimeUnit.SECONDS);
-    if (timeout >= Integer.MAX_VALUE)
-      return Integer.MAX_VALUE;
-    return (int) timeout;
-  }
-
-  @Override
-  public void setRange(Range range) {
-    this.range = range;
-  }
-
-  @Override
-  public Range getRange() {
-    return this.range;
-  }
-
-  @Override
-  public void setBatchSize(int size) {
-    this.batchSize = size;
-  }
-
-  @Override
-  public int getBatchSize() {
-    return this.batchSize;
-  }
-
-  @Override
-  public void enableIsolation() {}
-
-  @Override
-  public void disableIsolation() {}
-
-  static class RangeFilter extends Filter {
-    Range range;
-
-    RangeFilter(SortedKeyValueIterator<Key,Value> i, Range range) {
-      setSource(i);
-      this.range = range;
-    }
-
-    @Override
-    public boolean accept(Key k, Value v) {
-      return range.contains(k);
-    }
-  }
-
-  @Override
-  public Iterator<Entry<Key,Value>> iterator() {
-    SortedKeyValueIterator<Key,Value> i = new SortedMapIterator(table.table);
-    try {
-      i = new RangeFilter(createFilter(i), range);
-      i.seek(range, createColumnBSS(fetchedColumns), !fetchedColumns.isEmpty());
-      return new IteratorAdapter(i);
-    } catch (IOException e) {
-      throw new RuntimeException(e);
-    }
-
-  }
-
-  @Override
-  public long getReadaheadThreshold() {
-    return 0;
-  }
-
-  @Override
-  public void setReadaheadThreshold(long batches) {
-
-  }
-
-}
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mock/MockScannerBase.java b/core/src/main/java/org/apache/accumulo/core/client/mock/MockScannerBase.java
deleted file mode 100644
index ad79ec0..0000000
--- a/core/src/main/java/org/apache/accumulo/core/client/mock/MockScannerBase.java
+++ /dev/null
@@ -1,158 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.core.client.mock;
-
-import java.io.IOException;
-import java.util.ArrayList;
-import java.util.Collection;
-import java.util.HashSet;
-import java.util.Iterator;
-import java.util.Map.Entry;
-
-import org.apache.accumulo.core.client.impl.ScannerOptions;
-import org.apache.accumulo.core.client.sample.SamplerConfiguration;
-import org.apache.accumulo.core.conf.AccumuloConfiguration;
-import org.apache.accumulo.core.data.ArrayByteSequence;
-import org.apache.accumulo.core.data.ByteSequence;
-import org.apache.accumulo.core.data.Column;
-import org.apache.accumulo.core.data.Key;
-import org.apache.accumulo.core.data.Value;
-import org.apache.accumulo.core.iterators.IteratorEnvironment;
-import org.apache.accumulo.core.iterators.IteratorUtil;
-import org.apache.accumulo.core.iterators.IteratorUtil.IteratorScope;
-import org.apache.accumulo.core.iterators.SortedKeyValueIterator;
-import org.apache.accumulo.core.iterators.system.ColumnFamilySkippingIterator;
-import org.apache.accumulo.core.iterators.system.ColumnQualifierFilter;
-import org.apache.accumulo.core.iterators.system.DeletingIterator;
-import org.apache.accumulo.core.iterators.system.MultiIterator;
-import org.apache.accumulo.core.iterators.system.VisibilityFilter;
-import org.apache.accumulo.core.security.Authorizations;
-import org.apache.commons.lang.NotImplementedException;
-
-/**
- * @deprecated since 1.8.0; use MiniAccumuloCluster or a standard mock framework instead.
- */
-@Deprecated
-public class MockScannerBase extends ScannerOptions {
-
-  protected final MockTable table;
-  protected final Authorizations auths;
-
-  MockScannerBase(MockTable mockTable, Authorizations authorizations) {
-    this.table = mockTable;
-    this.auths = authorizations;
-  }
-
-  static HashSet<ByteSequence> createColumnBSS(Collection<Column> columns) {
-    HashSet<ByteSequence> columnSet = new HashSet<>();
-    for (Column c : columns) {
-      columnSet.add(new ArrayByteSequence(c.getColumnFamily()));
-    }
-    return columnSet;
-  }
-
-  static class MockIteratorEnvironment implements IteratorEnvironment {
-
-    private final Authorizations auths;
-
-    MockIteratorEnvironment(Authorizations auths) {
-      this.auths = auths;
-    }
-
-    @Override
-    public SortedKeyValueIterator<Key,Value> reserveMapFileReader(String mapFileName) throws IOException {
-      throw new NotImplementedException();
-    }
-
-    @Override
-    public AccumuloConfiguration getConfig() {
-      return AccumuloConfiguration.getDefaultConfiguration();
-    }
-
-    @Override
-    public IteratorScope getIteratorScope() {
-      return IteratorScope.scan;
-    }
-
-    @Override
-    public boolean isFullMajorCompaction() {
-      return false;
-    }
-
-    private ArrayList<SortedKeyValueIterator<Key,Value>> topLevelIterators = new ArrayList<>();
-
-    @Override
-    public void registerSideChannel(SortedKeyValueIterator<Key,Value> iter) {
-      topLevelIterators.add(iter);
-    }
-
-    @Override
-    public Authorizations getAuthorizations() {
-      return auths;
-    }
-
-    SortedKeyValueIterator<Key,Value> getTopLevelIterator(SortedKeyValueIterator<Key,Value> iter) {
-      if (topLevelIterators.isEmpty())
-        return iter;
-      ArrayList<SortedKeyValueIterator<Key,Value>> allIters = new ArrayList<>(topLevelIterators);
-      allIters.add(iter);
-      return new MultiIterator(allIters, false);
-    }
-
-    @Override
-    public boolean isSamplingEnabled() {
-      throw new UnsupportedOperationException();
-    }
-
-    @Override
-    public SamplerConfiguration getSamplerConfiguration() {
-      throw new UnsupportedOperationException();
-    }
-
-    @Override
-    public IteratorEnvironment cloneWithSamplingEnabled() {
-      throw new UnsupportedOperationException();
-    }
-  }
-
-  public SortedKeyValueIterator<Key,Value> createFilter(SortedKeyValueIterator<Key,Value> inner) throws IOException {
-    byte[] defaultLabels = {};
-    inner = new ColumnFamilySkippingIterator(new DeletingIterator(inner, false));
-    ColumnQualifierFilter cqf = new ColumnQualifierFilter(inner, new HashSet<>(fetchedColumns));
-    VisibilityFilter vf = new VisibilityFilter(cqf, auths, defaultLabels);
-    AccumuloConfiguration conf = new MockConfiguration(table.settings);
-    MockIteratorEnvironment iterEnv = new MockIteratorEnvironment(auths);
-    SortedKeyValueIterator<Key,Value> result = iterEnv.getTopLevelIterator(IteratorUtil.loadIterators(IteratorScope.scan, vf, null, conf,
-        serverSideIteratorList, serverSideIteratorOptions, iterEnv, false));
-    return result;
-  }
-
-  @Override
-  public Iterator<Entry<Key,Value>> iterator() {
-    throw new UnsupportedOperationException();
-  }
-
-  @Override
-  public Authorizations getAuthorizations() {
-    return auths;
-  }
-
-  @Override
-  public void setClassLoaderContext(String context) {
-    throw new UnsupportedOperationException();
-  }
-}
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mock/MockSecurityOperations.java b/core/src/main/java/org/apache/accumulo/core/client/mock/MockSecurityOperations.java
deleted file mode 100644
index bf4b46e..0000000
--- a/core/src/main/java/org/apache/accumulo/core/client/mock/MockSecurityOperations.java
+++ /dev/null
@@ -1,236 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.core.client.mock;
-
-import java.util.EnumSet;
-import java.util.Set;
-
-import org.apache.accumulo.core.client.AccumuloException;
-import org.apache.accumulo.core.client.AccumuloSecurityException;
-import org.apache.accumulo.core.client.admin.DelegationTokenConfig;
-import org.apache.accumulo.core.client.admin.SecurityOperations;
-import org.apache.accumulo.core.client.impl.thrift.SecurityErrorCode;
-import org.apache.accumulo.core.client.security.tokens.AuthenticationToken;
-import org.apache.accumulo.core.client.security.tokens.DelegationToken;
-import org.apache.accumulo.core.client.security.tokens.PasswordToken;
-import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.core.security.NamespacePermission;
-import org.apache.accumulo.core.security.SystemPermission;
-import org.apache.accumulo.core.security.TablePermission;
-
-/**
- * @deprecated since 1.8.0; use MiniAccumuloCluster or a standard mock framework instead.
- */
-@Deprecated
-class MockSecurityOperations implements SecurityOperations {
-
-  final private MockAccumulo acu;
-
-  MockSecurityOperations(MockAccumulo acu) {
-    this.acu = acu;
-  }
-
-  @Deprecated
-  @Override
-  public void createUser(String user, byte[] password, Authorizations authorizations) throws AccumuloException, AccumuloSecurityException {
-    createLocalUser(user, new PasswordToken(password));
-    changeUserAuthorizations(user, authorizations);
-  }
-
-  @Override
-  public void createLocalUser(String principal, PasswordToken password) throws AccumuloException, AccumuloSecurityException {
-    this.acu.users.put(principal, new MockUser(principal, password, new Authorizations()));
-  }
-
-  @Deprecated
-  @Override
-  public void dropUser(String user) throws AccumuloException, AccumuloSecurityException {
-    dropLocalUser(user);
-  }
-
-  @Override
-  public void dropLocalUser(String principal) throws AccumuloException, AccumuloSecurityException {
-    this.acu.users.remove(principal);
-  }
-
-  @Deprecated
-  @Override
-  public boolean authenticateUser(String user, byte[] password) throws AccumuloException, AccumuloSecurityException {
-    return authenticateUser(user, new PasswordToken(password));
-  }
-
-  @Override
-  public boolean authenticateUser(String principal, AuthenticationToken token) throws AccumuloException, AccumuloSecurityException {
-    MockUser user = acu.users.get(principal);
-    if (user == null)
-      return false;
-    return user.token.equals(token);
-  }
-
-  @Deprecated
-  @Override
-  public void changeUserPassword(String user, byte[] password) throws AccumuloException, AccumuloSecurityException {
-    changeLocalUserPassword(user, new PasswordToken(password));
-  }
-
-  @Override
-  public void changeLocalUserPassword(String principal, PasswordToken token) throws AccumuloException, AccumuloSecurityException {
-    MockUser user = acu.users.get(principal);
-    if (user != null)
-      user.token = token.clone();
-    else
-      throw new AccumuloSecurityException(principal, SecurityErrorCode.USER_DOESNT_EXIST);
-  }
-
-  @Override
-  public void changeUserAuthorizations(String principal, Authorizations authorizations) throws AccumuloException, AccumuloSecurityException {
-    MockUser user = acu.users.get(principal);
-    if (user != null)
-      user.authorizations = authorizations;
-    else
-      throw new AccumuloSecurityException(principal, SecurityErrorCode.USER_DOESNT_EXIST);
-  }
-
-  @Override
-  public Authorizations getUserAuthorizations(String principal) throws AccumuloException, AccumuloSecurityException {
-    MockUser user = acu.users.get(principal);
-    if (user != null)
-      return user.authorizations;
-    else
-      throw new AccumuloSecurityException(principal, SecurityErrorCode.USER_DOESNT_EXIST);
-  }
-
-  @Override
-  public boolean hasSystemPermission(String principal, SystemPermission perm) throws AccumuloException, AccumuloSecurityException {
-    MockUser user = acu.users.get(principal);
-    if (user != null)
-      return user.permissions.contains(perm);
-    else
-      throw new AccumuloSecurityException(principal, SecurityErrorCode.USER_DOESNT_EXIST);
-  }
-
-  @Override
-  public boolean hasTablePermission(String principal, String tableName, TablePermission perm) throws AccumuloException, AccumuloSecurityException {
-    MockTable table = acu.tables.get(tableName);
-    if (table == null)
-      throw new AccumuloSecurityException(tableName, SecurityErrorCode.TABLE_DOESNT_EXIST);
-    EnumSet<TablePermission> perms = table.userPermissions.get(principal);
-    if (perms == null)
-      return false;
-    return perms.contains(perm);
-  }
-
-  @Override
-  public boolean hasNamespacePermission(String principal, String namespace, NamespacePermission permission) throws AccumuloException, AccumuloSecurityException {
-    MockNamespace mockNamespace = acu.namespaces.get(namespace);
-    if (mockNamespace == null)
-      throw new AccumuloSecurityException(namespace, SecurityErrorCode.NAMESPACE_DOESNT_EXIST);
-    EnumSet<NamespacePermission> perms = mockNamespace.userPermissions.get(principal);
-    if (perms == null)
-      return false;
-    return perms.contains(permission);
-  }
-
-  @Override
-  public void grantSystemPermission(String principal, SystemPermission permission) throws AccumuloException, AccumuloSecurityException {
-    MockUser user = acu.users.get(principal);
-    if (user != null)
-      user.permissions.add(permission);
-    else
-      throw new AccumuloSecurityException(principal, SecurityErrorCode.USER_DOESNT_EXIST);
-  }
-
-  @Override
-  public void grantTablePermission(String principal, String tableName, TablePermission permission) throws AccumuloException, AccumuloSecurityException {
-    if (acu.users.get(principal) == null)
-      throw new AccumuloSecurityException(principal, SecurityErrorCode.USER_DOESNT_EXIST);
-    MockTable table = acu.tables.get(tableName);
-    if (table == null)
-      throw new AccumuloSecurityException(tableName, SecurityErrorCode.TABLE_DOESNT_EXIST);
-    EnumSet<TablePermission> perms = table.userPermissions.get(principal);
-    if (perms == null)
-      table.userPermissions.put(principal, EnumSet.of(permission));
-    else
-      perms.add(permission);
-  }
-
-  @Override
-  public void grantNamespacePermission(String principal, String namespace, NamespacePermission permission) throws AccumuloException, AccumuloSecurityException {
-    if (acu.users.get(principal) == null)
-      throw new AccumuloSecurityException(principal, SecurityErrorCode.USER_DOESNT_EXIST);
-    MockNamespace mockNamespace = acu.namespaces.get(namespace);
-    if (mockNamespace == null)
-      throw new AccumuloSecurityException(namespace, SecurityErrorCode.NAMESPACE_DOESNT_EXIST);
-    EnumSet<NamespacePermission> perms = mockNamespace.userPermissions.get(principal);
-    if (perms == null)
-      mockNamespace.userPermissions.put(principal, EnumSet.of(permission));
-    else
-      perms.add(permission);
-  }
-
-  @Override
-  public void revokeSystemPermission(String principal, SystemPermission permission) throws AccumuloException, AccumuloSecurityException {
-    MockUser user = acu.users.get(principal);
-    if (user != null)
-      user.permissions.remove(permission);
-    else
-      throw new AccumuloSecurityException(principal, SecurityErrorCode.USER_DOESNT_EXIST);
-  }
-
-  @Override
-  public void revokeTablePermission(String principal, String tableName, TablePermission permission) throws AccumuloException, AccumuloSecurityException {
-    if (acu.users.get(principal) == null)
-      throw new AccumuloSecurityException(principal, SecurityErrorCode.USER_DOESNT_EXIST);
-    MockTable table = acu.tables.get(tableName);
-    if (table == null)
-      throw new AccumuloSecurityException(tableName, SecurityErrorCode.TABLE_DOESNT_EXIST);
-    EnumSet<TablePermission> perms = table.userPermissions.get(principal);
-    if (perms != null)
-      perms.remove(permission);
-
-  }
-
-  @Override
-  public void revokeNamespacePermission(String principal, String namespace, NamespacePermission permission) throws AccumuloException, AccumuloSecurityException {
-    if (acu.users.get(principal) == null)
-      throw new AccumuloSecurityException(principal, SecurityErrorCode.USER_DOESNT_EXIST);
-    MockNamespace mockNamespace = acu.namespaces.get(namespace);
-    if (mockNamespace == null)
-      throw new AccumuloSecurityException(namespace, SecurityErrorCode.NAMESPACE_DOESNT_EXIST);
-    EnumSet<NamespacePermission> perms = mockNamespace.userPermissions.get(principal);
-    if (perms != null)
-      perms.remove(permission);
-
-  }
-
-  @Deprecated
-  @Override
-  public Set<String> listUsers() throws AccumuloException, AccumuloSecurityException {
-    return listLocalUsers();
-  }
-
-  @Override
-  public Set<String> listLocalUsers() throws AccumuloException, AccumuloSecurityException {
-    return acu.users.keySet();
-  }
-
-  @Override
-  public DelegationToken getDelegationToken(DelegationTokenConfig cfg) throws AccumuloException, AccumuloSecurityException {
-    return null;
-  }
-
-}
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mock/MockTable.java b/core/src/main/java/org/apache/accumulo/core/client/mock/MockTable.java
deleted file mode 100644
index 1445650..0000000
--- a/core/src/main/java/org/apache/accumulo/core/client/mock/MockTable.java
+++ /dev/null
@@ -1,212 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.core.client.mock;
-
-import java.util.Collection;
-import java.util.EnumSet;
-import java.util.HashMap;
-import java.util.Iterator;
-import java.util.Map;
-import java.util.Map.Entry;
-import java.util.Set;
-import java.util.SortedMap;
-import java.util.SortedSet;
-import java.util.TreeMap;
-import java.util.concurrent.ConcurrentSkipListMap;
-import java.util.concurrent.ConcurrentSkipListSet;
-
-import org.apache.accumulo.core.client.admin.TimeType;
-import org.apache.accumulo.core.conf.AccumuloConfiguration;
-import org.apache.accumulo.core.conf.Property;
-import org.apache.accumulo.core.data.ColumnUpdate;
-import org.apache.accumulo.core.data.Key;
-import org.apache.accumulo.core.data.Mutation;
-import org.apache.accumulo.core.data.Value;
-import org.apache.accumulo.core.iterators.IteratorUtil;
-import org.apache.accumulo.core.security.TablePermission;
-import org.apache.hadoop.io.Text;
-
-/**
- * @deprecated since 1.8.0; use MiniAccumuloCluster or a standard mock framework instead.
- */
-@Deprecated
-public class MockTable {
-
-  static class MockMemKey extends Key {
-    private int count;
-
-    MockMemKey(Key key, int count) {
-      super(key);
-      this.count = count;
-    }
-
-    @Override
-    public int hashCode() {
-      return super.hashCode() + count;
-    }
-
-    @Override
-    public boolean equals(Object other) {
-      return (other instanceof MockMemKey) && super.equals(other) && count == ((MockMemKey) other).count;
-    }
-
-    @Override
-    public String toString() {
-      return super.toString() + " count=" + count;
-    }
-
-    @Override
-    public int compareTo(Key o) {
-      int compare = super.compareTo(o);
-      if (compare != 0)
-        return compare;
-      if (o instanceof MockMemKey) {
-        MockMemKey other = (MockMemKey) o;
-        if (count < other.count)
-          return 1;
-        if (count > other.count)
-          return -1;
-      } else {
-        return 1;
-      }
-      return 0;
-    }
-  }
-
-  final SortedMap<Key,Value> table = new ConcurrentSkipListMap<>();
-  int mutationCount = 0;
-  final Map<String,String> settings;
-  Map<String,EnumSet<TablePermission>> userPermissions = new HashMap<>();
-  private TimeType timeType;
-  SortedSet<Text> splits = new ConcurrentSkipListSet<>();
-  Map<String,Set<Text>> localityGroups = new TreeMap<>();
-  private MockNamespace namespace;
-  private String namespaceName;
-  private String tableId;
-
-  MockTable(boolean limitVersion, TimeType timeType, String tableId) {
-    this.timeType = timeType;
-    this.tableId = tableId;
-    settings = IteratorUtil.generateInitialTableProperties(limitVersion);
-    for (Entry<String,String> entry : AccumuloConfiguration.getDefaultConfiguration()) {
-      String key = entry.getKey();
-      if (key.startsWith(Property.TABLE_PREFIX.getKey()))
-        settings.put(key, entry.getValue());
-    }
-  }
-
-  MockTable(MockNamespace namespace, boolean limitVersion, TimeType timeType, String tableId, Map<String,String> properties) {
-    this(limitVersion, timeType, tableId);
-    Set<Entry<String,String>> set = namespace.settings.entrySet();
-    Iterator<Entry<String,String>> entries = set.iterator();
-    while (entries.hasNext()) {
-      Entry<String,String> entry = entries.next();
-      String key = entry.getKey();
-      if (key.startsWith(Property.TABLE_PREFIX.getKey()))
-        settings.put(key, entry.getValue());
-    }
-
-    for (Entry<String,String> initialProp : properties.entrySet()) {
-      settings.put(initialProp.getKey(), initialProp.getValue());
-    }
-  }
-
-  public MockTable(MockNamespace namespace, TimeType timeType, String tableId, Map<String,String> properties) {
-    this.timeType = timeType;
-    this.tableId = tableId;
-    settings = properties;
-    for (Entry<String,String> entry : AccumuloConfiguration.getDefaultConfiguration()) {
-      String key = entry.getKey();
-      if (key.startsWith(Property.TABLE_PREFIX.getKey()))
-        settings.put(key, entry.getValue());
-    }
-
-    Set<Entry<String,String>> set = namespace.settings.entrySet();
-    Iterator<Entry<String,String>> entries = set.iterator();
-    while (entries.hasNext()) {
-      Entry<String,String> entry = entries.next();
-      String key = entry.getKey();
-      if (key.startsWith(Property.TABLE_PREFIX.getKey()))
-        settings.put(key, entry.getValue());
-    }
-  }
-
-  synchronized void addMutation(Mutation m) {
-    if (m.size() == 0)
-      throw new IllegalArgumentException("Can not add empty mutations");
-    long now = System.currentTimeMillis();
-    mutationCount++;
-    for (ColumnUpdate u : m.getUpdates()) {
-      Key key = new Key(m.getRow(), 0, m.getRow().length, u.getColumnFamily(), 0, u.getColumnFamily().length, u.getColumnQualifier(), 0,
-          u.getColumnQualifier().length, u.getColumnVisibility(), 0, u.getColumnVisibility().length, u.getTimestamp());
-      if (u.isDeleted())
-        key.setDeleted(true);
-      if (!u.hasTimestamp())
-        if (timeType.equals(TimeType.LOGICAL))
-          key.setTimestamp(mutationCount);
-        else
-          key.setTimestamp(now);
-
-      table.put(new MockMemKey(key, mutationCount), new Value(u.getValue()));
-    }
-  }
-
-  public void addSplits(SortedSet<Text> partitionKeys) {
-    splits.addAll(partitionKeys);
-  }
-
-  public Collection<Text> getSplits() {
-    return splits;
-  }
-
-  public void setLocalityGroups(Map<String,Set<Text>> groups) {
-    localityGroups = groups;
-  }
-
-  public Map<String,Set<Text>> getLocalityGroups() {
-    return localityGroups;
-  }
-
-  public void merge(Text start, Text end) {
-    boolean reAdd = false;
-    if (splits.contains(start))
-      reAdd = true;
-    splits.removeAll(splits.subSet(start, end));
-    if (reAdd)
-      splits.add(start);
-  }
-
-  public void setNamespaceName(String n) {
-    this.namespaceName = n;
-  }
-
-  public void setNamespace(MockNamespace n) {
-    this.namespace = n;
-  }
-
-  public String getNamespaceName() {
-    return this.namespaceName;
-  }
-
-  public MockNamespace getNamespace() {
-    return this.namespace;
-  }
-
-  public String getTableId() {
-    return this.tableId;
-  }
-}
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mock/MockTableOperations.java b/core/src/main/java/org/apache/accumulo/core/client/mock/MockTableOperations.java
deleted file mode 100644
index de89137..0000000
--- a/core/src/main/java/org/apache/accumulo/core/client/mock/MockTableOperations.java
+++ /dev/null
@@ -1,505 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.core.client.mock;
-
-import java.io.DataInputStream;
-import java.io.IOException;
-import java.util.ArrayList;
-import java.util.Collection;
-import java.util.Collections;
-import java.util.HashMap;
-import java.util.HashSet;
-import java.util.List;
-import java.util.Map;
-import java.util.Map.Entry;
-import java.util.Set;
-import java.util.SortedSet;
-import java.util.TreeSet;
-
-import org.apache.accumulo.core.client.AccumuloException;
-import org.apache.accumulo.core.client.AccumuloSecurityException;
-import org.apache.accumulo.core.client.IteratorSetting;
-import org.apache.accumulo.core.client.NamespaceNotFoundException;
-import org.apache.accumulo.core.client.TableExistsException;
-import org.apache.accumulo.core.client.TableNotFoundException;
-import org.apache.accumulo.core.client.admin.CompactionConfig;
-import org.apache.accumulo.core.client.admin.DiskUsage;
-import org.apache.accumulo.core.client.admin.FindMax;
-import org.apache.accumulo.core.client.admin.Locations;
-import org.apache.accumulo.core.client.admin.NewTableConfiguration;
-import org.apache.accumulo.core.client.admin.TimeType;
-import org.apache.accumulo.core.client.impl.TableOperationsHelper;
-import org.apache.accumulo.core.client.impl.Tables;
-import org.apache.accumulo.core.client.sample.SamplerConfiguration;
-import org.apache.accumulo.core.conf.AccumuloConfiguration;
-import org.apache.accumulo.core.data.Key;
-import org.apache.accumulo.core.data.Mutation;
-import org.apache.accumulo.core.data.Range;
-import org.apache.accumulo.core.data.Value;
-import org.apache.accumulo.core.file.FileOperations;
-import org.apache.accumulo.core.file.FileSKVIterator;
-import org.apache.accumulo.core.metadata.MetadataTable;
-import org.apache.accumulo.core.metadata.RootTable;
-import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.core.security.ColumnVisibility;
-import org.apache.accumulo.start.classloader.vfs.AccumuloVFSClassLoader;
-import org.apache.commons.lang.NotImplementedException;
-import org.apache.hadoop.fs.FSDataOutputStream;
-import org.apache.hadoop.fs.FileStatus;
-import org.apache.hadoop.fs.FileSystem;
-import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.io.Text;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-import static com.google.common.base.Preconditions.checkArgument;
-
-/**
- * @deprecated since 1.8.0; use MiniAccumuloCluster or a standard mock framework instead.
- */
-@Deprecated
-class MockTableOperations extends TableOperationsHelper {
-  private static final Logger log = LoggerFactory.getLogger(MockTableOperations.class);
-  private static final byte[] ZERO = {0};
-  private final MockAccumulo acu;
-  private final String username;
-
-  MockTableOperations(MockAccumulo acu, String username) {
-    this.acu = acu;
-    this.username = username;
-  }
-
-  @Override
-  public SortedSet<String> list() {
-    return new TreeSet<>(acu.tables.keySet());
-  }
-
-  @Override
-  public boolean exists(String tableName) {
-    return acu.tables.containsKey(tableName);
-  }
-
-  private boolean namespaceExists(String namespace) {
-    return acu.namespaces.containsKey(namespace);
-  }
-
-  @Override
-  public void create(String tableName) throws AccumuloException, AccumuloSecurityException, TableExistsException {
-    create(tableName, new NewTableConfiguration());
-  }
-
-  @Override
-  @Deprecated
-  public void create(String tableName, boolean versioningIter) throws AccumuloException, AccumuloSecurityException, TableExistsException {
-    create(tableName, versioningIter, TimeType.MILLIS);
-  }
-
-  @Override
-  @Deprecated
-  public void create(String tableName, boolean versioningIter, TimeType timeType) throws AccumuloException, AccumuloSecurityException, TableExistsException {
-    NewTableConfiguration ntc = new NewTableConfiguration().setTimeType(timeType);
-
-    if (versioningIter)
-      create(tableName, ntc);
-    else
-      create(tableName, ntc.withoutDefaultIterators());
-  }
-
-  @Override
-  public void create(String tableName, NewTableConfiguration ntc) throws AccumuloException, AccumuloSecurityException, TableExistsException {
-    String namespace = Tables.qualify(tableName).getFirst();
-
-    checkArgument(tableName.matches(Tables.VALID_NAME_REGEX));
-    if (exists(tableName))
-      throw new TableExistsException(tableName, tableName, "");
-    checkArgument(namespaceExists(namespace), "Namespace (" + namespace + ") does not exist, create it first");
-    acu.createTable(username, tableName, ntc.getTimeType(), ntc.getProperties());
-  }
-
-  @Override
-  public void addSplits(String tableName, SortedSet<Text> partitionKeys) throws TableNotFoundException, AccumuloException, AccumuloSecurityException {
-    if (!exists(tableName))
-      throw new TableNotFoundException(tableName, tableName, "");
-    acu.addSplits(tableName, partitionKeys);
-  }
-
-  @Deprecated
-  @Override
-  public Collection<Text> getSplits(String tableName) throws TableNotFoundException {
-    return listSplits(tableName);
-  }
-
-  @Deprecated
-  @Override
-  public Collection<Text> getSplits(String tableName, int maxSplits) throws TableNotFoundException {
-    return listSplits(tableName);
-  }
-
-  @Override
-  public Collection<Text> listSplits(String tableName) throws TableNotFoundException {
-    if (!exists(tableName))
-      throw new TableNotFoundException(tableName, tableName, "");
-    return acu.getSplits(tableName);
-  }
-
-  @Override
-  public Collection<Text> listSplits(String tableName, int maxSplits) throws TableNotFoundException {
-    return listSplits(tableName);
-  }
-
-  @Override
-  public void delete(String tableName) throws AccumuloException, AccumuloSecurityException, TableNotFoundException {
-    if (!exists(tableName))
-      throw new TableNotFoundException(tableName, tableName, "");
-    acu.tables.remove(tableName);
-  }
-
-  @Override
-  public void rename(String oldTableName, String newTableName) throws AccumuloSecurityException, TableNotFoundException, AccumuloException,
-      TableExistsException {
-    if (!exists(oldTableName))
-      throw new TableNotFoundException(oldTableName, oldTableName, "");
-    if (exists(newTableName))
-      throw new TableExistsException(newTableName, newTableName, "");
-    MockTable t = acu.tables.remove(oldTableName);
-    String namespace = Tables.qualify(newTableName).getFirst();
-    MockNamespace n = acu.namespaces.get(namespace);
-    if (n == null) {
-      n = new MockNamespace();
-    }
-    t.setNamespaceName(namespace);
-    t.setNamespace(n);
-    acu.namespaces.put(namespace, n);
-    acu.tables.put(newTableName, t);
-  }
-
-  @Deprecated
-  @Override
-  public void flush(String tableName) throws AccumuloException, AccumuloSecurityException {}
-
-  @Override
-  public void setProperty(String tableName, String property, String value) throws AccumuloException, AccumuloSecurityException {
-    acu.tables.get(tableName).settings.put(property, value);
-  }
-
-  @Override
-  public void removeProperty(String tableName, String property) throws AccumuloException, AccumuloSecurityException {
-    acu.tables.get(tableName).settings.remove(property);
-  }
-
-  @Override
-  public Iterable<Entry<String,String>> getProperties(String tableName) throws TableNotFoundException {
-    String namespace = Tables.qualify(tableName).getFirst();
-    if (!exists(tableName)) {
-      if (!namespaceExists(namespace))
-        throw new TableNotFoundException(tableName, new NamespaceNotFoundException(null, namespace, null));
-      throw new TableNotFoundException(null, tableName, null);
-    }
-
-    Set<Entry<String,String>> props = new HashSet<>(acu.namespaces.get(namespace).settings.entrySet());
-
-    Set<Entry<String,String>> tableProps = acu.tables.get(tableName).settings.entrySet();
-    for (Entry<String,String> e : tableProps) {
-      if (props.contains(e)) {
-        props.remove(e);
-      }
-      props.add(e);
-    }
-    return props;
-  }
-
-  @Override
-  public void setLocalityGroups(String tableName, Map<String,Set<Text>> groups) throws AccumuloException, AccumuloSecurityException, TableNotFoundException {
-    if (!exists(tableName))
-      throw new TableNotFoundException(tableName, tableName, "");
-    acu.tables.get(tableName).setLocalityGroups(groups);
-  }
-
-  @Override
-  public Map<String,Set<Text>> getLocalityGroups(String tableName) throws AccumuloException, TableNotFoundException {
-    if (!exists(tableName))
-      throw new TableNotFoundException(tableName, tableName, "");
-    return acu.tables.get(tableName).getLocalityGroups();
-  }
-
-  @Override
-  public Set<Range> splitRangeByTablets(String tableName, Range range, int maxSplits) throws AccumuloException, AccumuloSecurityException,
-      TableNotFoundException {
-    if (!exists(tableName))
-      throw new TableNotFoundException(tableName, tableName, "");
-    return Collections.singleton(range);
-  }
-
-  @Override
-  public void importDirectory(String tableName, String dir, String failureDir, boolean setTime) throws IOException, AccumuloException,
-      AccumuloSecurityException, TableNotFoundException {
-    long time = System.currentTimeMillis();
-    MockTable table = acu.tables.get(tableName);
-    if (table == null) {
-      throw new TableNotFoundException(null, tableName, "The table was not found");
-    }
-    Path importPath = new Path(dir);
-    Path failurePath = new Path(failureDir);
-
-    FileSystem fs = acu.getFileSystem();
-    /*
-     * check preconditions
-     */
-    // directories are directories
-    if (fs.isFile(importPath)) {
-      throw new IOException("Import path must be a directory.");
-    }
-    if (fs.isFile(failurePath)) {
-      throw new IOException("Failure path must be a directory.");
-    }
-    // failures are writable
-    Path createPath = failurePath.suffix("/.createFile");
-    FSDataOutputStream createStream = null;
-    try {
-      createStream = fs.create(createPath);
-    } catch (IOException e) {
-      throw new IOException("Error path is not writable.");
-    } finally {
-      if (createStream != null) {
-        createStream.close();
-      }
-    }
-    fs.delete(createPath, false);
-    // failures are empty
-    FileStatus[] failureChildStats = fs.listStatus(failurePath);
-    if (failureChildStats.length > 0) {
-      throw new IOException("Error path must be empty.");
-    }
-    /*
-     * Begin the import - iterate the files in the path
-     */
-    for (FileStatus importStatus : fs.listStatus(importPath)) {
-      try {
-        FileSKVIterator importIterator = FileOperations.getInstance().newReaderBuilder().forFile(importStatus.getPath().toString(), fs, fs.getConf())
-            .withTableConfiguration(AccumuloConfiguration.getDefaultConfiguration()).seekToBeginning().build();
-        while (importIterator.hasTop()) {
-          Key key = importIterator.getTopKey();
-          Value value = importIterator.getTopValue();
-          if (setTime) {
-            key.setTimestamp(time);
-          }
-          Mutation mutation = new Mutation(key.getRow());
-          if (!key.isDeleted()) {
-            mutation.put(key.getColumnFamily(), key.getColumnQualifier(), new ColumnVisibility(key.getColumnVisibilityData().toArray()), key.getTimestamp(),
-                value);
-          } else {
-            mutation.putDelete(key.getColumnFamily(), key.getColumnQualifier(), new ColumnVisibility(key.getColumnVisibilityData().toArray()),
-                key.getTimestamp());
-          }
-          table.addMutation(mutation);
-          importIterator.next();
-        }
-      } catch (Exception e) {
-        FSDataOutputStream failureWriter = null;
-        DataInputStream failureReader = null;
-        try {
-          failureWriter = fs.create(failurePath.suffix("/" + importStatus.getPath().getName()));
-          failureReader = fs.open(importStatus.getPath());
-          int read = 0;
-          byte[] buffer = new byte[1024];
-          while (-1 != (read = failureReader.read(buffer))) {
-            failureWriter.write(buffer, 0, read);
-          }
-        } finally {
-          if (failureReader != null)
-            failureReader.close();
-          if (failureWriter != null)
-            failureWriter.close();
-        }
-      }
-      fs.delete(importStatus.getPath(), true);
-    }
-  }
-
-  @Override
-  public void offline(String tableName) throws AccumuloSecurityException, AccumuloException, TableNotFoundException {
-    offline(tableName, false);
-  }
-
-  @Override
-  public void offline(String tableName, boolean wait) throws AccumuloSecurityException, AccumuloException, TableNotFoundException {
-    if (!exists(tableName))
-      throw new TableNotFoundException(tableName, tableName, "");
-  }
-
-  @Override
-  public void online(String tableName) throws AccumuloSecurityException, AccumuloException, TableNotFoundException {
-    online(tableName, false);
-  }
-
-  @Override
-  public void online(String tableName, boolean wait) throws AccumuloSecurityException, AccumuloException, TableNotFoundException {
-    if (!exists(tableName))
-      throw new TableNotFoundException(tableName, tableName, "");
-  }
-
-  @Override
-  public void clearLocatorCache(String tableName) throws TableNotFoundException {
-    if (!exists(tableName))
-      throw new TableNotFoundException(tableName, tableName, "");
-  }
-
-  @Override
-  public Map<String,String> tableIdMap() {
-    Map<String,String> result = new HashMap<>();
-    for (Entry<String,MockTable> entry : acu.tables.entrySet()) {
-      String table = entry.getKey();
-      if (RootTable.NAME.equals(table))
-        result.put(table, RootTable.ID);
-      else if (MetadataTable.NAME.equals(table))
-        result.put(table, MetadataTable.ID);
-      else
-        result.put(table, entry.getValue().getTableId());
-    }
-    return result;
-  }
-
-  @Override
-  public List<DiskUsage> getDiskUsage(Set<String> tables) throws AccumuloException, AccumuloSecurityException {
-
-    List<DiskUsage> diskUsages = new ArrayList<>();
-    diskUsages.add(new DiskUsage(new TreeSet<>(tables), 0l));
-
-    return diskUsages;
-  }
-
-  @Override
-  public void merge(String tableName, Text start, Text end) throws AccumuloException, AccumuloSecurityException, TableNotFoundException {
-    if (!exists(tableName))
-      throw new TableNotFoundException(tableName, tableName, "");
-    acu.merge(tableName, start, end);
-  }
-
-  @Override
-  public void deleteRows(String tableName, Text start, Text end) throws AccumuloException, AccumuloSecurityException, TableNotFoundException {
-    if (!exists(tableName))
-      throw new TableNotFoundException(tableName, tableName, "");
-    MockTable t = acu.tables.get(tableName);
-    Text startText = start != null ? new Text(start) : new Text();
-    if (startText.getLength() == 0 && end == null) {
-      t.table.clear();
-      return;
-    }
-    Text endText = end != null ? new Text(end) : new Text(t.table.lastKey().getRow().getBytes());
-    startText.append(ZERO, 0, 1);
-    endText.append(ZERO, 0, 1);
-    Set<Key> keep = new TreeSet<>(t.table.subMap(new Key(startText), new Key(endText)).keySet());
-    t.table.keySet().removeAll(keep);
-  }
-
-  @Override
-  public void compact(String tableName, Text start, Text end, boolean flush, boolean wait) throws AccumuloSecurityException, TableNotFoundException,
-      AccumuloException {
-    if (!exists(tableName))
-      throw new TableNotFoundException(tableName, tableName, "");
-  }
-
-  @Override
-  public void compact(String tableName, Text start, Text end, List<IteratorSetting> iterators, boolean flush, boolean wait) throws AccumuloSecurityException,
-      TableNotFoundException, AccumuloException {
-    if (!exists(tableName))
-      throw new TableNotFoundException(tableName, tableName, "");
-
-    if (iterators != null && iterators.size() > 0)
-      throw new UnsupportedOperationException();
-  }
-
-  @Override
-  public void compact(String tableName, CompactionConfig config) throws AccumuloSecurityException, TableNotFoundException, AccumuloException {
-    if (!exists(tableName))
-      throw new TableNotFoundException(tableName, tableName, "");
-
-    if (config.getIterators().size() > 0 || config.getCompactionStrategy() != null)
-      throw new UnsupportedOperationException("Mock does not support iterators or compaction strategies for compactions");
-  }
-
-  @Override
-  public void cancelCompaction(String tableName) throws AccumuloSecurityException, TableNotFoundException, AccumuloException {
-    if (!exists(tableName))
-      throw new TableNotFoundException(tableName, tableName, "");
-  }
-
-  @Override
-  public void clone(String srcTableName, String newTableName, boolean flush, Map<String,String> propertiesToSet, Set<String> propertiesToExclude)
-      throws AccumuloException, AccumuloSecurityException, TableNotFoundException, TableExistsException {
-    throw new NotImplementedException();
-  }
-
-  @Override
-  public void flush(String tableName, Text start, Text end, boolean wait) throws AccumuloException, AccumuloSecurityException, TableNotFoundException {
-    if (!exists(tableName))
-      throw new TableNotFoundException(tableName, tableName, "");
-  }
-
-  @Override
-  public Text getMaxRow(String tableName, Authorizations auths, Text startRow, boolean startInclusive, Text endRow, boolean endInclusive)
-      throws TableNotFoundException, AccumuloException, AccumuloSecurityException {
-    MockTable table = acu.tables.get(tableName);
-    if (table == null)
-      throw new TableNotFoundException(tableName, tableName, "no such table");
-
-    return FindMax.findMax(new MockScanner(table, auths), startRow, startInclusive, endRow, endInclusive);
-  }
-
-  @Override
-  public void importTable(String tableName, String exportDir) throws TableExistsException, AccumuloException, AccumuloSecurityException {
-    throw new NotImplementedException();
-  }
-
-  @Override
-  public void exportTable(String tableName, String exportDir) throws TableNotFoundException, AccumuloException, AccumuloSecurityException {
-    throw new NotImplementedException();
-  }
-
-  @Override
-  public boolean testClassLoad(String tableName, String className, String asTypeName) throws AccumuloException, AccumuloSecurityException,
-      TableNotFoundException {
-
-    try {
-      AccumuloVFSClassLoader.loadClass(className, Class.forName(asTypeName));
-    } catch (ClassNotFoundException e) {
-      log.warn("Could not load class '" + className + "' with type name '" + asTypeName + "' in testClassLoad().", e);
-      return false;
-    }
-    return true;
-  }
-
-  @Override
-  public void setSamplerConfiguration(String tableName, SamplerConfiguration samplerConfiguration) throws TableNotFoundException, AccumuloException,
-      AccumuloSecurityException {
-    throw new UnsupportedOperationException();
-  }
-
-  @Override
-  public void clearSamplerConfiguration(String tableName) throws TableNotFoundException, AccumuloException, AccumuloSecurityException {
-    throw new UnsupportedOperationException();
-  }
-
-  @Override
-  public SamplerConfiguration getSamplerConfiguration(String tableName) throws TableNotFoundException, AccumuloException, AccumuloSecurityException {
-    throw new UnsupportedOperationException();
-  }
-
-  @Override
-  public Locations locate(String tableName, Collection<Range> ranges) throws AccumuloException, AccumuloSecurityException, TableNotFoundException {
-    throw new UnsupportedOperationException();
-  }
-}
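The `deleteRows` implementation removed above uses a subtle trick: it appends a single `0x00` byte to both the start and end row before taking a `subMap`, which turns `deleteRows(start, end)` into a delete over the half-open interval `(start, end]`. A minimal sketch of that bound-shifting idea, using plain `String` keys and a stdlib `TreeMap` instead of Accumulo's `Key` (the class and method names here are illustrative, not part of any Accumulo API):

```java
import java.util.NavigableMap;
import java.util.TreeMap;

public class RowRangeDelete {
  // Appending a NUL byte moves a bound just past the exact key:
  // subMap's fromKey is inclusive but nothing equals start + "\0",
  // so the start row survives; toKey is exclusive but end < end + "\0",
  // so the end row is deleted. Net effect: delete rows in (start, end].
  static void deleteRows(NavigableMap<String, String> table, String start, String end) {
    table.subMap(start + "\0", end + "\0").clear();
  }

  public static void main(String[] args) {
    NavigableMap<String, String> table = new TreeMap<>();
    table.put("a", "1");
    table.put("b", "2");
    table.put("c", "3");
    deleteRows(table, "a", "b"); // deletes (a, b] -> removes only "b"
    System.out.println(table.keySet()); // [a, c]
  }
}
```

This assumes row keys never contain a NUL byte themselves, which holds for the sketch but is exactly why the real code appends a raw `{0}` byte array to a `Text` rather than string-concatenating.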
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mock/MockUser.java b/core/src/main/java/org/apache/accumulo/core/client/mock/MockUser.java
deleted file mode 100644
index e32edad..0000000
--- a/core/src/main/java/org/apache/accumulo/core/client/mock/MockUser.java
+++ /dev/null
@@ -1,41 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.core.client.mock;
-
-import java.util.EnumSet;
-
-import org.apache.accumulo.core.client.security.tokens.AuthenticationToken;
-import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.core.security.SystemPermission;
-
-/**
- * @deprecated since 1.8.0; use MiniAccumuloCluster or a standard mock framework instead.
- */
-@Deprecated
-public class MockUser {
-  final EnumSet<SystemPermission> permissions;
-  final String name;
-  AuthenticationToken token;
-  Authorizations authorizations;
-
-  MockUser(String principal, AuthenticationToken token, Authorizations auths) {
-    this.name = principal;
-    this.token = token.clone();
-    this.authorizations = auths;
-    this.permissions = EnumSet.noneOf(SystemPermission.class);
-  }
-}
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mock/impl/MockTabletLocator.java b/core/src/main/java/org/apache/accumulo/core/client/mock/impl/MockTabletLocator.java
deleted file mode 100644
index a52af79..0000000
--- a/core/src/main/java/org/apache/accumulo/core/client/mock/impl/MockTabletLocator.java
+++ /dev/null
@@ -1,75 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.core.client.mock.impl;
-
-import java.util.Collection;
-import java.util.Collections;
-import java.util.List;
-import java.util.Map;
-
-import org.apache.accumulo.core.client.AccumuloException;
-import org.apache.accumulo.core.client.AccumuloSecurityException;
-import org.apache.accumulo.core.client.Instance;
-import org.apache.accumulo.core.client.TableNotFoundException;
-import org.apache.accumulo.core.client.impl.ClientContext;
-import org.apache.accumulo.core.client.impl.TabletLocator;
-import org.apache.accumulo.core.data.Mutation;
-import org.apache.accumulo.core.data.Range;
-import org.apache.accumulo.core.data.impl.KeyExtent;
-import org.apache.hadoop.io.Text;
-
-/**
- * @deprecated since 1.8.0; use MiniAccumuloCluster or a standard mock framework instead.
- */
-@Deprecated
-public class MockTabletLocator extends TabletLocator {
-  public MockTabletLocator() {}
-
-  @Override
-  public TabletLocation locateTablet(ClientContext context, Text row, boolean skipRow, boolean retry) throws AccumuloException, AccumuloSecurityException,
-      TableNotFoundException {
-    throw new UnsupportedOperationException();
-  }
-
-  @Override
-  public <T extends Mutation> void binMutations(ClientContext context, List<T> mutations, Map<String,TabletServerMutations<T>> binnedMutations, List<T> failures)
-      throws AccumuloException, AccumuloSecurityException, TableNotFoundException {
-    TabletServerMutations<T> tsm = new TabletServerMutations<>("5");
-    for (T m : mutations)
-      tsm.addMutation(new KeyExtent(), m);
-    binnedMutations.put("", tsm);
-  }
-
-  @Override
-  public List<Range> binRanges(ClientContext context, List<Range> ranges, Map<String,Map<KeyExtent,List<Range>>> binnedRanges) throws AccumuloException,
-      AccumuloSecurityException, TableNotFoundException {
-    binnedRanges.put("", Collections.singletonMap(new KeyExtent("", null, null), ranges));
-    return Collections.emptyList();
-  }
-
-  @Override
-  public void invalidateCache(KeyExtent failedExtent) {}
-
-  @Override
-  public void invalidateCache(Collection<KeyExtent> keySet) {}
-
-  @Override
-  public void invalidateCache() {}
-
-  @Override
-  public void invalidateCache(Instance instance, String server) {}
-}
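The removed `MockTabletLocator.binMutations` assigned every mutation to one hard-coded fake server; a real locator partitions mutations by the tablet server that hosts each row. The grouping shape it fills into `binnedMutations` can be sketched with stdlib collections (the `serverFor` function here is a hypothetical stand-in for actual tablet location lookup):

```java
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

public class MutationBinning {
  // Group mutations by destination server, producing the
  // Map<server, List<mutation>> shape binMutations populates.
  static <T> Map<String, List<T>> bin(List<T> mutations, Function<T, String> serverFor) {
    return mutations.stream().collect(Collectors.groupingBy(serverFor));
  }

  public static void main(String[] args) {
    Map<String, List<String>> binned =
        bin(List.of("m1", "m2", "m3"), m -> m.equals("m3") ? "tserver2" : "tserver1");
    System.out.println(binned); // {tserver1=[m1, m2], tserver2=[m3]}
  }
}
```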
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mock/package-info.java b/core/src/main/java/org/apache/accumulo/core/client/mock/package-info.java
deleted file mode 100644
index cdd5593..0000000
--- a/core/src/main/java/org/apache/accumulo/core/client/mock/package-info.java
+++ /dev/null
@@ -1,25 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-/**
- * Mock framework for Accumulo
- *
- * <p>
- * Deprecated since 1.8.0; use MiniAccumuloCluster or a standard mock framework instead.
- */
-@Deprecated
-package org.apache.accumulo.core.client.mock;
-
diff --git a/core/src/main/java/org/apache/accumulo/core/client/rfile/RFileScanner.java b/core/src/main/java/org/apache/accumulo/core/client/rfile/RFileScanner.java
index 4dfba68..7b747d1 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/rfile/RFileScanner.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/rfile/RFileScanner.java
@@ -26,7 +26,6 @@
 import java.util.Map.Entry;
 import java.util.Set;
 import java.util.SortedSet;
-import java.util.concurrent.TimeUnit;
 
 import org.apache.accumulo.core.client.IteratorSetting;
 import org.apache.accumulo.core.client.Scanner;
@@ -156,24 +155,6 @@
     throw new UnsupportedOperationException();
   }
 
-  @Deprecated
-  @Override
-  public void setTimeOut(int timeOut) {
-    if (timeOut == Integer.MAX_VALUE)
-      setTimeout(Long.MAX_VALUE, TimeUnit.MILLISECONDS);
-    else
-      setTimeout(timeOut, TimeUnit.SECONDS);
-  }
-
-  @Deprecated
-  @Override
-  public int getTimeOut() {
-    long timeout = getTimeout(TimeUnit.SECONDS);
-    if (timeout >= Integer.MAX_VALUE)
-      return Integer.MAX_VALUE;
-    return (int) timeout;
-  }
-
   @Override
   public void setRange(Range range) {
     this.range = range;
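The deleted `setTimeOut`/`getTimeOut` pair bridged the deprecated int-seconds API onto the modern `setTimeout(long, TimeUnit)` form, treating `Integer.MAX_VALUE` as a "no timeout" sentinel that maps to `Long.MAX_VALUE`. A self-contained sketch of that bridging pattern (the `TimeoutBridge` class is illustrative; only the two legacy method bodies mirror the removed code):

```java
import java.util.concurrent.TimeUnit;

public class TimeoutBridge {
  private long timeoutMillis = Long.MAX_VALUE; // sentinel: no timeout

  // Modern API: caller states the unit; stored internally as millis.
  public void setTimeout(long timeout, TimeUnit unit) {
    this.timeoutMillis = unit.toMillis(timeout);
  }

  public long getTimeout(TimeUnit unit) {
    return unit.convert(timeoutMillis, TimeUnit.MILLISECONDS);
  }

  // Legacy int-seconds API, as in the removed RFileScanner methods:
  // Integer.MAX_VALUE means "forever" and maps to the long sentinel.
  public void setTimeOut(int timeOut) {
    if (timeOut == Integer.MAX_VALUE)
      setTimeout(Long.MAX_VALUE, TimeUnit.MILLISECONDS);
    else
      setTimeout(timeOut, TimeUnit.SECONDS);
  }

  public int getTimeOut() {
    long timeout = getTimeout(TimeUnit.SECONDS);
    return timeout >= Integer.MAX_VALUE ? Integer.MAX_VALUE : (int) timeout;
  }
}
```

The clamp in `getTimeOut` matters: converting `Long.MAX_VALUE` millis to seconds still overflows an `int`, so anything at or above `Integer.MAX_VALUE` seconds is reported as the sentinel.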
diff --git a/core/src/main/java/org/apache/accumulo/core/client/security/tokens/KerberosToken.java b/core/src/main/java/org/apache/accumulo/core/client/security/tokens/KerberosToken.java
index 1a4869d..26f2d02 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/security/tokens/KerberosToken.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/security/tokens/KerberosToken.java
@@ -69,33 +69,10 @@
    *          A keytab file containing the principal's credentials.
    */
   public KerberosToken(String principal, File keytab) throws IOException {
-    this(principal, keytab, false);
-  }
-
-  /**
-   * Creates a token and logs in via {@link UserGroupInformation} using the provided principal and keytab. A key for the principal must exist in the keytab,
-   * otherwise login will fail.
-   *
-   * @param principal
-   *          The Kerberos principal
-   * @param keytab
-   *          A keytab file
-   * @param replaceCurrentUser
-   *          Should the current Hadoop user be replaced with this user
-   * @deprecated since 1.8.0, @see #KerberosToken(String, File)
-   */
-  @Deprecated
-  public KerberosToken(String principal, File keytab, boolean replaceCurrentUser) throws IOException {
     requireNonNull(principal, "Principal was null");
     requireNonNull(keytab, "Keytab was null");
     checkArgument(keytab.exists() && keytab.isFile(), "Keytab was not a normal file");
-    UserGroupInformation ugi;
-    if (replaceCurrentUser) {
-      UserGroupInformation.loginUserFromKeytab(principal, keytab.getAbsolutePath());
-      ugi = UserGroupInformation.getCurrentUser();
-    } else {
-      ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(principal, keytab.getAbsolutePath());
-    }
+    UserGroupInformation ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(principal, keytab.getAbsolutePath());
     this.principal = ugi.getUserName();
     this.keytab = keytab;
   }
diff --git a/core/src/main/java/org/apache/accumulo/core/conf/AccumuloConfiguration.java b/core/src/main/java/org/apache/accumulo/core/conf/AccumuloConfiguration.java
index 23ad278..593f466 100644
--- a/core/src/main/java/org/apache/accumulo/core/conf/AccumuloConfiguration.java
+++ b/core/src/main/java/org/apache/accumulo/core/conf/AccumuloConfiguration.java
@@ -23,6 +23,7 @@
 import java.util.Objects;
 import java.util.TreeMap;
 import java.util.concurrent.TimeUnit;
+import java.util.function.Predicate;
 
 import org.apache.accumulo.core.Constants;
 import org.apache.accumulo.core.client.AccumuloException;
@@ -35,32 +36,12 @@
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-import com.google.common.base.Predicate;
-import com.google.common.base.Predicates;
-
 /**
  * A configuration object.
  */
 public abstract class AccumuloConfiguration implements Iterable<Entry<String,String>> {
 
   /**
-   * A filter for properties, based on key.
-   *
-   * @deprecated since 1.7.0; use {@link Predicate} instead.
-   */
-  @Deprecated
-  public interface PropertyFilter {
-    /**
-     * Determines whether to accept a property based on its key.
-     *
-     * @param key
-     *          property key
-     * @return true to accept property (pass filter)
-     */
-    boolean accept(String key);
-  }
-
-  /**
    * A filter that accepts properties whose keys are an exact match.
    */
   public static class MatchFilter implements Predicate<String> {
@@ -78,7 +59,7 @@
     }
 
     @Override
-    public boolean apply(String key) {
+    public boolean test(String key) {
       return Objects.equals(match, key);
     }
   }
@@ -101,7 +82,7 @@
     }
 
     @Override
-    public boolean apply(String key) {
+    public boolean test(String key) {
       return key.startsWith(prefix);
     }
   }
@@ -151,7 +132,7 @@
    */
   @Override
   public Iterator<Entry<String,String>> iterator() {
-    Predicate<String> all = Predicates.alwaysTrue();
+    Predicate<String> all = x -> true;
     TreeMap<String,String> entries = new TreeMap<>();
     getProperties(entries, all);
     return entries.entrySet().iterator();
diff --git a/core/src/main/java/org/apache/accumulo/core/conf/ConfigurationCopy.java b/core/src/main/java/org/apache/accumulo/core/conf/ConfigurationCopy.java
index 28b188f..cf3eb92 100644
--- a/core/src/main/java/org/apache/accumulo/core/conf/ConfigurationCopy.java
+++ b/core/src/main/java/org/apache/accumulo/core/conf/ConfigurationCopy.java
@@ -20,8 +20,7 @@
 import java.util.HashMap;
 import java.util.Map;
 import java.util.Map.Entry;
-
-import com.google.common.base.Predicate;
+import java.util.function.Predicate;
 
 /**
  * An {@link AccumuloConfiguration} which holds a flat copy of properties defined in another configuration
@@ -66,7 +65,7 @@
   @Override
   public void getProperties(Map<String,String> props, Predicate<String> filter) {
     for (Entry<String,String> entry : copy.entrySet()) {
-      if (filter.apply(entry.getKey())) {
+      if (filter.test(entry.getKey())) {
         props.put(entry.getKey(), entry.getValue());
       }
     }
diff --git a/core/src/main/java/org/apache/accumulo/core/conf/DefaultConfiguration.java b/core/src/main/java/org/apache/accumulo/core/conf/DefaultConfiguration.java
index e1ff7e1..9386f99 100644
--- a/core/src/main/java/org/apache/accumulo/core/conf/DefaultConfiguration.java
+++ b/core/src/main/java/org/apache/accumulo/core/conf/DefaultConfiguration.java
@@ -20,8 +20,7 @@
 import java.util.HashMap;
 import java.util.Map;
 import java.util.Map.Entry;
-
-import com.google.common.base.Predicate;
+import java.util.function.Predicate;
 
 /**
  * An {@link AccumuloConfiguration} that contains only default values for properties. This class is a singleton.
@@ -55,7 +54,7 @@
   @Override
   public void getProperties(Map<String,String> props, Predicate<String> filter) {
     for (Entry<String,String> entry : resolvedProps.entrySet())
-      if (filter.apply(entry.getKey()))
+      if (filter.test(entry.getKey()))
         props.put(entry.getKey(), entry.getValue());
   }
 }
diff --git a/core/src/main/java/org/apache/accumulo/core/conf/Property.java b/core/src/main/java/org/apache/accumulo/core/conf/Property.java
index c49457f..99a1c11 100644
--- a/core/src/main/java/org/apache/accumulo/core/conf/Property.java
+++ b/core/src/main/java/org/apache/accumulo/core/conf/Property.java
@@ -159,9 +159,6 @@
    */
   INSTANCE_RPC_SASL_ENABLED("instance.rpc.sasl.enabled", "false", PropertyType.BOOLEAN,
       "Configures Thrift RPCs to require SASL with GSSAPI which supports Kerberos authentication. Mutually exclusive with SSL RPC configuration."),
-  @Deprecated
-  INSTANCE_RPC_SASL_PROXYUSERS("instance.rpc.sasl.impersonation.", null, PropertyType.PREFIX,
-      "Prefix that allows configuration of users that are allowed to impersonate other users"),
   INSTANCE_RPC_SASL_ALLOWED_USER_IMPERSONATION("instance.rpc.sasl.allowed.user.impersonation", "", PropertyType.STRING,
       "One-line configuration property controlling what users are allowed to impersonate other users"),
   INSTANCE_RPC_SASL_ALLOWED_HOST_IMPERSONATION("instance.rpc.sasl.allowed.host.impersonation", "", PropertyType.STRING,
@@ -243,12 +240,6 @@
   TSERV_INDEXCACHE_SIZE("tserver.cache.index.size", "512M", PropertyType.MEMORY, "Specifies the size of the cache for file indices."),
   TSERV_PORTSEARCH("tserver.port.search", "false", PropertyType.BOOLEAN, "if the ports above are in use, search higher ports until one is available"),
   TSERV_CLIENTPORT("tserver.port.client", "9997", PropertyType.PORT, "The port used for handling client connections on the tablet servers"),
-  @Deprecated
-  TSERV_MUTATION_QUEUE_MAX("tserver.mutation.queue.max", "1M", PropertyType.MEMORY, "This setting is deprecated. See tserver.total.mutation.queue.max. "
-      + "The amount of memory to use to store write-ahead-log mutations-per-session before flushing them. Since the buffer is per write session, consider the"
-      + " max number of concurrent writer when configuring. When using Hadoop 2, Accumulo will call hsync() on the WAL . For a small number of "
-      + "concurrent writers, increasing this buffer size decreases the frequncy of hsync calls. For a large number of concurrent writers a small buffers "
-      + "size is ok because of group commit."),
   TSERV_TOTAL_MUTATION_QUEUE_MAX("tserver.total.mutation.queue.max", "50M", PropertyType.MEMORY,
       "The amount of memory used to store write-ahead-log mutations before flushing them."),
   TSERV_TABLET_SPLIT_FINDMIDPOINT_MAXOPEN("tserver.tablet.split.midpoint.files.max", "300", PropertyType.COUNT,
@@ -342,8 +333,6 @@
       "The number of threads for the distributed work queue. These threads are used for copying failed bulk files."),
   TSERV_WAL_SYNC("tserver.wal.sync", "true", PropertyType.BOOLEAN,
       "Use the SYNC_BLOCK create flag to sync WAL writes to disk. Prevents problems recovering from sudden system resets."),
-  @Deprecated
-  TSERV_WAL_SYNC_METHOD("tserver.wal.sync.method", "hsync", PropertyType.STRING, "This property is deprecated. Use table.durability instead."),
   TSERV_ASSIGNMENT_DURATION_WARNING("tserver.assignment.duration.warning", "10m", PropertyType.TIMEDURATION, "The amount of time an assignment can run "
       + " before the server will print a warning along with the current stack trace. Meant to help debug stuck assignments"),
   TSERV_REPLICATION_REPLAYERS("tserver.replication.replayer.", null, PropertyType.PREFIX,
@@ -463,8 +452,6 @@
       "Determines the max # of files each tablet in a table can have. When adjusting this property you may want to consider adjusting"
           + " table.compaction.major.ratio also. Setting this property to 0 will make it default to tserver.scan.files.open.max-1, this will prevent a"
           + " tablet from having more files than can be opened. Setting this property low may throttle ingest and increase query performance."),
-  @Deprecated
-  TABLE_WALOG_ENABLED("table.walog.enabled", "true", PropertyType.BOOLEAN, "This setting is deprecated.  Use table.durability=none instead."),
   TABLE_BLOOM_ENABLED("table.bloom.enabled", "false", PropertyType.BOOLEAN, "Use bloom filters on this table."),
   TABLE_BLOOM_LOAD_THRESHOLD("table.bloom.load.threshold", "1", PropertyType.COUNT,
       "This number of seeks that would actually use a bloom filter must occur before a file's bloom filter is loaded."
diff --git a/core/src/main/java/org/apache/accumulo/core/conf/PropertyType.java b/core/src/main/java/org/apache/accumulo/core/conf/PropertyType.java
index 1120b87..2c458f0 100644
--- a/core/src/main/java/org/apache/accumulo/core/conf/PropertyType.java
+++ b/core/src/main/java/org/apache/accumulo/core/conf/PropertyType.java
@@ -19,7 +19,8 @@
 import static java.util.Objects.requireNonNull;
 
 import java.util.Arrays;
-import java.util.List;
+import java.util.function.Function;
+import java.util.function.Predicate;
 import java.util.regex.Matcher;
 import java.util.regex.Pattern;
 
@@ -28,16 +29,11 @@
 import org.apache.commons.lang.math.IntRange;
 import org.apache.hadoop.fs.Path;
 
-import com.google.common.base.Function;
-import com.google.common.base.Predicate;
-import com.google.common.base.Predicates;
-import com.google.common.collect.Collections2;
-
 /**
  * Types of {@link Property} values. Each type has a short name, a description, and a regex which valid values match. All of these fields are optional.
  */
 public enum PropertyType {
-  PREFIX(null, Predicates.<String> alwaysFalse(), null),
+  PREFIX(null, x -> false, null),
 
   TIMEDURATION("duration", boundedUnits(0, Long.MAX_VALUE, true, "", "ms", "s", "m", "h", "d"),
       "A non-negative integer optionally followed by a unit of time (whitespace disallowed), as in 30s.\n"
@@ -59,7 +55,7 @@
           + "Examples of invalid host lists are '', ':1000', and 'localhost:80000'"),
 
   @SuppressWarnings("unchecked")
-  PORT("port", Predicates.or(new Bounds(1024, 65535), in(true, "0"), new PortRange("\\d{4,5}-\\d{4,5}")),
+  PORT("port", or(new Bounds(1024, 65535), in(true, "0"), new PortRange("\\d{4,5}-\\d{4,5}")),
       "An positive integer in the range 1024-65535 (not already in use or specified elsewhere in the configuration),\n"
           + "zero to indicate any open ephemeral port, or a range of positive integers specified as M-N"),
 
@@ -70,16 +66,12 @@
           + "Examples of valid fractions/percentages are '10', '1000%', '0.05', '5%', '0.2%', '0.0005'.\n"
           + "Examples of invalid fractions/percentages are '', '10 percent', 'Hulk Hogan'"),
 
-  PATH("path", Predicates.<String> alwaysTrue(),
+  PATH("path", x -> true,
       "A string that represents a filesystem path, which can be either relative or absolute to some directory. The filesystem depends on the property. The "
           + "following environment variables will be substituted: " + Constants.PATH_PROPERTY_ENV_VARS),
 
-  ABSOLUTEPATH("absolute path", new Predicate<String>() {
-    @Override
-    public boolean apply(final String input) {
-      return input == null || input.trim().isEmpty() || new Path(input.trim()).isAbsolute();
-    }
-  }, "An absolute filesystem path. The filesystem depends on the property. This is the same as path, but enforces that its root is explicitly specified."),
+  ABSOLUTEPATH("absolute path", x -> x == null || x.trim().isEmpty() || new Path(x.trim()).isAbsolute(),
+      "An absolute filesystem path. The filesystem depends on the property. This is the same as path, but enforces that its root is explicitly specified."),
 
   CLASSNAME("java class", new Matches("[\\w$.]*"), "A fully qualified java class name representing a class on the classpath.\n"
       + "An example is 'java.lang.String', rather than 'String'"),
@@ -89,12 +81,12 @@
 
   DURABILITY("durability", in(true, null, "none", "log", "flush", "sync"), "One of 'none', 'log', 'flush' or 'sync'."),
 
-  STRING("string", Predicates.<String> alwaysTrue(),
+  STRING("string", x -> true,
       "An arbitrary string of characters whose format is unspecified and interpreted based on the context of the property to which it applies."),
 
   BOOLEAN("boolean", in(false, null, "true", "false"), "Has a value of either 'true' or 'false' (case-insensitive)"),
 
-  URI("uri", Predicates.<String> alwaysTrue(), "A valid URI");
+  URI("uri", x -> true, "A valid URI");
 
   private String shortname, format;
   private Predicate<String> predicate;
@@ -125,38 +117,30 @@
    * @return true if value is valid or null, or if this type has no regex
    */
   public boolean isValidFormat(String value) {
-    return predicate.apply(value);
+    return predicate.test(value);
   }
 
-  private static Predicate<String> in(final boolean caseSensitive, final String... strings) {
-    List<String> allowedSet = Arrays.asList(strings);
+  @SuppressWarnings("unchecked")
+  private static Predicate<String> or(final Predicate<String>... others) {
+    return (x) -> Arrays.stream(others).anyMatch(y -> y.test(x));
+  }
+
+  private static Predicate<String> in(final boolean caseSensitive, final String... allowedSet) {
     if (caseSensitive) {
-      return Predicates.in(allowedSet);
+      return x -> Arrays.stream(allowedSet).anyMatch(y -> (x == null && y == null) || (x != null && x.equals(y)));
     } else {
-      Function<String,String> toLower = new Function<String,String>() {
-        @Override
-        public String apply(final String input) {
-          return input == null ? null : input.toLowerCase();
-        }
-      };
-      return Predicates.compose(Predicates.in(Collections2.transform(allowedSet, toLower)), toLower);
+      Function<String,String> toLower = x -> x == null ? null : x.toLowerCase();
+      return x -> Arrays.stream(allowedSet).map(toLower).anyMatch(y -> (x == null && y == null) || (x != null && toLower.apply(x).equals(y)));
     }
   }
 
   private static Predicate<String> boundedUnits(final long lowerBound, final long upperBound, final boolean caseSensitive, final String... suffixes) {
-    return Predicates.or(Predicates.isNull(),
-        Predicates.and(new HasSuffix(caseSensitive, suffixes), Predicates.compose(new Bounds(lowerBound, upperBound), new StripUnits())));
+    Predicate<String> suffixCheck = new HasSuffix(caseSensitive, suffixes);
+    return x -> x == null || (suffixCheck.test(x) && new Bounds(lowerBound, upperBound).test(stripUnits.apply(x)));
   }
 
-  private static class StripUnits implements Function<String,String> {
-    private static Pattern SUFFIX_REGEX = Pattern.compile("[^\\d]*$");
-
-    @Override
-    public String apply(final String input) {
-      requireNonNull(input);
-      return SUFFIX_REGEX.matcher(input.trim()).replaceAll("");
-    }
-  }
+  private static final Pattern SUFFIX_REGEX = Pattern.compile("[^\\d]*$");
+  private static final Function<String,String> stripUnits = x -> x == null ? null : SUFFIX_REGEX.matcher(x.trim()).replaceAll("");
 
   private static class HasSuffix implements Predicate<String> {
 
@@ -167,14 +151,14 @@
     }
 
     @Override
-    public boolean apply(final String input) {
+    public boolean test(final String input) {
       requireNonNull(input);
-      Matcher m = StripUnits.SUFFIX_REGEX.matcher(input);
+      Matcher m = SUFFIX_REGEX.matcher(input);
       if (m.find()) {
         if (m.groupCount() != 0) {
           throw new AssertionError(m.groupCount());
         }
-        return p.apply(m.group());
+        return p.test(m.group());
       } else {
         return true;
       }
@@ -183,7 +167,7 @@
 
   private static class FractionPredicate implements Predicate<String> {
     @Override
-    public boolean apply(final String input) {
+    public boolean test(final String input) {
       if (input == null) {
         return true;
       }
@@ -218,7 +202,7 @@
     }
 
     @Override
-    public boolean apply(final String input) {
+    public boolean test(final String input) {
       if (input == null) {
         return true;
       }
@@ -257,7 +241,7 @@
     }
 
     @Override
-    public boolean apply(final String input) {
+    public boolean test(final String input) {
       // TODO when the input is null, it just means that the property wasn't set
       // we can add checks for not null for required properties with Predicates.and(Predicates.notNull(), ...),
       // or we can stop assuming that null is always okay for a Matches predicate, and do that explicitly with Predicates.or(Predicates.isNull(), ...)
@@ -275,8 +259,8 @@
     }
 
     @Override
-    public boolean apply(final String input) {
-      if (super.apply(input)) {
+    public boolean test(final String input) {
+      if (super.test(input)) {
         try {
           PortRange.parse(input);
           return true;
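The `PropertyType` hunks replace Guava's `Predicates.or`, `Predicates.in`, and `Predicates.compose` with small stream-based combinators. A self-contained sketch of those helpers (the class name `PropertyPredicates` is illustrative):

```java
import java.util.Arrays;
import java.util.Objects;
import java.util.function.Function;
import java.util.function.Predicate;
import java.util.regex.Pattern;

public class PropertyPredicates {

  // Replaces Predicates.or: true when any of the given predicates passes.
  @SafeVarargs
  public static Predicate<String> or(Predicate<String>... others) {
    return x -> Arrays.stream(others).anyMatch(p -> p.test(x));
  }

  // Replaces Predicates.in (plus the compose(toLower) trick for the
  // case-insensitive variant); null is allowed when null is listed.
  public static Predicate<String> in(boolean caseSensitive, String... allowed) {
    if (caseSensitive) {
      return x -> Arrays.stream(allowed).anyMatch(y -> Objects.equals(x, y));
    }
    Function<String,String> toLower = s -> s == null ? null : s.toLowerCase();
    return x -> Arrays.stream(allowed).map(toLower)
        .anyMatch(y -> Objects.equals(toLower.apply(x), y));
  }

  // Mirrors the stripUnits lambda: drop a trailing non-digit unit suffix.
  private static final Pattern SUFFIX = Pattern.compile("[^\\d]*$");

  public static String stripUnits(String s) {
    return s == null ? null : SUFFIX.matcher(s.trim()).replaceAll("");
  }

  public static void main(String[] args) {
    System.out.println(in(false, null, "true", "false").test("TRUE"));
    System.out.println(or(x -> x.isEmpty(), x -> x.length() > 3).test("ab"));
    System.out.println(stripUnits("512M"));
  }
}
```

This keeps the validation logic local to the enum instead of routing it through Guava's static factories, which is why `boundedUnits` above can be expressed as a single lambda over `suffixCheck` and `stripUnits`.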
diff --git a/core/src/main/java/org/apache/accumulo/core/conf/SiteConfiguration.java b/core/src/main/java/org/apache/accumulo/core/conf/SiteConfiguration.java
index b2f5a18..e5e78d0 100644
--- a/core/src/main/java/org/apache/accumulo/core/conf/SiteConfiguration.java
+++ b/core/src/main/java/org/apache/accumulo/core/conf/SiteConfiguration.java
@@ -19,14 +19,13 @@
 import java.io.IOException;
 import java.util.Map;
 import java.util.Map.Entry;
+import java.util.function.Predicate;
 
 import org.apache.accumulo.core.util.CachedConfiguration;
 import org.apache.hadoop.conf.Configuration;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-import com.google.common.base.Predicate;
-
 /**
  * An {@link AccumuloConfiguration} which loads properties from an XML file, usually accumulo-site.xml. This implementation supports defaulting undefined
  * property values to a parent configuration's definitions.
@@ -121,7 +120,7 @@
     parent.getProperties(props, filter);
 
     for (Entry<String,String> entry : getXmlConfig())
-      if (filter.apply(entry.getKey()))
+      if (filter.test(entry.getKey()))
         props.put(entry.getKey(), entry.getValue());
 
     // CredentialProvider should take precedence over site
@@ -133,7 +132,7 @@
             continue;
           }
 
-          if (filter.apply(key)) {
+          if (filter.test(key)) {
             char[] value = CredentialProviderFactoryShim.getValueFromCredentialProvider(hadoopConf, key);
             if (null != value) {
               props.put(key, new String(value));
diff --git a/core/src/main/java/org/apache/accumulo/core/constraints/Constraint.java b/core/src/main/java/org/apache/accumulo/core/constraints/Constraint.java
index a936ef5..b70afc6 100644
--- a/core/src/main/java/org/apache/accumulo/core/constraints/Constraint.java
+++ b/core/src/main/java/org/apache/accumulo/core/constraints/Constraint.java
@@ -21,7 +21,6 @@
 import org.apache.accumulo.core.data.Mutation;
 import org.apache.accumulo.core.data.impl.KeyExtent;
 import org.apache.accumulo.core.security.AuthorizationContainer;
-import org.apache.accumulo.core.security.Authorizations;
 
 /**
  * Constraint objects are used to determine if mutations will be applied to a table.
@@ -62,15 +61,6 @@
      * Gets the authorizations in the environment.
      *
      * @return authorizations
-     * @deprecated Use {@link #getAuthorizationsContainer()} instead.
-     */
-    @Deprecated
-    Authorizations getAuthorizations();
-
-    /**
-     * Gets the authorizations in the environment.
-     *
-     * @return authorizations
      */
     AuthorizationContainer getAuthorizationsContainer();
   }
diff --git a/core/src/main/java/org/apache/accumulo/core/data/ComparableBytes.java b/core/src/main/java/org/apache/accumulo/core/data/ComparableBytes.java
deleted file mode 100644
index 78c0e56..0000000
--- a/core/src/main/java/org/apache/accumulo/core/data/ComparableBytes.java
+++ /dev/null
@@ -1,58 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.core.data;
-
-import org.apache.hadoop.io.BinaryComparable;
-
-/**
- * An array of bytes wrapped so as to extend Hadoop's <code>BinaryComparable</code> class.
- *
- * @deprecated since 1.7.0 In an attempt to clean up types in the data package that were not intended to be in public API this type was deprecated. Technically
- *             this method was not considered part of the public API in 1.6.0 and earlier, therefore it could have been deleted. However a decision was made to
- *             deprecate in order to be cautious and avoid confusion between 1.6.0 and 1.7.0.
- */
-@Deprecated
-public class ComparableBytes extends BinaryComparable {
-
-  public byte[] data;
-
-  /**
-   * Creates a new byte wrapper. The given byte array is used directly as a backing array, so later changes made to the array reflect into the new object.
-   *
-   * @param b
-   *          bytes to wrap
-   */
-  public ComparableBytes(byte[] b) {
-    this.data = b;
-  }
-
-  /**
-   * Gets the wrapped bytes in this object.
-   *
-   * @return bytes
-   */
-  @Override
-  public byte[] getBytes() {
-    return data;
-  }
-
-  @Override
-  public int getLength() {
-    return data.length;
-  }
-
-}
diff --git a/core/src/main/java/org/apache/accumulo/core/data/KeyExtent.java b/core/src/main/java/org/apache/accumulo/core/data/KeyExtent.java
deleted file mode 100644
index 4e3d058..0000000
--- a/core/src/main/java/org/apache/accumulo/core/data/KeyExtent.java
+++ /dev/null
@@ -1,259 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.core.data;
-
-import java.io.DataInput;
-import java.io.DataOutput;
-import java.io.IOException;
-import java.util.Collection;
-import java.util.Map.Entry;
-import java.util.Set;
-import java.util.SortedMap;
-import java.util.SortedSet;
-import java.util.TreeMap;
-import java.util.TreeSet;
-import java.util.UUID;
-
-import org.apache.accumulo.core.data.thrift.TKeyExtent;
-import org.apache.accumulo.core.metadata.schema.MetadataSchema;
-import org.apache.hadoop.io.BinaryComparable;
-import org.apache.hadoop.io.Text;
-import org.apache.hadoop.io.WritableComparable;
-
-/**
- * keeps track of information needed to identify a tablet
- *
- * @deprecated since 1.7.0 use {@link TabletId}
- */
-@Deprecated
-public class KeyExtent implements WritableComparable<KeyExtent> {
-
-  // Wrapping impl.KeyExtent to resuse code. Did not want to extend impl.KeyExtent because any changes to impl.KeyExtent would be reflected in this class.
-  // Wrapping impl.KeyExtent allows the API of this deprecated class to be frozen.
-  private org.apache.accumulo.core.data.impl.KeyExtent wrapped;
-
-  public KeyExtent() {
-    this.wrapped = new org.apache.accumulo.core.data.impl.KeyExtent();
-  }
-
-  public KeyExtent(Text table, Text endRow, Text prevEndRow) {
-    this.wrapped = new org.apache.accumulo.core.data.impl.KeyExtent(table.toString(), endRow, prevEndRow);
-  }
-
-  public KeyExtent(KeyExtent extent) {
-    this.wrapped = new org.apache.accumulo.core.data.impl.KeyExtent(extent.getTableId().toString(), extent.getEndRow(), extent.getPrevEndRow());
-  }
-
-  public KeyExtent(TKeyExtent tke) {
-    this.wrapped = new org.apache.accumulo.core.data.impl.KeyExtent(tke);
-  }
-
-  // constructor for loading extents from metadata rows
-  public KeyExtent(Text flattenedExtent, Value prevEndRow) {
-    this.wrapped = new org.apache.accumulo.core.data.impl.KeyExtent(flattenedExtent, prevEndRow);
-  }
-
-  // recreates an encoded extent from a string representation
-  // this encoding is what is stored as the row id of the metadata table
-  public KeyExtent(Text flattenedExtent, Text prevEndRow) {
-    this.wrapped = new org.apache.accumulo.core.data.impl.KeyExtent(flattenedExtent, prevEndRow);
-  }
-
-  public Text getMetadataEntry() {
-    return wrapped.getMetadataEntry();
-  }
-
-  public void setTableId(Text tId) {
-    wrapped.setTableId(tId.toString());
-  }
-
-  public Text getTableId() {
-    return new Text(wrapped.getTableId());
-  }
-
-  public void setEndRow(Text endRow) {
-    wrapped.setEndRow(endRow);
-  }
-
-  public Text getEndRow() {
-    return wrapped.getEndRow();
-  }
-
-  public Text getPrevEndRow() {
-    return wrapped.getPrevEndRow();
-  }
-
-  public void setPrevEndRow(Text prevEndRow) {
-    wrapped.setPrevEndRow(prevEndRow);
-  }
-
-  @Override
-  public void readFields(DataInput in) throws IOException {
-    wrapped.readFields(in);
-  }
-
-  @Override
-  public void write(DataOutput out) throws IOException {
-    wrapped.write(out);
-  }
-
-  public Mutation getPrevRowUpdateMutation() {
-    return wrapped.getPrevRowUpdateMutation();
-  }
-
-  @Override
-  public int compareTo(KeyExtent other) {
-    return wrapped.compareTo(other.wrapped);
-  }
-
-  @Override
-  public int hashCode() {
-    return wrapped.hashCode();
-  }
-
-  @Override
-  public boolean equals(Object o) {
-    if (o instanceof KeyExtent) {
-      return wrapped.equals(((KeyExtent) o).wrapped);
-    }
-
-    return false;
-  }
-
-  @Override
-  public String toString() {
-    return wrapped.toString();
-  }
-
-  public UUID getUUID() {
-    return wrapped.getUUID();
-  }
-
-  public boolean contains(ByteSequence bsrow) {
-    return wrapped.contains(bsrow);
-  }
-
-  public boolean contains(BinaryComparable row) {
-    return wrapped.contains(row);
-  }
-
-  public Range toDataRange() {
-    return wrapped.toDataRange();
-  }
-
-  public Range toMetadataRange() {
-    return wrapped.toMetadataRange();
-  }
-
-  public boolean overlaps(KeyExtent other) {
-    return wrapped.overlaps(other.wrapped);
-  }
-
-  public TKeyExtent toThrift() {
-    return wrapped.toThrift();
-  }
-
-  public boolean isPreviousExtent(KeyExtent prevExtent) {
-    return wrapped.isPreviousExtent(prevExtent.wrapped);
-  }
-
-  public boolean isMeta() {
-    return wrapped.isMeta();
-  }
-
-  public boolean isRootTablet() {
-    return wrapped.isRootTablet();
-  }
-
-  private static SortedSet<org.apache.accumulo.core.data.impl.KeyExtent> unwrap(Set<KeyExtent> tablets) {
-    SortedSet<org.apache.accumulo.core.data.impl.KeyExtent> trans = new TreeSet<>();
-    for (KeyExtent wrapper : tablets) {
-      trans.add(wrapper.wrapped);
-    }
-
-    return trans;
-  }
-
-  private static KeyExtent wrap(org.apache.accumulo.core.data.impl.KeyExtent ke) {
-    return new KeyExtent(new Text(ke.getTableId()), ke.getEndRow(), ke.getPrevEndRow());
-  }
-
-  private static SortedSet<KeyExtent> wrap(Collection<org.apache.accumulo.core.data.impl.KeyExtent> unwrapped) {
-    SortedSet<KeyExtent> wrapped = new TreeSet<>();
-    for (org.apache.accumulo.core.data.impl.KeyExtent wrappee : unwrapped) {
-      wrapped.add(wrap(wrappee));
-    }
-
-    return wrapped;
-  }
-
-  public static Text getMetadataEntry(Text tableId, Text endRow) {
-    return MetadataSchema.TabletsSection.getRow(tableId.toString(), endRow);
-  }
-
-  /**
-   * Empty start or end rows tell the method there are no start or end rows, and to use all the keyextents that are before the end row if no start row etc.
-   *
-   * @deprecated this method not intended for public use and is likely to be removed in a future version.
-   * @return all the key extents that the rows cover
-   */
-  @Deprecated
-  public static Collection<KeyExtent> getKeyExtentsForRange(Text startRow, Text endRow, Set<KeyExtent> kes) {
-    return wrap(org.apache.accumulo.core.data.impl.KeyExtent.getKeyExtentsForRange(startRow, endRow, unwrap(kes)));
-  }
-
-  public static Text decodePrevEndRow(Value ibw) {
-    return org.apache.accumulo.core.data.impl.KeyExtent.decodePrevEndRow(ibw);
-  }
-
-  public static Value encodePrevEndRow(Text per) {
-    return org.apache.accumulo.core.data.impl.KeyExtent.encodePrevEndRow(per);
-  }
-
-  public static Mutation getPrevRowUpdateMutation(KeyExtent ke) {
-    return org.apache.accumulo.core.data.impl.KeyExtent.getPrevRowUpdateMutation(ke.wrapped);
-  }
-
-  public static byte[] tableOfMetadataRow(Text row) {
-    return org.apache.accumulo.core.data.impl.KeyExtent.tableOfMetadataRow(row);
-  }
-
-  public static SortedSet<KeyExtent> findChildren(KeyExtent ke, SortedSet<KeyExtent> tablets) {
-    return wrap(org.apache.accumulo.core.data.impl.KeyExtent.findChildren(ke.wrapped, unwrap(tablets)));
-  }
-
-  public static KeyExtent findContainingExtent(KeyExtent extent, SortedSet<KeyExtent> extents) {
-    return wrap(org.apache.accumulo.core.data.impl.KeyExtent.findContainingExtent(extent.wrapped, unwrap(extents)));
-  }
-
-  public static Set<KeyExtent> findOverlapping(KeyExtent nke, SortedSet<KeyExtent> extents) {
-    return wrap(org.apache.accumulo.core.data.impl.KeyExtent.findOverlapping(nke.wrapped, unwrap(extents)));
-  }
-
-  public static Set<KeyExtent> findOverlapping(KeyExtent nke, SortedMap<KeyExtent,?> extents) {
-    SortedMap<org.apache.accumulo.core.data.impl.KeyExtent,Object> trans = new TreeMap<>();
-    for (Entry<KeyExtent,?> entry : extents.entrySet()) {
-      trans.put(entry.getKey().wrapped, entry.getValue());
-    }
-
-    return wrap(org.apache.accumulo.core.data.impl.KeyExtent.findOverlapping(nke.wrapped, trans));
-  }
-
-  public static Text getMetadataEntry(KeyExtent extent) {
-    return org.apache.accumulo.core.data.impl.KeyExtent.getMetadataEntry(extent.wrapped);
-  }
-}
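The deleted class above documents its own design: it wrapped `impl.KeyExtent` by delegation rather than extending it, so that changes to the internal type could not leak into the frozen, deprecated public API. A minimal sketch of that pattern (all names here are illustrative, not Accumulo's):

```java
import java.util.Objects;

// Internal type that is free to evolve.
class InternalExtent {
  private final String tableId;

  InternalExtent(String tableId) {
    this.tableId = tableId;
  }

  String tableId() {
    return tableId;
  }
}

// Frozen public facade: holds the internal type, never extends it.
@Deprecated
public class FrozenExtent {
  private final InternalExtent wrapped; // delegation, not inheritance

  public FrozenExtent(String tableId) {
    this.wrapped = new InternalExtent(tableId);
  }

  public String getTableId() {
    return wrapped.tableId(); // each frozen method forwards explicitly
  }

  @Override
  public boolean equals(Object o) {
    return o instanceof FrozenExtent && Objects.equals(getTableId(), ((FrozenExtent) o).getTableId());
  }

  @Override
  public int hashCode() {
    return Objects.hashCode(getTableId());
  }
}
```

Because every public method forwards explicitly, new methods added to the internal class never appear on the deprecated facade by accident, which is exactly the freezing property the removed wrapper's comment describes.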
diff --git a/core/src/main/java/org/apache/accumulo/core/data/PartialKey.java b/core/src/main/java/org/apache/accumulo/core/data/PartialKey.java
index bf0df1e..8ff0017 100644
--- a/core/src/main/java/org/apache/accumulo/core/data/PartialKey.java
+++ b/core/src/main/java/org/apache/accumulo/core/data/PartialKey.java
@@ -31,24 +31,6 @@
   }
 
   /**
-   * Get a partial key specification by depth of the specification.
-   *
-   * @param depth
-   *          depth of scope (i.e., number of fields included)
-   * @return partial key
-   * @throws IllegalArgumentException
-   *           if no partial key has the given depth
-   * @deprecated since 1.7.0
-   */
-  @Deprecated
-  public static PartialKey getByDepth(int depth) {
-    for (PartialKey d : PartialKey.values())
-      if (depth == d.depth)
-        return d;
-    throw new IllegalArgumentException("Invalid legacy depth " + depth);
-  }
-
-  /**
    * Gets the depth of this partial key.
    *
    * @return depth
diff --git a/core/src/main/java/org/apache/accumulo/core/data/Value.java b/core/src/main/java/org/apache/accumulo/core/data/Value.java
index 95c3c70..9c63a6a 100644
--- a/core/src/main/java/org/apache/accumulo/core/data/Value.java
+++ b/core/src/main/java/org/apache/accumulo/core/data/Value.java
@@ -25,8 +25,6 @@
 import java.io.DataOutput;
 import java.io.IOException;
 import java.nio.ByteBuffer;
-import java.nio.charset.StandardCharsets;
-import java.util.List;
 
 import org.apache.hadoop.io.BytesWritable;
 import org.apache.hadoop.io.Text;
@@ -58,7 +56,7 @@
    * @since 1.8.0
    */
   public Value(CharSequence cs) {
-    this(cs.toString().getBytes(StandardCharsets.UTF_8));
+    this(cs.toString().getBytes(UTF_8));
   }
 
   /**
@@ -96,20 +94,6 @@
   }
 
   /**
-   * @deprecated A copy of the bytes in the buffer is always made. Use {@link #Value(ByteBuffer)} instead.
-   *
-   * @param bytes
-   *          bytes of value (may not be null)
-   * @param copy
-   *          false to use the backing array of the buffer directly as the backing array, true to force a copy
-   */
-  @Deprecated
-  public Value(ByteBuffer bytes, boolean copy) {
-    /* TODO ACCUMULO-2509 right now this uses the entire backing array, which must be accessible. */
-    this(toBytes(bytes), false);
-  }
-
-  /**
    * Creates a Value using a byte array as the initial value.
    *
    * @param bytes
@@ -278,22 +262,4 @@
     WritableComparator.define(Value.class, new Comparator());
   }
 
-  /**
-   * Converts a list of byte arrays to a two-dimensional array.
-   *
-   * @param array
-   *          list of byte arrays
-   * @return two-dimensional byte array containing one given byte array per row
-   * @deprecated since 1.7.0; this utility method is not appropriate for the {@link Value} object
-   */
-  @Deprecated
-  public static byte[][] toArray(final List<byte[]> array) {
-    // List#toArray doesn't work on lists of byte [].
-    byte[][] results = new byte[array.size()][];
-    for (int i = 0; i < array.size(); i++) {
-      results[i] = array.get(i);
-    }
-    return results;
-  }
-
 }
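The `Value` hunks switch the `CharSequence` constructor to the statically imported `UTF_8` and drop the deprecated `Value(ByteBuffer, boolean copy)` constructor, leaving only the always-copy `Value(ByteBuffer)` path. A hedged sketch of those two construction paths (the class `ValueSketch` is illustrative, not Accumulo's `Value`):

```java
import static java.nio.charset.StandardCharsets.UTF_8;

import java.nio.ByteBuffer;
import java.util.Arrays;

public class ValueSketch {
  private final byte[] data;

  // CharSequence path: encode with the statically imported UTF_8.
  public ValueSketch(CharSequence cs) {
    this.data = cs.toString().getBytes(UTF_8);
  }

  // ByteBuffer path: always copy the remaining bytes; never alias the
  // buffer's backing array (the copy=false option was removed above).
  public ValueSketch(ByteBuffer bytes) {
    this.data = new byte[bytes.remaining()];
    bytes.duplicate().get(this.data);
  }

  // Defensive copy on the way out as well.
  public byte[] get() {
    return Arrays.copyOf(data, data.length);
  }

  public static void main(String[] args) {
    System.out.println(new ValueSketch("abc").get().length);
  }
}
```

Always copying closes the aliasing hole the deprecated constructor left open, where a caller's later writes to the buffer's backing array would silently mutate the stored value.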
diff --git a/core/src/main/java/org/apache/accumulo/core/data/impl/KeyExtent.java b/core/src/main/java/org/apache/accumulo/core/data/impl/KeyExtent.java
index dcb8eb7..304abb8 100644
--- a/core/src/main/java/org/apache/accumulo/core/data/impl/KeyExtent.java
+++ b/core/src/main/java/org/apache/accumulo/core/data/impl/KeyExtent.java
@@ -25,8 +25,6 @@
 import java.io.IOException;
 import java.lang.ref.WeakReference;
 import java.nio.ByteBuffer;
-import java.util.ArrayList;
-import java.util.Collection;
 import java.util.Collections;
 import java.util.Map.Entry;
 import java.util.Set;
@@ -295,77 +293,6 @@
     return getPrevRowUpdateMutation(this);
   }
 
-  /**
-   * Empty start or end rows tell the method there are no start or end rows, and to use all the keyextents that are before the end row if no start row etc.
-   *
-   * @deprecated this method not intended for public use and is likely to be removed in a future version.
-   * @return all the key extents that the rows cover
-   */
-  @Deprecated
-  public static Collection<KeyExtent> getKeyExtentsForRange(Text startRow, Text endRow, Set<KeyExtent> kes) {
-    if (kes == null)
-      return Collections.emptyList();
-    if (startRow == null)
-      startRow = new Text();
-    if (endRow == null)
-      endRow = new Text();
-    Collection<KeyExtent> keys = new ArrayList<>();
-    for (KeyExtent ckes : kes) {
-      if (ckes.getPrevEndRow() == null) {
-        if (ckes.getEndRow() == null) {
-          // only tablet
-          keys.add(ckes);
-        } else {
-          // first tablet
-          // if start row = '' then we want everything up to the endRow which will always include the first tablet
-          if (startRow.getLength() == 0) {
-            keys.add(ckes);
-          } else if (ckes.getEndRow().compareTo(startRow) >= 0) {
-            keys.add(ckes);
-          }
-        }
-      } else {
-        if (ckes.getEndRow() == null) {
-          // last tablet
-          // if endRow = '' and we're at the last tablet, add it
-          if (endRow.getLength() == 0) {
-            keys.add(ckes);
-          }
-          if (ckes.getPrevEndRow().compareTo(endRow) < 0) {
-            keys.add(ckes);
-          }
-        } else {
-          // tablet in the middle
-          if (startRow.getLength() == 0) {
-            // no start row
-
-            if (endRow.getLength() == 0) {
-              // no start & end row
-              keys.add(ckes);
-            } else {
-              // just no start row
-              if (ckes.getPrevEndRow().compareTo(endRow) < 0) {
-                keys.add(ckes);
-              }
-            }
-          } else if (endRow.getLength() == 0) {
-            // no end row
-            if (ckes.getEndRow().compareTo(startRow) >= 0) {
-              keys.add(ckes);
-            }
-          } else {
-            // no null prevend or endrows and no empty string start or end rows
-            if (ckes.getPrevEndRow().compareTo(endRow) < 0 && ckes.getEndRow().compareTo(startRow) >= 0) {
-              keys.add(ckes);
-            }
-          }
-
-        }
-      }
-    }
-    return keys;
-  }
-
   public static Text decodePrevEndRow(Value ibw) {
     Text per = null;
 
diff --git a/core/src/main/java/org/apache/accumulo/core/data/impl/TabletIdImpl.java b/core/src/main/java/org/apache/accumulo/core/data/impl/TabletIdImpl.java
index 24a7141..d34e379 100644
--- a/core/src/main/java/org/apache/accumulo/core/data/impl/TabletIdImpl.java
+++ b/core/src/main/java/org/apache/accumulo/core/data/impl/TabletIdImpl.java
@@ -21,41 +21,10 @@
 import org.apache.accumulo.core.data.TabletId;
 import org.apache.hadoop.io.Text;
 
-import com.google.common.base.Function;
-
 public class TabletIdImpl implements TabletId {
 
   private KeyExtent ke;
 
-  @SuppressWarnings("deprecation")
-  public static final Function<org.apache.accumulo.core.data.KeyExtent,TabletId> KE_2_TID_OLD = new Function<org.apache.accumulo.core.data.KeyExtent,TabletId>() {
-    @Override
-    public TabletId apply(org.apache.accumulo.core.data.KeyExtent input) {
-      // the following if null check is to appease findbugs... grumble grumble spent a good part of my morning looking into this
-      // http://sourceforge.net/p/findbugs/bugs/1139/
-      // https://code.google.com/p/guava-libraries/issues/detail?id=920
-      if (input == null)
-        return null;
-      return new TabletIdImpl(input);
-    }
-  };
-
-  @SuppressWarnings("deprecation")
-  public static final Function<TabletId,org.apache.accumulo.core.data.KeyExtent> TID_2_KE_OLD = new Function<TabletId,org.apache.accumulo.core.data.KeyExtent>() {
-    @Override
-    public org.apache.accumulo.core.data.KeyExtent apply(TabletId input) {
-      if (input == null)
-        return null;
-      return new org.apache.accumulo.core.data.KeyExtent(input.getTableId(), input.getEndRow(), input.getPrevEndRow());
-    }
-
-  };
-
-  @Deprecated
-  public TabletIdImpl(org.apache.accumulo.core.data.KeyExtent ke) {
-    this.ke = new KeyExtent(ke.getTableId().toString(), ke.getEndRow(), ke.getPrevEndRow());
-  }
-
   public TabletIdImpl(KeyExtent ke) {
     this.ke = ke;
   }
diff --git a/core/src/main/java/org/apache/accumulo/core/file/blockfile/impl/SeekableByteArrayInputStream.java b/core/src/main/java/org/apache/accumulo/core/file/blockfile/impl/SeekableByteArrayInputStream.java
index c6e7d29..4388d42 100644
--- a/core/src/main/java/org/apache/accumulo/core/file/blockfile/impl/SeekableByteArrayInputStream.java
+++ b/core/src/main/java/org/apache/accumulo/core/file/blockfile/impl/SeekableByteArrayInputStream.java
@@ -17,11 +17,11 @@
 
 package org.apache.accumulo.core.file.blockfile.impl;
 
+import static java.util.Objects.requireNonNull;
+
 import java.io.IOException;
 import java.io.InputStream;
 
-import com.google.common.base.Preconditions;
-
 /**
  * This class is like byte array input stream with two differences. It supports seeking and avoids synchronization.
  */
@@ -112,14 +112,14 @@
   public void close() throws IOException {}
 
   public SeekableByteArrayInputStream(byte[] buf) {
-    Preconditions.checkNotNull(buf, "bug argument was null");
+    requireNonNull(buf, "buf argument was null");
     this.buffer = buf;
     this.cur = 0;
     this.max = buf.length;
   }
 
   public SeekableByteArrayInputStream(byte[] buf, int maxOffset) {
-    Preconditions.checkNotNull(buf, "bug argument was null");
+    requireNonNull(buf, "buf argument was null");
     this.buffer = buf;
     this.cur = 0;
     this.max = maxOffset;
diff --git a/core/src/main/java/org/apache/accumulo/core/file/rfile/MultiLevelIndex.java b/core/src/main/java/org/apache/accumulo/core/file/rfile/MultiLevelIndex.java
index f99560e..11e3209 100644
--- a/core/src/main/java/org/apache/accumulo/core/file/rfile/MultiLevelIndex.java
+++ b/core/src/main/java/org/apache/accumulo/core/file/rfile/MultiLevelIndex.java
@@ -16,6 +16,8 @@
  */
 package org.apache.accumulo.core.file.rfile;
 
+import static java.util.Objects.requireNonNull;
+
 import java.io.ByteArrayInputStream;
 import java.io.ByteArrayOutputStream;
 import java.io.DataInput;
@@ -41,8 +43,6 @@
 import org.apache.accumulo.core.file.rfile.bcfile.Utils;
 import org.apache.hadoop.io.WritableComparable;
 
-import com.google.common.base.Preconditions;
-
 public class MultiLevelIndex {
 
   public static class IndexEntry implements WritableComparable<IndexEntry> {
@@ -144,8 +144,8 @@
     protected int indexSize;
 
     SerializedIndexBase(int[] offsets, byte[] data) {
-      Preconditions.checkNotNull(offsets, "offsets argument was null");
-      Preconditions.checkNotNull(data, "data argument was null");
+      requireNonNull(offsets, "offsets argument was null");
+      requireNonNull(data, "data argument was null");
       this.offsets = offsets;
       this.data = data;
       sbais = new SeekableByteArrayInputStream(data);
@@ -153,7 +153,7 @@
     }
 
     SerializedIndexBase(byte[] data, int offsetsOffset, int numOffsets, int indexOffset, int indexSize) {
-      Preconditions.checkNotNull(data, "data argument was null");
+      requireNonNull(data, "data argument was null");
       sbais = new SeekableByteArrayInputStream(data, indexOffset + indexSize);
       dis = new DataInputStream(sbais);
       this.offsetsOffset = offsetsOffset;
diff --git a/core/src/main/java/org/apache/accumulo/core/iterators/AggregatingIterator.java b/core/src/main/java/org/apache/accumulo/core/iterators/AggregatingIterator.java
deleted file mode 100644
index 979eaeb..0000000
--- a/core/src/main/java/org/apache/accumulo/core/iterators/AggregatingIterator.java
+++ /dev/null
@@ -1,215 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.core.iterators;
-
-import java.io.IOException;
-import java.util.Collection;
-import java.util.Collections;
-import java.util.Map;
-import java.util.Map.Entry;
-
-import org.apache.accumulo.core.conf.Property;
-import org.apache.accumulo.core.data.ByteSequence;
-import org.apache.accumulo.core.data.Key;
-import org.apache.accumulo.core.data.PartialKey;
-import org.apache.accumulo.core.data.Range;
-import org.apache.accumulo.core.data.Value;
-import org.apache.accumulo.core.iterators.conf.ColumnToClassMapping;
-import org.apache.accumulo.start.classloader.vfs.AccumuloVFSClassLoader;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-/**
- * This iterator wraps another iterator. It automatically aggregates.
- *
- * @deprecated since 1.4, replaced by {@link org.apache.accumulo.core.iterators.Combiner}
- */
-
-@Deprecated
-public class AggregatingIterator implements SortedKeyValueIterator<Key,Value>, OptionDescriber {
-
-  private SortedKeyValueIterator<Key,Value> iterator;
-  private ColumnToClassMapping<org.apache.accumulo.core.iterators.aggregation.Aggregator> aggregators;
-
-  private Key workKey = new Key();
-
-  private Key aggrKey;
-  private Value aggrValue;
-  // private boolean propogateDeletes;
-  private static final Logger log = LoggerFactory.getLogger(AggregatingIterator.class);
-
-  @Override
-  public AggregatingIterator deepCopy(IteratorEnvironment env) {
-    return new AggregatingIterator(this, env);
-  }
-
-  private AggregatingIterator(AggregatingIterator other, IteratorEnvironment env) {
-    iterator = other.iterator.deepCopy(env);
-    aggregators = other.aggregators;
-  }
-
-  public AggregatingIterator() {}
-
-  private void aggregateRowColumn(org.apache.accumulo.core.iterators.aggregation.Aggregator aggr) throws IOException {
-    // this function assumes that first value is not delete
-
-    if (iterator.getTopKey().isDeleted())
-      return;
-
-    workKey.set(iterator.getTopKey());
-
-    Key keyToAggregate = workKey;
-
-    aggr.reset();
-
-    aggr.collect(iterator.getTopValue());
-    iterator.next();
-
-    while (iterator.hasTop() && !iterator.getTopKey().isDeleted() && iterator.getTopKey().equals(keyToAggregate, PartialKey.ROW_COLFAM_COLQUAL_COLVIS)) {
-      aggr.collect(iterator.getTopValue());
-      iterator.next();
-    }
-
-    aggrKey = workKey;
-    aggrValue = aggr.aggregate();
-
-  }
-
-  private void findTop() throws IOException {
-    // check if aggregation is needed
-    if (iterator.hasTop()) {
-      org.apache.accumulo.core.iterators.aggregation.Aggregator aggr = aggregators.getObject(iterator.getTopKey());
-      if (aggr != null) {
-        aggregateRowColumn(aggr);
-      }
-    }
-  }
-
-  public AggregatingIterator(SortedKeyValueIterator<Key,Value> iterator,
-      ColumnToClassMapping<org.apache.accumulo.core.iterators.aggregation.Aggregator> aggregators) throws IOException {
-    this.iterator = iterator;
-    this.aggregators = aggregators;
-  }
-
-  @Override
-  public Key getTopKey() {
-    if (aggrKey != null) {
-      return aggrKey;
-    }
-    return iterator.getTopKey();
-  }
-
-  @Override
-  public Value getTopValue() {
-    if (aggrKey != null) {
-      return aggrValue;
-    }
-    return iterator.getTopValue();
-  }
-
-  @Override
-  public boolean hasTop() {
-    return aggrKey != null || iterator.hasTop();
-  }
-
-  @Override
-  public void next() throws IOException {
-    if (aggrKey != null) {
-      aggrKey = null;
-      aggrValue = null;
-    } else {
-      iterator.next();
-    }
-
-    findTop();
-  }
-
-  @Override
-  public void seek(Range range, Collection<ByteSequence> columnFamilies, boolean inclusive) throws IOException {
-    // do not want to seek to the middle of a value that should be
-    // aggregated...
-
-    Range seekRange = IteratorUtil.maximizeStartKeyTimeStamp(range);
-
-    iterator.seek(seekRange, columnFamilies, inclusive);
-    findTop();
-
-    if (range.getStartKey() != null) {
-      while (hasTop() && getTopKey().equals(range.getStartKey(), PartialKey.ROW_COLFAM_COLQUAL_COLVIS)
-          && getTopKey().getTimestamp() > range.getStartKey().getTimestamp()) {
-        // the value has a more recent time stamp, so
-        // pass it up
-        // log.debug("skipping "+getTopKey());
-        next();
-      }
-
-      while (hasTop() && range.beforeStartKey(getTopKey())) {
-        next();
-      }
-    }
-
-  }
-
-  @Override
-  public void init(SortedKeyValueIterator<Key,Value> source, Map<String,String> options, IteratorEnvironment env) throws IOException {
-
-    this.iterator = source;
-
-    try {
-      String context = null;
-      if (null != env)
-        context = env.getConfig().get(Property.TABLE_CLASSPATH);
-      this.aggregators = new ColumnToClassMapping<>(options, org.apache.accumulo.core.iterators.aggregation.Aggregator.class, context);
-    } catch (ClassNotFoundException e) {
-      log.error(e.toString());
-      throw new IllegalArgumentException(e);
-    } catch (InstantiationException e) {
-      log.error(e.toString());
-      throw new IllegalArgumentException(e);
-    } catch (IllegalAccessException e) {
-      log.error(e.toString());
-      throw new IllegalArgumentException(e);
-    }
-  }
-
-  @Override
-  public IteratorOptions describeOptions() {
-    return new IteratorOptions("agg", "Aggregators apply aggregating functions to values with identical keys", null,
-        Collections.singletonList("<columnName> <aggregatorClass>"));
-  }
-
-  @Override
-  public boolean validateOptions(Map<String,String> options) {
-    for (Entry<String,String> entry : options.entrySet()) {
-      String classname = entry.getValue();
-      if (classname == null)
-        throw new IllegalArgumentException("classname null");
-      Class<? extends org.apache.accumulo.core.iterators.aggregation.Aggregator> clazz;
-      try {
-        clazz = AccumuloVFSClassLoader.loadClass(classname, org.apache.accumulo.core.iterators.aggregation.Aggregator.class);
-        clazz.newInstance();
-      } catch (ClassNotFoundException e) {
-        throw new IllegalArgumentException("class not found: " + classname);
-      } catch (InstantiationException e) {
-        throw new IllegalArgumentException("instantiation exception: " + classname);
-      } catch (IllegalAccessException e) {
-        throw new IllegalArgumentException("illegal access exception: " + classname);
-      }
-    }
-    return true;
-  }
-}
diff --git a/core/src/main/java/org/apache/accumulo/core/iterators/FamilyIntersectingIterator.java b/core/src/main/java/org/apache/accumulo/core/iterators/FamilyIntersectingIterator.java
deleted file mode 100644
index 04102b8..0000000
--- a/core/src/main/java/org/apache/accumulo/core/iterators/FamilyIntersectingIterator.java
+++ /dev/null
@@ -1,29 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.core.iterators;
-
-import org.apache.accumulo.core.iterators.user.IndexedDocIterator;
-
-/**
- * This class remains here for backwards compatibility.
- *
- * @deprecated since 1.4, replaced by {@link org.apache.accumulo.core.iterators.user.IndexedDocIterator}
- */
-@Deprecated
-public class FamilyIntersectingIterator extends IndexedDocIterator {
-
-}
diff --git a/core/src/main/java/org/apache/accumulo/core/iterators/GrepIterator.java b/core/src/main/java/org/apache/accumulo/core/iterators/GrepIterator.java
deleted file mode 100644
index 5c44c31..0000000
--- a/core/src/main/java/org/apache/accumulo/core/iterators/GrepIterator.java
+++ /dev/null
@@ -1,27 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.core.iterators;
-
-/**
- * This class remains here for backwards compatibility.
- *
- * @deprecated since 1.4, replaced by {@link org.apache.accumulo.core.iterators.user.GrepIterator}
- */
-@Deprecated
-public class GrepIterator extends org.apache.accumulo.core.iterators.user.GrepIterator {
-
-}
diff --git a/core/src/main/java/org/apache/accumulo/core/iterators/IntersectingIterator.java b/core/src/main/java/org/apache/accumulo/core/iterators/IntersectingIterator.java
deleted file mode 100644
index 5765982..0000000
--- a/core/src/main/java/org/apache/accumulo/core/iterators/IntersectingIterator.java
+++ /dev/null
@@ -1,27 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.core.iterators;
-
-/**
- * This class remains here for backwards compatibility.
- *
- * @deprecated since 1.4, replaced by {@link org.apache.accumulo.core.iterators.user.IntersectingIterator}
- */
-@Deprecated
-public class IntersectingIterator extends org.apache.accumulo.core.iterators.user.IntersectingIterator {
-
-}
diff --git a/core/src/main/java/org/apache/accumulo/core/iterators/IteratorUtil.java b/core/src/main/java/org/apache/accumulo/core/iterators/IteratorUtil.java
index 981404c..1d5728b 100644
--- a/core/src/main/java/org/apache/accumulo/core/iterators/IteratorUtil.java
+++ b/core/src/main/java/org/apache/accumulo/core/iterators/IteratorUtil.java
@@ -70,17 +70,6 @@
    */
   public static enum IteratorScope {
     majc, minc, scan;
-
-    /**
-     * Fetch the correct configuration key prefix for the given scope. Throws an IllegalArgumentException if no property exists for the given scope.
-     *
-     * @deprecated since 1.7.0 This method returns a type that is not part of the public API and is not guaranteed to be stable. The method was deprecated to
-     *             discourage its use.
-     */
-    @Deprecated
-    public static Property getProperty(IteratorScope scope) {
-      return IteratorUtil.getProperty(scope);
-    }
   }
 
   public static class IterInfoComparator implements Comparator<IterInfo>, Serializable {
@@ -266,6 +255,7 @@
       for (IterInfo iterInfo : iters) {
 
         Class<? extends SortedKeyValueIterator<K,V>> clazz = null;
+        log.trace("Attempting to load iterator class {}", iterInfo.className);
         if (classCache != null) {
           clazz = classCache.get(iterInfo.className);
 
@@ -305,13 +295,17 @@
       String context, IterInfo iterInfo) throws ClassNotFoundException, IOException {
     Class<? extends SortedKeyValueIterator<K,V>> clazz;
     if (useAccumuloClassLoader) {
-      if (context != null && !context.equals(""))
+      if (context != null && !context.equals("")) {
         clazz = (Class<? extends SortedKeyValueIterator<K,V>>) AccumuloVFSClassLoader.getContextManager().loadClass(context, iterInfo.className,
             SortedKeyValueIterator.class);
-      else
+        log.trace("Iterator class {} loaded from context {}, classloader: {}", iterInfo.className, context, clazz.getClassLoader());
+      } else {
         clazz = (Class<? extends SortedKeyValueIterator<K,V>>) AccumuloVFSClassLoader.loadClass(iterInfo.className, SortedKeyValueIterator.class);
+        log.trace("Iterator class {} loaded from AccumuloVFSClassLoader: {}", iterInfo.className, clazz.getClassLoader());
+      }
     } else {
       clazz = (Class<? extends SortedKeyValueIterator<K,V>>) Class.forName(iterInfo.className).asSubclass(SortedKeyValueIterator.class);
+      log.trace("Iterator class {} loaded from classpath", iterInfo.className);
     }
     return clazz;
   }
diff --git a/core/src/main/java/org/apache/accumulo/core/iterators/LargeRowFilter.java b/core/src/main/java/org/apache/accumulo/core/iterators/LargeRowFilter.java
deleted file mode 100644
index 75155f9..0000000
--- a/core/src/main/java/org/apache/accumulo/core/iterators/LargeRowFilter.java
+++ /dev/null
@@ -1,27 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.core.iterators;
-
-/**
- * This class remains here for backwards compatibility.
- *
- * @deprecated since 1.4, replaced by {@link org.apache.accumulo.core.iterators.user.LargeRowFilter}
- */
-@Deprecated
-public class LargeRowFilter extends org.apache.accumulo.core.iterators.user.LargeRowFilter {
-
-}
diff --git a/core/src/main/java/org/apache/accumulo/core/iterators/RowDeletingIterator.java b/core/src/main/java/org/apache/accumulo/core/iterators/RowDeletingIterator.java
deleted file mode 100644
index ee6989f..0000000
--- a/core/src/main/java/org/apache/accumulo/core/iterators/RowDeletingIterator.java
+++ /dev/null
@@ -1,27 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.core.iterators;
-
-/**
- * This class remains here for backwards compatibility.
- *
- * @deprecated since 1.4, replaced by {@link org.apache.accumulo.core.iterators.user.RowDeletingIterator}
- */
-@Deprecated
-public class RowDeletingIterator extends org.apache.accumulo.core.iterators.user.RowDeletingIterator {
-
-}
diff --git a/core/src/main/java/org/apache/accumulo/core/iterators/VersioningIterator.java b/core/src/main/java/org/apache/accumulo/core/iterators/VersioningIterator.java
deleted file mode 100644
index d849275..0000000
--- a/core/src/main/java/org/apache/accumulo/core/iterators/VersioningIterator.java
+++ /dev/null
@@ -1,36 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.core.iterators;
-
-import org.apache.accumulo.core.data.Key;
-import org.apache.accumulo.core.data.Value;
-
-/**
- * This class remains here for backwards compatibility.
- *
- * @deprecated since 1.4, replaced by {@link org.apache.accumulo.core.iterators.user.VersioningIterator}
- */
-@Deprecated
-public class VersioningIterator extends org.apache.accumulo.core.iterators.user.VersioningIterator {
-  public VersioningIterator() {}
-
-  public VersioningIterator(SortedKeyValueIterator<Key,Value> iterator, int maxVersions) {
-    super();
-    this.setSource(iterator);
-    this.maxVersions = maxVersions;
-  }
-}
diff --git a/core/src/main/java/org/apache/accumulo/core/iterators/WholeRowIterator.java b/core/src/main/java/org/apache/accumulo/core/iterators/WholeRowIterator.java
deleted file mode 100644
index 7432a88..0000000
--- a/core/src/main/java/org/apache/accumulo/core/iterators/WholeRowIterator.java
+++ /dev/null
@@ -1,27 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.core.iterators;
-
-/**
- * This class remains here for backwards compatibility.
- *
- * @deprecated since 1.4, replaced by {@link org.apache.accumulo.core.iterators.user.WholeRowIterator}
- */
-@Deprecated
-public class WholeRowIterator extends org.apache.accumulo.core.iterators.user.WholeRowIterator {
-
-}
diff --git a/core/src/main/java/org/apache/accumulo/core/iterators/aggregation/Aggregator.java b/core/src/main/java/org/apache/accumulo/core/iterators/aggregation/Aggregator.java
deleted file mode 100644
index f9183dc..0000000
--- a/core/src/main/java/org/apache/accumulo/core/iterators/aggregation/Aggregator.java
+++ /dev/null
@@ -1,31 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.core.iterators.aggregation;
-
-import org.apache.accumulo.core.data.Value;
-
-/**
- * @deprecated since 1.4, replaced by {@link org.apache.accumulo.core.iterators.Combiner}
- */
-@Deprecated
-public interface Aggregator {
-  void reset();
-
-  void collect(Value value);
-
-  Value aggregate();
-}
diff --git a/core/src/main/java/org/apache/accumulo/core/iterators/aggregation/LongSummation.java b/core/src/main/java/org/apache/accumulo/core/iterators/aggregation/LongSummation.java
deleted file mode 100644
index 7692ecb..0000000
--- a/core/src/main/java/org/apache/accumulo/core/iterators/aggregation/LongSummation.java
+++ /dev/null
@@ -1,76 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.core.iterators.aggregation;
-
-import java.io.IOException;
-
-import org.apache.accumulo.core.data.Value;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-/**
- * @deprecated since 1.4, replaced by {@link org.apache.accumulo.core.iterators.user.SummingCombiner} with
- *             {@link org.apache.accumulo.core.iterators.LongCombiner.Type#FIXEDLEN}
- */
-@Deprecated
-public class LongSummation implements Aggregator {
-  private static final Logger log = LoggerFactory.getLogger(LongSummation.class);
-  long sum = 0;
-
-  @Override
-  public Value aggregate() {
-    return new Value(longToBytes(sum));
-  }
-
-  @Override
-  public void collect(Value value) {
-    try {
-      sum += bytesToLong(value.get());
-    } catch (IOException e) {
-      log.error(LongSummation.class.getSimpleName() + " trying to convert bytes to long, but byte array isn't length 8");
-    }
-  }
-
-  @Override
-  public void reset() {
-    sum = 0;
-  }
-
-  public static long bytesToLong(byte[] b) throws IOException {
-    return bytesToLong(b, 0);
-  }
-
-  public static long bytesToLong(byte[] b, int offset) throws IOException {
-    if (b.length < offset + 8)
-      throw new IOException("trying to convert to long, but byte array isn't long enough, wanted " + (offset + 8) + " found " + b.length);
-    return (((long) b[offset + 0] << 56) + ((long) (b[offset + 1] & 255) << 48) + ((long) (b[offset + 2] & 255) << 40) + ((long) (b[offset + 3] & 255) << 32)
-        + ((long) (b[offset + 4] & 255) << 24) + ((b[offset + 5] & 255) << 16) + ((b[offset + 6] & 255) << 8) + ((b[offset + 7] & 255) << 0));
-  }
-
-  public static byte[] longToBytes(long l) {
-    byte[] b = new byte[8];
-    b[0] = (byte) (l >>> 56);
-    b[1] = (byte) (l >>> 48);
-    b[2] = (byte) (l >>> 40);
-    b[3] = (byte) (l >>> 32);
-    b[4] = (byte) (l >>> 24);
-    b[5] = (byte) (l >>> 16);
-    b[6] = (byte) (l >>> 8);
-    b[7] = (byte) (l >>> 0);
-    return b;
-  }
-}
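The deleted LongSummation encoded sums as 8 big-endian bytes; its replacement, SummingCombiner with LongCombiner.Type#FIXEDLEN, keeps the same fixed-length layout. A minimal sketch of that round trip using java.nio.ByteBuffer — the class name is hypothetical and this is an assumed-equivalent illustration, not the combiner's actual code:

```java
import java.nio.ByteBuffer;

public class FixedLenDemo {
  // Encode a long as 8 big-endian bytes, equivalent to the deleted
  // LongSummation.longToBytes shift sequence.
  static byte[] longToBytes(long l) {
    return ByteBuffer.allocate(8).putLong(l).array();
  }

  // Decode 8 big-endian bytes back to a long, validating the length
  // the way bytesToLong did.
  static long bytesToLong(byte[] b) {
    if (b.length < 8)
      throw new IllegalArgumentException("wanted 8 bytes, found " + b.length);
    return ByteBuffer.wrap(b).getLong();
  }

  public static void main(String[] args) {
    System.out.println(bytesToLong(longToBytes(-42L))); // prints -42
  }
}
```

ByteBuffer defaults to big-endian, which is why no explicit order() call is needed to match the hand-rolled shifts.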
diff --git a/core/src/main/java/org/apache/accumulo/core/iterators/aggregation/NumArraySummation.java b/core/src/main/java/org/apache/accumulo/core/iterators/aggregation/NumArraySummation.java
deleted file mode 100644
index 66cd2d5..0000000
--- a/core/src/main/java/org/apache/accumulo/core/iterators/aggregation/NumArraySummation.java
+++ /dev/null
@@ -1,96 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.core.iterators.aggregation;
-
-import java.io.ByteArrayInputStream;
-import java.io.ByteArrayOutputStream;
-import java.io.DataInputStream;
-import java.io.DataOutputStream;
-import java.io.IOException;
-
-import org.apache.accumulo.core.data.Value;
-import org.apache.hadoop.io.WritableUtils;
-
-/**
- * @deprecated since 1.4, replaced by {@link org.apache.accumulo.core.iterators.user.SummingArrayCombiner} with
- *             {@link org.apache.accumulo.core.iterators.user.SummingArrayCombiner.Type#VARLEN}
- */
-@Deprecated
-public class NumArraySummation implements Aggregator {
-  long[] sum = new long[0];
-
-  @Override
-  public Value aggregate() {
-    try {
-      return new Value(NumArraySummation.longArrayToBytes(sum));
-    } catch (IOException e) {
-      throw new RuntimeException(e);
-    }
-  }
-
-  @Override
-  public void collect(Value value) {
-    long[] la;
-    try {
-      la = NumArraySummation.bytesToLongArray(value.get());
-    } catch (IOException e) {
-      throw new RuntimeException(e);
-    }
-
-    if (la.length > sum.length) {
-      for (int i = 0; i < sum.length; i++) {
-        la[i] = NumSummation.safeAdd(la[i], sum[i]);
-      }
-      sum = la;
-    } else {
-      for (int i = 0; i < la.length; i++) {
-        sum[i] = NumSummation.safeAdd(sum[i], la[i]);
-      }
-    }
-  }
-
-  public static byte[] longArrayToBytes(long[] la) throws IOException {
-    ByteArrayOutputStream baos = new ByteArrayOutputStream();
-    DataOutputStream dos = new DataOutputStream(baos);
-
-    WritableUtils.writeVInt(dos, la.length);
-    for (int i = 0; i < la.length; i++) {
-      WritableUtils.writeVLong(dos, la[i]);
-    }
-
-    return baos.toByteArray();
-  }
-
-  public static long[] bytesToLongArray(byte[] b) throws IOException {
-    DataInputStream dis = new DataInputStream(new ByteArrayInputStream(b));
-    int len = WritableUtils.readVInt(dis);
-
-    long[] la = new long[len];
-
-    for (int i = 0; i < len; i++) {
-      la[i] = WritableUtils.readVLong(dis);
-    }
-
-    return la;
-  }
-
-  @Override
-  public void reset() {
-    sum = new long[0];
-  }
-
-}
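The collect() logic removed above merges two long arrays element-wise, extending the running sum to the longer of the two and clamping each element on overflow. A self-contained sketch of that merge (class and method names hypothetical; the saturating add is re-implemented with a sign-bit overflow check rather than calling the deleted NumSummation.safeAdd):

```java
import java.util.Arrays;

public class ArraySumDemo {
  // Saturating add: clamp to Long.MAX_VALUE / Long.MIN_VALUE on overflow.
  static long safeAdd(long a, long b) {
    long r = a + b;
    // Overflow occurred iff both operands differ in sign from the result.
    if (((a ^ r) & (b ^ r)) < 0)
      return a > 0 ? Long.MAX_VALUE : Long.MIN_VALUE;
    return r;
  }

  // Element-wise sum of two arrays, keeping the extra tail of the longer
  // one, mirroring the two branches of the deleted collect().
  static long[] merge(long[] sum, long[] la) {
    long[] big = la.length > sum.length ? la.clone() : sum.clone();
    long[] small = la.length > sum.length ? sum : la;
    for (int i = 0; i < small.length; i++)
      big[i] = safeAdd(big[i], small[i]);
    return big;
  }

  public static void main(String[] args) {
    System.out.println(Arrays.toString(merge(new long[] {1, 2}, new long[] {3, 4, 5})));
    // prints [4, 6, 5]
  }
}
```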
diff --git a/core/src/main/java/org/apache/accumulo/core/iterators/aggregation/NumSummation.java b/core/src/main/java/org/apache/accumulo/core/iterators/aggregation/NumSummation.java
deleted file mode 100644
index 4d79894..0000000
--- a/core/src/main/java/org/apache/accumulo/core/iterators/aggregation/NumSummation.java
+++ /dev/null
@@ -1,91 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.core.iterators.aggregation;
-
-import java.io.ByteArrayInputStream;
-import java.io.ByteArrayOutputStream;
-import java.io.DataInputStream;
-import java.io.DataOutputStream;
-import java.io.IOException;
-
-import org.apache.accumulo.core.data.Value;
-import org.apache.hadoop.io.WritableUtils;
-
-/**
- * @deprecated since 1.4, replaced by {@link org.apache.accumulo.core.iterators.user.SummingCombiner} with
- *             {@link org.apache.accumulo.core.iterators.LongCombiner.Type#VARLEN}
- */
-@Deprecated
-public class NumSummation implements Aggregator {
-  long sum = 0l;
-
-  @Override
-  public Value aggregate() {
-    try {
-      return new Value(NumSummation.longToBytes(sum));
-    } catch (IOException e) {
-      throw new RuntimeException(e);
-    }
-  }
-
-  @Override
-  public void collect(Value value) {
-    long l;
-    try {
-      l = NumSummation.bytesToLong(value.get());
-    } catch (IOException e) {
-      throw new RuntimeException(e);
-    }
-
-    sum = NumSummation.safeAdd(sum, l);
-  }
-
-  public static byte[] longToBytes(long l) throws IOException {
-    ByteArrayOutputStream baos = new ByteArrayOutputStream();
-    DataOutputStream dos = new DataOutputStream(baos);
-
-    WritableUtils.writeVLong(dos, l);
-
-    return baos.toByteArray();
-  }
-
-  public static long bytesToLong(byte[] b) throws IOException {
-    DataInputStream dis = new DataInputStream(new ByteArrayInputStream(b));
-    return WritableUtils.readVLong(dis);
-  }
-
-  public static long safeAdd(long a, long b) {
-    long aSign = Long.signum(a);
-    long bSign = Long.signum(b);
-    if ((aSign != 0) && (bSign != 0) && (aSign == bSign)) {
-      if (aSign > 0) {
-        if (Long.MAX_VALUE - a < b)
-          return Long.MAX_VALUE;
-      } else {
-        if (Long.MIN_VALUE - a > b)
-          return Long.MIN_VALUE;
-      }
-    }
-    return a + b;
-  }
-
-  @Override
-  public void reset() {
-    sum = 0l;
-  }
-
-}
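NumSummation.safeAdd, deleted above, implements saturating addition: instead of wrapping on overflow, it clamps to Long.MAX_VALUE or Long.MIN_VALUE. The same behavior can be sketched with Math.addExact (Java 8+); the class name is hypothetical and this is an illustration of the technique, not the project's code:

```java
public class SafeAddDemo {
  // Saturating add equivalent to the deleted NumSummation.safeAdd.
  static long safeAdd(long a, long b) {
    try {
      return Math.addExact(a, b);
    } catch (ArithmeticException overflow) {
      // addExact only overflows when both operands share a sign,
      // so the sign of 'a' tells us which bound to clamp to.
      return a > 0 ? Long.MAX_VALUE : Long.MIN_VALUE;
    }
  }

  public static void main(String[] args) {
    System.out.println(safeAdd(Long.MAX_VALUE, 1) == Long.MAX_VALUE); // prints true
  }
}
```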
diff --git a/core/src/main/java/org/apache/accumulo/core/iterators/aggregation/StringMax.java b/core/src/main/java/org/apache/accumulo/core/iterators/aggregation/StringMax.java
deleted file mode 100644
index 3d4516d..0000000
--- a/core/src/main/java/org/apache/accumulo/core/iterators/aggregation/StringMax.java
+++ /dev/null
@@ -1,48 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.core.iterators.aggregation;
-
-import org.apache.accumulo.core.data.Value;
-
-/**
- * @deprecated since 1.4, replaced by {@link org.apache.accumulo.core.iterators.user.MaxCombiner} with
- *             {@link org.apache.accumulo.core.iterators.LongCombiner.Type#STRING}
- */
-@Deprecated
-public class StringMax implements Aggregator {
-
-  long max = Long.MIN_VALUE;
-
-  @Override
-  public Value aggregate() {
-    return new Value(Long.toString(max).getBytes());
-  }
-
-  @Override
-  public void collect(Value value) {
-    long l = Long.parseLong(new String(value.get()));
-    if (l > max) {
-      max = l;
-    }
-  }
-
-  @Override
-  public void reset() {
-    max = Long.MIN_VALUE;
-  }
-
-}
diff --git a/core/src/main/java/org/apache/accumulo/core/iterators/aggregation/StringMin.java b/core/src/main/java/org/apache/accumulo/core/iterators/aggregation/StringMin.java
deleted file mode 100644
index 7a49f81..0000000
--- a/core/src/main/java/org/apache/accumulo/core/iterators/aggregation/StringMin.java
+++ /dev/null
@@ -1,48 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.core.iterators.aggregation;
-
-import org.apache.accumulo.core.data.Value;
-
-/**
- * @deprecated since 1.4, replaced by {@link org.apache.accumulo.core.iterators.user.MinCombiner} with
- *             {@link org.apache.accumulo.core.iterators.LongCombiner.Type#STRING}
- */
-@Deprecated
-public class StringMin implements Aggregator {
-
-  long min = Long.MAX_VALUE;
-
-  @Override
-  public Value aggregate() {
-    return new Value(Long.toString(min).getBytes());
-  }
-
-  @Override
-  public void collect(Value value) {
-    long l = Long.parseLong(new String(value.get()));
-    if (l < min) {
-      min = l;
-    }
-  }
-
-  @Override
-  public void reset() {
-    min = Long.MAX_VALUE;
-  }
-
-}
diff --git a/core/src/main/java/org/apache/accumulo/core/iterators/aggregation/StringSummation.java b/core/src/main/java/org/apache/accumulo/core/iterators/aggregation/StringSummation.java
deleted file mode 100644
index a8b5967..0000000
--- a/core/src/main/java/org/apache/accumulo/core/iterators/aggregation/StringSummation.java
+++ /dev/null
@@ -1,45 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.core.iterators.aggregation;
-
-import org.apache.accumulo.core.data.Value;
-
-/**
- * @deprecated since 1.4, replaced by {@link org.apache.accumulo.core.iterators.user.SummingCombiner} with
- *             {@link org.apache.accumulo.core.iterators.LongCombiner.Type#STRING}
- */
-@Deprecated
-public class StringSummation implements Aggregator {
-
-  long sum = 0;
-
-  @Override
-  public Value aggregate() {
-    return new Value(Long.toString(sum).getBytes());
-  }
-
-  @Override
-  public void collect(Value value) {
-    sum += Long.parseLong(new String(value.get()));
-  }
-
-  @Override
-  public void reset() {
-    sum = 0;
-
-  }
-}
diff --git a/core/src/main/java/org/apache/accumulo/core/iterators/aggregation/conf/AggregatorConfiguration.java b/core/src/main/java/org/apache/accumulo/core/iterators/aggregation/conf/AggregatorConfiguration.java
deleted file mode 100644
index 3432cf5..0000000
--- a/core/src/main/java/org/apache/accumulo/core/iterators/aggregation/conf/AggregatorConfiguration.java
+++ /dev/null
@@ -1,34 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.core.iterators.aggregation.conf;
-
-import org.apache.hadoop.io.Text;
-
-/**
- * @deprecated since 1.4
- */
-@Deprecated
-public class AggregatorConfiguration extends org.apache.accumulo.core.iterators.conf.PerColumnIteratorConfig {
-
-  public AggregatorConfiguration(Text columnFamily, String aggClassName) {
-    super(columnFamily, aggClassName);
-  }
-
-  public AggregatorConfiguration(Text columnFamily, Text columnQualifier, String aggClassName) {
-    super(columnFamily, columnQualifier, aggClassName);
-  }
-}
diff --git a/core/src/main/java/org/apache/accumulo/core/iterators/aggregation/conf/AggregatorSet.java b/core/src/main/java/org/apache/accumulo/core/iterators/aggregation/conf/AggregatorSet.java
deleted file mode 100644
index d6545ac..0000000
--- a/core/src/main/java/org/apache/accumulo/core/iterators/aggregation/conf/AggregatorSet.java
+++ /dev/null
@@ -1,41 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.core.iterators.aggregation.conf;
-
-import java.io.IOException;
-import java.util.Map;
-
-import org.apache.accumulo.core.data.Key;
-import org.apache.accumulo.core.iterators.conf.ColumnToClassMapping;
-
-/**
- * @deprecated since 1.4
- */
-@Deprecated
-public class AggregatorSet extends ColumnToClassMapping<org.apache.accumulo.core.iterators.aggregation.Aggregator> {
-  public AggregatorSet(Map<String,String> opts) throws InstantiationException, IllegalAccessException, ClassNotFoundException, IOException {
-    super(opts, org.apache.accumulo.core.iterators.aggregation.Aggregator.class);
-  }
-
-  public AggregatorSet() {
-    super();
-  }
-
-  public org.apache.accumulo.core.iterators.aggregation.Aggregator getAggregator(Key k) {
-    return getObject(k);
-  }
-}
diff --git a/core/src/main/java/org/apache/accumulo/core/iterators/conf/PerColumnIteratorConfig.java b/core/src/main/java/org/apache/accumulo/core/iterators/conf/PerColumnIteratorConfig.java
deleted file mode 100644
index 310776aa..0000000
--- a/core/src/main/java/org/apache/accumulo/core/iterators/conf/PerColumnIteratorConfig.java
+++ /dev/null
@@ -1,81 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.core.iterators.conf;
-
-import org.apache.hadoop.io.Text;
-
-/**
- * @deprecated since 1.4
- *
- * @see org.apache.accumulo.core.client.IteratorSetting.Column
- * @see org.apache.accumulo.core.iterators.Combiner#setColumns(org.apache.accumulo.core.client.IteratorSetting, java.util.List)
- */
-@Deprecated
-public class PerColumnIteratorConfig {
-
-  private String parameter;
-  private Text colq;
-  private Text colf;
-
-  public PerColumnIteratorConfig(Text columnFamily, String parameter) {
-    this.colf = columnFamily;
-    this.colq = null;
-    this.parameter = parameter;
-  }
-
-  public PerColumnIteratorConfig(Text columnFamily, Text columnQualifier, String parameter) {
-    this.colf = columnFamily;
-    this.colq = columnQualifier;
-    this.parameter = parameter;
-  }
-
-  public Text getColumnFamily() {
-    return colf;
-  }
-
-  public Text getColumnQualifier() {
-    return colq;
-  }
-
-  public String encodeColumns() {
-    return encodeColumns(this);
-  }
-
-  public String getClassName() {
-    return parameter;
-  }
-
-  private static String encodeColumns(PerColumnIteratorConfig pcic) {
-    return ColumnSet.encodeColumns(pcic.colf, pcic.colq);
-  }
-
-  public static String encodeColumns(Text columnFamily, Text columnQualifier) {
-    return ColumnSet.encodeColumns(columnFamily, columnQualifier);
-  }
-
-  public static PerColumnIteratorConfig decodeColumns(String columns, String className) {
-    String[] cols = columns.split(":");
-
-    if (cols.length == 1) {
-      return new PerColumnIteratorConfig(ColumnSet.decode(cols[0]), className);
-    } else if (cols.length == 2) {
-      return new PerColumnIteratorConfig(ColumnSet.decode(cols[0]), ColumnSet.decode(cols[1]), className);
-    } else {
-      throw new IllegalArgumentException(columns);
-    }
-  }
-}
diff --git a/core/src/main/java/org/apache/accumulo/core/iterators/user/IntersectingIterator.java b/core/src/main/java/org/apache/accumulo/core/iterators/user/IntersectingIterator.java
index e7338f3..f848b10 100644
--- a/core/src/main/java/org/apache/accumulo/core/iterators/user/IntersectingIterator.java
+++ b/core/src/main/java/org/apache/accumulo/core/iterators/user/IntersectingIterator.java
@@ -16,9 +16,8 @@
  */
 package org.apache.accumulo.core.iterators.user;
 
-import static java.nio.charset.StandardCharsets.UTF_8;
-
 import java.io.IOException;
+import java.util.Base64;
 import java.util.Collection;
 import java.util.Collections;
 import java.util.Map;
@@ -32,7 +31,6 @@
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.iterators.IteratorEnvironment;
 import org.apache.accumulo.core.iterators.SortedKeyValueIterator;
-import org.apache.accumulo.core.util.Base64;
 import org.apache.accumulo.core.util.TextUtil;
 import org.apache.hadoop.io.Text;
 
@@ -390,7 +388,7 @@
   protected static String encodeColumns(Text[] columns) {
     StringBuilder sb = new StringBuilder();
     for (int i = 0; i < columns.length; i++) {
-      sb.append(Base64.encodeBase64String(TextUtil.getBytes(columns[i])));
+      sb.append(Base64.getEncoder().encodeToString(TextUtil.getBytes(columns[i])));
       sb.append('\n');
     }
     return sb.toString();
@@ -407,14 +405,14 @@
       else
         bytes[i] = 0;
     }
-    return Base64.encodeBase64String(bytes);
+    return Base64.getEncoder().encodeToString(bytes);
   }
 
   protected static Text[] decodeColumns(String columns) {
     String[] columnStrings = columns.split("\n");
     Text[] columnTexts = new Text[columnStrings.length];
     for (int i = 0; i < columnStrings.length; i++) {
-      columnTexts[i] = new Text(Base64.decodeBase64(columnStrings[i].getBytes(UTF_8)));
+      columnTexts[i] = new Text(Base64.getDecoder().decode(columnStrings[i]));
     }
     return columnTexts;
   }
@@ -427,7 +425,7 @@
     if (flags == null)
       return null;
 
-    byte[] bytes = Base64.decodeBase64(flags.getBytes(UTF_8));
+    byte[] bytes = Base64.getDecoder().decode(flags);
     boolean[] bFlags = new boolean[bytes.length];
     for (int i = 0; i < bytes.length; i++) {
       if (bytes[i] == 1)
@@ -505,30 +503,6 @@
   }
 
   /**
-   * @deprecated since 1.6.0
-   */
-  @Deprecated
-  public void addSource(SortedKeyValueIterator<Key,Value> source, IteratorEnvironment env, Text term, boolean notFlag) {
-    // Check if we have space for the added Source
-    if (sources == null) {
-      sources = new TermSource[1];
-    } else {
-      // allocate space for node, and copy current tree.
-      // TODO: Should we change this to an ArrayList so that we can just add() ? - ACCUMULO-1309
-      TermSource[] localSources = new TermSource[sources.length + 1];
-      int currSource = 0;
-      for (TermSource myTerm : sources) {
-        // TODO: Do I need to call new here? or can I just re-use the term? - ACCUMULO-1309
-        localSources[currSource] = new TermSource(myTerm);
-        currSource++;
-      }
-      sources = localSources;
-    }
-    sources[sourcesCount] = new TermSource(source.deepCopy(env), term, notFlag);
-    sourcesCount++;
-  }
-
-  /**
    * Encode the columns to be used when iterating.
    */
   public static void setColumnFamilies(IteratorSetting cfg, Text[] columns) {
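The hunks above replace the shaded commons-codec Base64 with java.util.Base64 (Java 8+). One property the change relies on: Base64.getEncoder() never emits line separators, so the '\n' used to delimit encoded column names stays unambiguous. A hedged sketch of the round trip in the style of encodeColumns/decodeColumns — class and helper names are hypothetical, not the iterator's API:

```java
import java.util.Base64;

public class ColumnCodecDemo {
  // Join Base64-encoded column names with '\n', as encodeColumns does.
  static String encode(String[] columns) {
    StringBuilder sb = new StringBuilder();
    for (String c : columns) {
      sb.append(Base64.getEncoder().encodeToString(c.getBytes()));
      sb.append('\n');
    }
    return sb.toString();
  }

  // Split on '\n' and decode each entry, as decodeColumns does.
  static String[] decode(String encoded) {
    String[] parts = encoded.split("\n");
    String[] out = new String[parts.length];
    for (int i = 0; i < parts.length; i++)
      out[i] = new String(Base64.getDecoder().decode(parts[i]));
    return out;
  }

  public static void main(String[] args) {
    System.out.println(decode(encode(new String[] {"cf1", "cf2"}))[0]); // prints cf1
  }
}
```

Had the MIME encoder (Base64.getMimeEncoder()) been chosen instead, its 76-character line wrapping would have collided with the '\n' delimiter.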
diff --git a/core/src/main/java/org/apache/accumulo/core/replication/ReplicationTable.java b/core/src/main/java/org/apache/accumulo/core/replication/ReplicationTable.java
index 7076757..2479051 100644
--- a/core/src/main/java/org/apache/accumulo/core/replication/ReplicationTable.java
+++ b/core/src/main/java/org/apache/accumulo/core/replication/ReplicationTable.java
@@ -35,7 +35,6 @@
 import org.apache.accumulo.core.replication.ReplicationSchema.StatusSection;
 import org.apache.accumulo.core.replication.ReplicationSchema.WorkSection;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.core.util.DeprecationUtil;
 import org.apache.hadoop.io.Text;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -87,7 +86,7 @@
   }
 
   public static boolean isOnline(Connector conn) {
-    return DeprecationUtil.isMockInstance(conn.getInstance()) || TableState.ONLINE == Tables.getTableState(conn.getInstance(), ID);
+    return TableState.ONLINE == Tables.getTableState(conn.getInstance(), ID);
   }
 
   public static void setOnline(Connector conn) throws AccumuloSecurityException, AccumuloException {
diff --git a/core/src/main/java/org/apache/accumulo/core/rpc/SaslDigestCallbackHandler.java b/core/src/main/java/org/apache/accumulo/core/rpc/SaslDigestCallbackHandler.java
index 901bec1..42914eb 100644
--- a/core/src/main/java/org/apache/accumulo/core/rpc/SaslDigestCallbackHandler.java
+++ b/core/src/main/java/org/apache/accumulo/core/rpc/SaslDigestCallbackHandler.java
@@ -16,9 +16,10 @@
  */
 package org.apache.accumulo.core.rpc;
 
+import java.util.Base64;
+
 import javax.security.auth.callback.CallbackHandler;
 
-import org.apache.commons.codec.binary.Base64;
 import org.apache.hadoop.security.token.SecretManager;
 import org.apache.hadoop.security.token.SecretManager.InvalidToken;
 import org.apache.hadoop.security.token.TokenIdentifier;
@@ -36,7 +37,7 @@
    * @see #decodeIdentifier(String)
    */
   public String encodeIdentifier(byte[] identifier) {
-    return new String(Base64.encodeBase64(identifier));
+    return Base64.getEncoder().encodeToString(identifier);
   }
 
   /**
@@ -47,7 +48,7 @@
    * @see #getPassword(SecretManager, TokenIdentifier)
    */
   public char[] encodePassword(byte[] password) {
-    return new String(Base64.encodeBase64(password)).toCharArray();
+    return Base64.getEncoder().encodeToString(password).toCharArray();
   }
 
   /**
@@ -71,7 +72,7 @@
    * @see #encodeIdentifier(byte[])
    */
   public byte[] decodeIdentifier(String identifier) {
-    return Base64.decodeBase64(identifier.getBytes());
+    return Base64.getDecoder().decode(identifier);
   }
 
 }
diff --git a/core/src/main/java/org/apache/accumulo/core/rpc/TTimeoutTransport.java b/core/src/main/java/org/apache/accumulo/core/rpc/TTimeoutTransport.java
index cc3f51b..809975f 100644
--- a/core/src/main/java/org/apache/accumulo/core/rpc/TTimeoutTransport.java
+++ b/core/src/main/java/org/apache/accumulo/core/rpc/TTimeoutTransport.java
@@ -30,16 +30,31 @@
 import org.apache.hadoop.net.NetUtils;
 import org.apache.thrift.transport.TIOStreamTransport;
 import org.apache.thrift.transport.TTransport;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 import com.google.common.net.HostAndPort;
 
+/**
+ * A utility class for setting up a {@link TTransport} with various necessary configurations for ideal performance in Accumulo. These configurations include:
+ * <ul>
+ * <li>Setting SO_LINGER=false on the socket.</li>
+ * <li>Setting TCP_NO_DELAY=true on the socket.</li>
+ * <li>Setting timeouts on the I/OStreams.</li>
+ * </ul>
+ */
 public class TTimeoutTransport {
+  private static final Logger log = LoggerFactory.getLogger(TTimeoutTransport.class);
 
-  private static volatile Method GET_INPUT_STREAM_METHOD = null;
+  private static final TTimeoutTransport INSTANCE = new TTimeoutTransport();
 
-  private static Method getNetUtilsInputStreamMethod() {
+  private volatile Method GET_INPUT_STREAM_METHOD = null;
+
+  private TTimeoutTransport() {}
+
+  private Method getNetUtilsInputStreamMethod() {
     if (null == GET_INPUT_STREAM_METHOD) {
-      synchronized (TTimeoutTransport.class) {
+      synchronized (this) {
         if (null == GET_INPUT_STREAM_METHOD) {
           try {
             GET_INPUT_STREAM_METHOD = NetUtils.class.getMethod("getInputStream", Socket.class, Long.TYPE);
@@ -53,35 +68,144 @@
     return GET_INPUT_STREAM_METHOD;
   }
 
-  private static InputStream getInputStream(Socket socket, long timeout) {
+  /**
+   * Invokes the <code>NetUtils.getInputStream(Socket, long)</code> method via reflection to remain compatible with both Hadoop 1 and Hadoop 2.
+   *
+   * @param socket
+   *          The socket to create the input stream on
+   * @param timeout
+   *          The timeout for the input stream in milliseconds
+   * @return An InputStream on the socket
+   */
+  private InputStream getInputStream(Socket socket, long timeout) throws IOException {
     try {
       return (InputStream) getNetUtilsInputStreamMethod().invoke(null, socket, timeout);
     } catch (Exception e) {
-      throw new RuntimeException(e);
+      Throwable cause = e.getCause();
+      // Try to re-throw the IOException directly
+      if (cause instanceof IOException) {
+        throw (IOException) cause;
+      }
+
+      if (e instanceof RuntimeException) {
+        // Don't re-wrap another RTE around an RTE
+        throw (RuntimeException) e;
+      } else {
+        throw new RuntimeException(e);
+      }
     }
   }
 
+  /**
+   * Creates a Thrift TTransport to the given address with the given timeout. All created resources are closed if an exception is thrown.
+   *
+   * @param addr
+   *          The address to connect the client to
+   * @param timeoutMillis
+   *          The timeout in milliseconds for the connection
+   * @return A TTransport connected to the given <code>addr</code>
+   * @throws IOException
+   *           If the transport fails to be created/connected
+   */
   public static TTransport create(HostAndPort addr, long timeoutMillis) throws IOException {
-    return create(new InetSocketAddress(addr.getHostText(), addr.getPort()), timeoutMillis);
+    return INSTANCE.createInternal(new InetSocketAddress(addr.getHostText(), addr.getPort()), timeoutMillis);
   }
 
+  /**
+   * Creates a Thrift TTransport to the given address with the given timeout. All created resources are closed if an exception is thrown.
+   *
+   * @param addr
+   *          The address to connect the client to
+   * @param timeoutMillis
+   *          The timeout in milliseconds for the connection
+   * @return A TTransport connected to the given <code>addr</code>
+   * @throws IOException
+   *           If the transport fails to be created/connected
+   */
   public static TTransport create(SocketAddress addr, long timeoutMillis) throws IOException {
+    return INSTANCE.createInternal(addr, timeoutMillis);
+  }
+
+  /**
+   * Opens a socket to the given <code>addr</code>, configures the socket, and then creates a Thrift transport using the socket.
+   *
+   * @param addr
+   *          The address the socket should connect
+   * @param timeoutMillis
+   *          The socket timeout in milliseconds
+   * @return A TTransport instance to the given <code>addr</code>
+   * @throws IOException
+   *           If the Thrift client fails to be connected or created
+   */
+  protected TTransport createInternal(SocketAddress addr, long timeoutMillis) throws IOException {
     Socket socket = null;
     try {
-      socket = SelectorProvider.provider().openSocketChannel().socket();
+      socket = openSocket(addr);
+    } catch (IOException e) {
+      // openSocket handles closing the Socket on error
+      throw e;
+    }
+
+    // Should be non-null
+    assert null != socket;
+
+    // Set up the streams
+    try {
+      InputStream input = wrapInputStream(socket, timeoutMillis);
+      OutputStream output = wrapOutputStream(socket, timeoutMillis);
+      return new TIOStreamTransport(input, output);
+    } catch (IOException e) {
+      try {
+        socket.close();
+      } catch (IOException ioe) {
+        log.error("Failed to close socket after unsuccessful I/O stream setup", ioe);
+      }
+
+      throw e;
+    }
+  }
+
+  // Visible for testing
+  protected InputStream wrapInputStream(Socket socket, long timeoutMillis) throws IOException {
+    return new BufferedInputStream(getInputStream(socket, timeoutMillis), 1024 * 10);
+  }
+
+  // Visible for testing
+  protected OutputStream wrapOutputStream(Socket socket, long timeoutMillis) throws IOException {
+    return new BufferedOutputStream(NetUtils.getOutputStream(socket, timeoutMillis), 1024 * 10);
+  }
+
+  /**
+   * Opens and configures a {@link Socket} for Accumulo RPC.
+   *
+   * @param addr
+   *          The address to connect the socket to
+   * @return A socket connected to the given address
+   * @throws IOException
+   *           If the socket fails to connect or be configured
+   */
+  protected Socket openSocket(SocketAddress addr) throws IOException {
+    Socket socket = null;
+    try {
+      socket = openSocketChannel();
       socket.setSoLinger(false, 0);
       socket.setTcpNoDelay(true);
       socket.connect(addr);
-      InputStream input = new BufferedInputStream(getInputStream(socket, timeoutMillis), 1024 * 10);
-      OutputStream output = new BufferedOutputStream(NetUtils.getOutputStream(socket, timeoutMillis), 1024 * 10);
-      return new TIOStreamTransport(input, output);
+      return socket;
     } catch (IOException e) {
       try {
         if (socket != null)
           socket.close();
-      } catch (IOException ioe) {}
+      } catch (IOException ioe) {
+        log.error("Failed to close socket after unsuccessful open.", ioe);
+      }
 
       throw e;
     }
   }
+
+  /**
+   * Opens a socket channel and returns the underlying socket.
+   */
+  protected Socket openSocketChannel() throws IOException {
+    return SelectorProvider.provider().openSocketChannel().socket();
+  }
 }
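The refactored `createInternal`/`openSocket` split above boils down to: configure a TCP socket, connect it, buffer both streams at 10 KiB, and hand them to Thrift's `TIOStreamTransport`. A minimal, Thrift-free sketch of that same pattern (the loopback server and ephemeral port are stand-ins for a tablet server, not part of this patch):

```java
import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.Socket;

public class SocketSetupDemo {
    public static void main(String[] args) throws Exception {
        // Loopback server as a stand-in peer for the demo.
        try (ServerSocket server = new ServerSocket(0)) {
            Socket socket = new Socket();
            // The options openSocket() applies before connecting.
            socket.setSoLinger(false, 0);
            socket.setTcpNoDelay(true);
            socket.connect(new InetSocketAddress("127.0.0.1", server.getLocalPort()));

            // createInternal() buffers both directions at 10 KiB before wrapping
            // them in Thrift's TIOStreamTransport (omitted: no Thrift dependency here).
            InputStream input = new BufferedInputStream(socket.getInputStream(), 1024 * 10);
            OutputStream output = new BufferedOutputStream(socket.getOutputStream(), 1024 * 10);

            try (Socket peer = server.accept()) {
                output.write(42);
                output.flush();
                // Echo the byte back so both buffered streams are exercised.
                peer.getOutputStream().write(peer.getInputStream().read());
                peer.getOutputStream().flush();
                System.out.println(input.read()); // 42
            }
            socket.close();
        }
    }
}
```

Splitting socket opening from stream wrapping also makes each step overridable for tests, which is what the `// Visible for testing` hooks above exploit.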
diff --git a/core/src/main/java/org/apache/accumulo/core/security/Authorizations.java b/core/src/main/java/org/apache/accumulo/core/security/Authorizations.java
index c725d9b..55a961f 100644
--- a/core/src/main/java/org/apache/accumulo/core/security/Authorizations.java
+++ b/core/src/main/java/org/apache/accumulo/core/security/Authorizations.java
@@ -22,6 +22,7 @@
 import java.io.Serializable;
 import java.nio.ByteBuffer;
 import java.util.ArrayList;
+import java.util.Base64;
 import java.util.Collection;
 import java.util.Collections;
 import java.util.HashSet;
@@ -32,7 +33,6 @@
 
 import org.apache.accumulo.core.data.ArrayByteSequence;
 import org.apache.accumulo.core.data.ByteSequence;
-import org.apache.accumulo.core.util.Base64;
 import org.apache.accumulo.core.util.ByteBufferUtil;
 
 /**
@@ -153,7 +153,7 @@
       authsString = authsString.substring(HEADER.length());
       if (authsString.length() > 0) {
         for (String encAuth : authsString.split(",")) {
-          byte[] auth = Base64.decodeBase64(encAuth.getBytes(UTF_8));
+          byte[] auth = Base64.getDecoder().decode(encAuth);
           auths.add(new ArrayByteSequence(auth));
         }
         checkAuths();
@@ -340,7 +340,7 @@
     for (byte[] auth : authsList) {
       sb.append(sep);
       sep = ",";
-      sb.append(Base64.encodeBase64String(auth));
+      sb.append(Base64.getEncoder().encodeToString(auth));
     }
 
     return sb.toString();
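The hunks above (and the deletion of `org.apache.accumulo.core.util.Base64` further down) replace the commons-codec wrapper with the JDK's `java.util.Base64`, available since Java 8. A small sketch of the equivalence being relied on; the sample string is illustrative. One behavioral caveat: the JDK's standard decoder rejects URL-safe input, which the old `decodeBase64` accepted transparently, so this swap is safe only because `Authorizations` always encodes with the standard alphabet.

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class Base64Demo {
    public static void main(String[] args) {
        byte[] auth = "user-auth".getBytes(StandardCharsets.UTF_8); // illustrative sample

        // Old: org.apache.accumulo.core.util.Base64.encodeBase64String(auth)
        String enc = Base64.getEncoder().encodeToString(auth); // non-chunked, like the wrapper
        // Old: Base64.decodeBase64(enc.getBytes(UTF_8)); the JDK decoder takes a String
        byte[] dec = Base64.getDecoder().decode(enc);

        System.out.println(enc); // dXNlci1hdXRo
        System.out.println(new String(dec, StandardCharsets.UTF_8)); // user-auth
    }
}
```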
diff --git a/core/src/main/java/org/apache/accumulo/core/security/VisibilityConstraint.java b/core/src/main/java/org/apache/accumulo/core/security/VisibilityConstraint.java
deleted file mode 100644
index 611c8d4..0000000
--- a/core/src/main/java/org/apache/accumulo/core/security/VisibilityConstraint.java
+++ /dev/null
@@ -1,29 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.accumulo.core.security;
-
-/**
- *
- * @deprecated since 1.7.0 This is server side code not intended to exist in a public API package. This class references types that are not in the public API
- *             and therefore is not guaranteed to be stable. It was deprecated to clearly communicate this. Use
- *             {@link org.apache.accumulo.core.constraints.VisibilityConstraint} instead.
- */
-@Deprecated
-public class VisibilityConstraint extends org.apache.accumulo.core.constraints.VisibilityConstraint {
-
-}
diff --git a/core/src/main/java/org/apache/accumulo/core/security/crypto/CachingHDFSSecretKeyEncryptionStrategy.java b/core/src/main/java/org/apache/accumulo/core/security/crypto/CachingHDFSSecretKeyEncryptionStrategy.java
index 7b79d99..9be89db 100644
--- a/core/src/main/java/org/apache/accumulo/core/security/crypto/CachingHDFSSecretKeyEncryptionStrategy.java
+++ b/core/src/main/java/org/apache/accumulo/core/security/crypto/CachingHDFSSecretKeyEncryptionStrategy.java
@@ -41,6 +41,8 @@
  */
 public class CachingHDFSSecretKeyEncryptionStrategy implements SecretKeyEncryptionStrategy {
 
+  @SuppressWarnings("deprecation")
+  private static final Property INSTANCE_DFS_DIR = Property.INSTANCE_DFS_DIR;
   private static final Logger log = LoggerFactory.getLogger(CachingHDFSSecretKeyEncryptionStrategy.class);
   private SecretKeyCache secretKeyCache = new SecretKeyCache();
 
@@ -173,17 +175,16 @@
 
     }
 
-    @SuppressWarnings("deprecation")
     private String getFullPathToKey(CryptoModuleParameters params) {
       String pathToKeyName = params.getAllOptions().get(Property.CRYPTO_DEFAULT_KEY_STRATEGY_KEY_LOCATION.getKey());
-      String instanceDirectory = params.getAllOptions().get(Property.INSTANCE_DFS_DIR.getKey());
+      String instanceDirectory = params.getAllOptions().get(INSTANCE_DFS_DIR.getKey());
 
       if (pathToKeyName == null) {
         pathToKeyName = Property.CRYPTO_DEFAULT_KEY_STRATEGY_KEY_LOCATION.getDefaultValue();
       }
 
       if (instanceDirectory == null) {
-        instanceDirectory = Property.INSTANCE_DFS_DIR.getDefaultValue();
+        instanceDirectory = INSTANCE_DFS_DIR.getDefaultValue();
       }
 
       if (!pathToKeyName.startsWith("/")) {
diff --git a/core/src/main/java/org/apache/accumulo/core/security/crypto/NonCachingSecretKeyEncryptionStrategy.java b/core/src/main/java/org/apache/accumulo/core/security/crypto/NonCachingSecretKeyEncryptionStrategy.java
index 1dd8d60..f0eaa26 100644
--- a/core/src/main/java/org/apache/accumulo/core/security/crypto/NonCachingSecretKeyEncryptionStrategy.java
+++ b/core/src/main/java/org/apache/accumulo/core/security/crypto/NonCachingSecretKeyEncryptionStrategy.java
@@ -39,6 +39,10 @@
 //TODO ACCUMULO-2530 Update properties to use a URI instead of a relative path to secret key
 public class NonCachingSecretKeyEncryptionStrategy implements SecretKeyEncryptionStrategy {
 
+  @SuppressWarnings("deprecation")
+  private static final Property INSTANCE_DFS_DIR = Property.INSTANCE_DFS_DIR;
+  @SuppressWarnings("deprecation")
+  private static final Property INSTANCE_DFS_URI = Property.INSTANCE_DFS_URI;
   private static final Logger log = LoggerFactory.getLogger(NonCachingSecretKeyEncryptionStrategy.class);
 
   private void doKeyEncryptionOperation(int encryptionMode, CryptoModuleParameters params, String pathToKeyName, Path pathToKey, FileSystem fs)
@@ -121,17 +125,16 @@
     }
   }
 
-  @SuppressWarnings("deprecation")
   private String getFullPathToKey(CryptoModuleParameters params) {
     String pathToKeyName = params.getAllOptions().get(Property.CRYPTO_DEFAULT_KEY_STRATEGY_KEY_LOCATION.getKey());
-    String instanceDirectory = params.getAllOptions().get(Property.INSTANCE_DFS_DIR.getKey());
+    String instanceDirectory = params.getAllOptions().get(INSTANCE_DFS_DIR.getKey());
 
     if (pathToKeyName == null) {
       pathToKeyName = Property.CRYPTO_DEFAULT_KEY_STRATEGY_KEY_LOCATION.getDefaultValue();
     }
 
     if (instanceDirectory == null) {
-      instanceDirectory = Property.INSTANCE_DFS_DIR.getDefaultValue();
+      instanceDirectory = INSTANCE_DFS_DIR.getDefaultValue();
     }
 
     if (!pathToKeyName.startsWith("/")) {
@@ -142,12 +145,11 @@
     return fullPath;
   }
 
-  @SuppressWarnings("deprecation")
   @Override
   public CryptoModuleParameters encryptSecretKey(CryptoModuleParameters params) {
-    String hdfsURI = params.getAllOptions().get(Property.INSTANCE_DFS_URI.getKey());
+    String hdfsURI = params.getAllOptions().get(INSTANCE_DFS_URI.getKey());
     if (hdfsURI == null) {
-      hdfsURI = Property.INSTANCE_DFS_URI.getDefaultValue();
+      hdfsURI = INSTANCE_DFS_URI.getDefaultValue();
     }
 
     String fullPath = getFullPathToKey(params);
@@ -166,12 +168,11 @@
     return params;
   }
 
-  @SuppressWarnings("deprecation")
   @Override
   public CryptoModuleParameters decryptSecretKey(CryptoModuleParameters params) {
-    String hdfsURI = params.getAllOptions().get(Property.INSTANCE_DFS_URI.getKey());
+    String hdfsURI = params.getAllOptions().get(INSTANCE_DFS_URI.getKey());
     if (hdfsURI == null) {
-      hdfsURI = Property.INSTANCE_DFS_URI.getDefaultValue();
+      hdfsURI = INSTANCE_DFS_URI.getDefaultValue();
     }
 
     String pathToKeyName = getFullPathToKey(params);
diff --git a/core/src/main/java/org/apache/accumulo/core/trace/DistributedTrace.java b/core/src/main/java/org/apache/accumulo/core/trace/DistributedTrace.java
index f833b11..bb8a683 100644
--- a/core/src/main/java/org/apache/accumulo/core/trace/DistributedTrace.java
+++ b/core/src/main/java/org/apache/accumulo/core/trace/DistributedTrace.java
@@ -23,16 +23,13 @@
 
 import org.apache.accumulo.core.client.ClientConfiguration;
 import org.apache.accumulo.core.client.ClientConfiguration.ClientProperty;
-import org.apache.accumulo.core.client.Instance;
 import org.apache.accumulo.core.conf.AccumuloConfiguration;
 import org.apache.accumulo.core.conf.Property;
-import org.apache.accumulo.fate.zookeeper.ZooReader;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.util.ShutdownHookManager;
 import org.apache.htrace.HTraceConfiguration;
 import org.apache.htrace.SpanReceiver;
 import org.apache.htrace.SpanReceiverBuilder;
-import org.apache.zookeeper.KeeperException;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -55,14 +52,6 @@
   private static final HashSet<SpanReceiver> receivers = new HashSet<>();
 
   /**
-   * @deprecated since 1.7, use {@link DistributedTrace#enable(String, String, org.apache.accumulo.core.client.ClientConfiguration)} instead
-   */
-  @Deprecated
-  public static void enable(Instance instance, ZooReader zoo, String application, String address) throws IOException, KeeperException, InterruptedException {
-    enable(address, application);
-  }
-
-  /**
    * Enable tracing by setting up SpanReceivers for the current process.
    */
   public static void enable() {
diff --git a/core/src/main/java/org/apache/accumulo/core/trace/Trace.java b/core/src/main/java/org/apache/accumulo/core/trace/Trace.java
index 3ebd031..35227c5 100644
--- a/core/src/main/java/org/apache/accumulo/core/trace/Trace.java
+++ b/core/src/main/java/org/apache/accumulo/core/trace/Trace.java
@@ -56,14 +56,6 @@
   }
 
   /**
-   * @deprecated since 1.7, use {@link #off()} instead
-   */
-  @Deprecated
-  public static void offNoFlush() {
-    off();
-  }
-
-  /**
    * Returns whether tracing is currently on.
    */
   public static boolean isTracing() {
@@ -71,16 +63,6 @@
   }
 
   /**
-   * Return the current span.
-   *
-   * @deprecated since 1.7 -- it is better to save the span you create in a local variable and call its methods, rather than retrieving the current span
-   */
-  @Deprecated
-  public static Span currentTrace() {
-    return new Span(org.apache.htrace.Trace.currentSpan());
-  }
-
-  /**
    * Get the trace id of the current span.
    */
   public static long currentTraceId() {
diff --git a/core/src/main/java/org/apache/accumulo/core/util/Base64.java b/core/src/main/java/org/apache/accumulo/core/util/Base64.java
deleted file mode 100644
index ce54aae..0000000
--- a/core/src/main/java/org/apache/accumulo/core/util/Base64.java
+++ /dev/null
@@ -1,73 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.core.util;
-
-import org.apache.commons.codec.binary.StringUtils;
-
-/**
- * A wrapper around commons-codec's Base64 to make sure we get the non-chunked behavior that became the default in commons-codec version 1.5+ while relying on
- * the commons-codec version 1.4 that Hadoop Client provides.
- */
-public final class Base64 {
-
-  /**
-   * Private to prevent instantiation.
-   */
-  private Base64() {}
-
-  /**
-   * Serialize to Base64 byte array, non-chunked.
-   */
-  public static byte[] encodeBase64(byte[] data) {
-    return org.apache.commons.codec.binary.Base64.encodeBase64(data, false);
-  }
-
-  /**
-   * Serialize to Base64 as a String, non-chunked.
-   */
-  public static String encodeBase64String(byte[] data) {
-    /* Based on implementation of this same name function in commons-codec 1.5+. in commons-codec 1.4, the second param sets chunking to true. */
-    return StringUtils.newStringUtf8(org.apache.commons.codec.binary.Base64.encodeBase64(data, false));
-  }
-
-  /**
-   * Serialize to Base64 as a String using the URLSafe alphabet, non-chunked.
-   *
-   * The URLSafe alphabet uses - instead of + and _ instead of /.
-   */
-  public static String encodeBase64URLSafeString(byte[] data) {
-    return org.apache.commons.codec.binary.Base64.encodeBase64URLSafeString(data);
-  }
-
-  /**
-   * Decode, presuming bytes are base64.
-   *
-   * Transparently handles either the standard alphabet or the URL Safe one.
-   */
-  public static byte[] decodeBase64(byte[] base64) {
-    return org.apache.commons.codec.binary.Base64.decodeBase64(base64);
-  }
-
-  /**
-   * Decode, presuming String is base64.
-   *
-   * Transparently handles either the standard alphabet or the URL Safe one.
-   */
-  public static byte[] decodeBase64(String base64String) {
-    return org.apache.commons.codec.binary.Base64.decodeBase64(base64String);
-  }
-}
diff --git a/core/src/main/java/org/apache/accumulo/core/util/CreateToken.java b/core/src/main/java/org/apache/accumulo/core/util/CreateToken.java
index 452c411..b63cdfd 100644
--- a/core/src/main/java/org/apache/accumulo/core/util/CreateToken.java
+++ b/core/src/main/java/org/apache/accumulo/core/util/CreateToken.java
@@ -22,8 +22,7 @@
 import java.io.FileOutputStream;
 import java.io.IOException;
 import java.io.PrintStream;
-
-import jline.console.ConsoleReader;
+import java.util.Base64;
 
 import org.apache.accumulo.core.cli.ClientOpts.Password;
 import org.apache.accumulo.core.cli.ClientOpts.PasswordConverter;
@@ -38,6 +37,8 @@
 import com.beust.jcommander.Parameter;
 import com.google.auto.service.AutoService;
 
+import jline.console.ConsoleReader;
+
 @AutoService(KeywordExecutable.class)
 public class CreateToken implements KeywordExecutable {
 
@@ -108,7 +109,7 @@
         props.put(tp.getKey(), input);
         token.init(props);
       }
-      String tokenBase64 = Base64.encodeBase64String(AuthenticationTokenSerializer.serialize(token));
+      String tokenBase64 = Base64.getEncoder().encodeToString(AuthenticationTokenSerializer.serialize(token));
 
       String tokenFile = opts.tokenFile;
       if (tokenFile == null) {
diff --git a/core/src/main/java/org/apache/accumulo/core/util/DeprecationUtil.java b/core/src/main/java/org/apache/accumulo/core/util/DeprecationUtil.java
deleted file mode 100644
index cd798bb..0000000
--- a/core/src/main/java/org/apache/accumulo/core/util/DeprecationUtil.java
+++ /dev/null
@@ -1,61 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.core.util;
-
-import org.apache.accumulo.core.client.Instance;
-import org.apache.accumulo.core.client.impl.TabletLocator;
-import org.apache.accumulo.core.client.mapreduce.RangeInputSplit;
-
-/**
- * A utility class for managing deprecated items. This avoids scattering private helper methods all over the code with warnings suppression.
- *
- * <p>
- * This class will never be public API and methods will be removed as soon as they are no longer needed. No methods in this class will, themselves, be
- * deprecated, because that would propagate the deprecation warning we are trying to avoid.
- *
- * <p>
- * This class should not be used as a substitute for deprecated classes. It should <b>only</b> be used for implementation code which must remain to support the
- * deprecated features, and <b>only</b> until that feature is removed.
- */
-public class DeprecationUtil {
-
-  @SuppressWarnings("deprecation")
-  public static boolean isMockInstance(Instance instance) {
-    return instance instanceof org.apache.accumulo.core.client.mock.MockInstance;
-  }
-
-  @SuppressWarnings("deprecation")
-  public static Instance makeMockInstance(String instance) {
-    return new org.apache.accumulo.core.client.mock.MockInstance(instance);
-  }
-
-  @SuppressWarnings("deprecation")
-  public static void setMockInstance(RangeInputSplit split, boolean isMockInstance) {
-    split.setMockInstance(isMockInstance);
-  }
-
-  @SuppressWarnings("deprecation")
-  public static boolean isMockInstanceSet(RangeInputSplit split) {
-    return split.isMockInstance();
-  }
-
-  @SuppressWarnings("deprecation")
-  public static TabletLocator makeMockLocator() {
-    return new org.apache.accumulo.core.client.mock.impl.MockTabletLocator();
-  }
-
-}
diff --git a/core/src/main/java/org/apache/accumulo/core/util/Encoding.java b/core/src/main/java/org/apache/accumulo/core/util/Encoding.java
index 524f377..c326136 100644
--- a/core/src/main/java/org/apache/accumulo/core/util/Encoding.java
+++ b/core/src/main/java/org/apache/accumulo/core/util/Encoding.java
@@ -16,14 +16,14 @@
  */
 package org.apache.accumulo.core.util;
 
-import static java.nio.charset.StandardCharsets.UTF_8;
+import java.util.Base64;
 
 import org.apache.hadoop.io.Text;
 
 public class Encoding {
 
   public static String encodeAsBase64FileName(Text data) {
-    String encodedRow = Base64.encodeBase64URLSafeString(TextUtil.getBytes(data));
+    String encodedRow = Base64.getUrlEncoder().encodeToString(TextUtil.getBytes(data));
 
     int index = encodedRow.length() - 1;
     while (index >= 0 && encodedRow.charAt(index) == '=')
@@ -34,10 +34,7 @@
   }
 
   public static byte[] decodeBase64FileName(String node) {
-    while (node.length() % 4 != 0)
-      node += "=";
-    /* decode transparently handles URLSafe encodings */
-    return Base64.decodeBase64(node.getBytes(UTF_8));
+    return Base64.getUrlDecoder().decode(node);
   }
 
 }
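Two padding subtleties make the `Encoding` hunk above work: `getUrlEncoder()` still appends `=` padding (unlike the removed `encodeBase64URLSafeString`), which is why `encodeAsBase64FileName` keeps its trailing-`=` strip loop; and `getUrlDecoder()` accepts unpadded input, which is why the loop that re-appended `=` before decoding could be deleted. A sketch (the sample byte is chosen to hit the URL-safe `-` character):

```java
import java.util.Base64;

public class UrlSafeBase64Demo {
    public static void main(String[] args) {
        byte[] data = new byte[] {(byte) 0xfb}; // top 6 bits = 62, i.e. '-' in the URL-safe alphabet

        // getUrlEncoder() still emits '=' padding...
        String padded = Base64.getUrlEncoder().encodeToString(data);                    // "-w=="
        // ...though withoutPadding() would also do what the strip loop does.
        String stripped = Base64.getUrlEncoder().withoutPadding().encodeToString(data); // "-w"

        // getUrlDecoder() tolerates missing padding, so no re-padding is needed.
        byte[] decoded = Base64.getUrlDecoder().decode(stripped);

        System.out.println(padded + " " + stripped + " " + (decoded[0] == data[0]));
    }
}
```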
diff --git a/core/src/main/java/org/apache/accumulo/core/util/Pair.java b/core/src/main/java/org/apache/accumulo/core/util/Pair.java
index 2d51bcd..37fb04f 100644
--- a/core/src/main/java/org/apache/accumulo/core/util/Pair.java
+++ b/core/src/main/java/org/apache/accumulo/core/util/Pair.java
@@ -81,7 +81,7 @@
   }
 
   public static <K2,V2,K1 extends K2,V1 extends V2> Pair<K2,V2> fromEntry(Entry<K1,V1> entry) {
-    return new Pair<K2,V2>(entry.getKey(), entry.getValue());
+    return new Pair<>(entry.getKey(), entry.getValue());
   }
 
 }
diff --git a/core/src/main/java/org/apache/accumulo/core/util/Validator.java b/core/src/main/java/org/apache/accumulo/core/util/Validator.java
index c1e3c80..06b0e2b 100644
--- a/core/src/main/java/org/apache/accumulo/core/util/Validator.java
+++ b/core/src/main/java/org/apache/accumulo/core/util/Validator.java
@@ -16,10 +16,10 @@
  */
 package org.apache.accumulo.core.util;
 
-import com.google.common.base.Predicate;
+import java.util.function.Predicate;
 
 /**
- * A class that validates arguments of a particular type. Implementations must implement {@link #apply(Object)} and should override
+ * A class that validates arguments of a particular type. Implementations must implement {@link #test(Object)} and should override
  * {@link #invalidMessage(Object)}.
  */
 public abstract class Validator<T> implements Predicate<T> {
@@ -34,7 +34,7 @@
    *           if validation fails
    */
   public final T validate(final T argument) {
-    if (!apply(argument))
+    if (!test(argument))
       throw new IllegalArgumentException(invalidMessage(argument));
     return argument;
   }
@@ -65,13 +65,13 @@
     return new Validator<T>() {
 
       @Override
-      public boolean apply(T argument) {
-        return mine.apply(argument) && other.apply(argument);
+      public boolean test(T argument) {
+        return mine.test(argument) && other.test(argument);
       }
 
       @Override
       public String invalidMessage(T argument) {
-        return (mine.apply(argument) ? other : mine).invalidMessage(argument);
+        return (mine.test(argument) ? other : mine).invalidMessage(argument);
       }
 
     };
@@ -92,8 +92,8 @@
     return new Validator<T>() {
 
       @Override
-      public boolean apply(T argument) {
-        return mine.apply(argument) || other.apply(argument);
+      public boolean test(T argument) {
+        return mine.test(argument) || other.test(argument);
       }
 
       @Override
@@ -114,8 +114,8 @@
     return new Validator<T>() {
 
       @Override
-      public boolean apply(T argument) {
-        return !mine.apply(argument);
+      public boolean test(T argument) {
+        return !mine.test(argument);
       }
 
       @Override
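The `Validator` hunks above migrate from Guava's `Predicate` (contract method `apply`) to `java.util.function.Predicate` (contract method `test`). The JDK interface also ships `and`/`or`/`negate` default methods that mirror the hand-built `Validator.and`/`or`/`not` combinators. A sketch with hypothetical validators:

```java
import java.util.function.Predicate;

public class ValidatorPredicateDemo {
    public static void main(String[] args) {
        // Hypothetical stand-ins; Validator<T> now implements Predicate<T>.
        Predicate<String> nonEmpty = s -> !s.isEmpty();
        Predicate<String> shortEnough = s -> s.length() <= 8;

        Predicate<String> and = nonEmpty.and(shortEnough); // cf. Validator.and
        Predicate<String> or = nonEmpty.or(shortEnough);   // cf. Validator.or
        Predicate<String> not = nonEmpty.negate();         // cf. Validator.not

        System.out.println(and.test("table1")); // true
        System.out.println(and.test(""));       // false
        System.out.println(not.test(""));       // true
        System.out.println(or.test(""));        // true: "" is within 8 chars
    }
}
```

`Validator` keeps its own combinators rather than the defaults because it must also compose `invalidMessage`, which plain `Predicate` knows nothing about.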
diff --git a/core/src/main/java/org/apache/accumulo/core/util/format/BinaryFormatter.java b/core/src/main/java/org/apache/accumulo/core/util/format/BinaryFormatter.java
deleted file mode 100644
index f5cbe39..0000000
--- a/core/src/main/java/org/apache/accumulo/core/util/format/BinaryFormatter.java
+++ /dev/null
@@ -1,66 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.core.util.format;
-
-import java.util.Map.Entry;
-import org.apache.accumulo.core.data.Key;
-import org.apache.accumulo.core.data.Value;
-import org.apache.accumulo.core.security.ColumnVisibility;
-
-/**
- * @deprecated Use {@link DefaultFormatter} providing showLength and printTimestamps via {@link FormatterConfig}.
- */
-@Deprecated
-public class BinaryFormatter extends DefaultFormatter {
-  // this class can probably be replaced by DefaultFormatter since DefaultFormatter has the max length stuff
-  @Override
-  public String next() {
-    checkState(true);
-    return formatEntry(getScannerIterator().next(), config.willPrintTimestamps(), config.getShownLength());
-  }
-
-  public static String formatEntry(Entry<Key,Value> entry, boolean printTimestamps, int shownLength) {
-    StringBuilder sb = new StringBuilder();
-
-    Key key = entry.getKey();
-
-    // append row
-    appendText(sb, key.getRow(), shownLength).append(" ");
-
-    // append column family
-    appendText(sb, key.getColumnFamily(), shownLength).append(":");
-
-    // append column qualifier
-    appendText(sb, key.getColumnQualifier(), shownLength).append(" ");
-
-    // append visibility expression
-    sb.append(new ColumnVisibility(key.getColumnVisibility()));
-
-    // append timestamp
-    if (printTimestamps)
-      sb.append(" ").append(entry.getKey().getTimestamp());
-
-    // append value
-    Value value = entry.getValue();
-    if (value != null && value.getSize() > 0) {
-      sb.append("\t");
-      appendValue(sb, value, shownLength);
-    }
-    return sb.toString();
-  }
-
-}
diff --git a/core/src/main/java/org/apache/accumulo/core/util/format/DateFormatSupplier.java b/core/src/main/java/org/apache/accumulo/core/util/format/DateFormatSupplier.java
index 9cf50e0..efe0190 100644
--- a/core/src/main/java/org/apache/accumulo/core/util/format/DateFormatSupplier.java
+++ b/core/src/main/java/org/apache/accumulo/core/util/format/DateFormatSupplier.java
@@ -19,8 +19,7 @@
 import java.text.DateFormat;
 import java.text.SimpleDateFormat;
 import java.util.TimeZone;
-
-import com.google.common.base.Supplier;
+import java.util.function.Supplier;
 
 /**
  * DateFormatSupplier is a {@code ThreadLocal<DateFormat>} that will set the correct TimeZone when the object is retrieved by {@link #get()}.
diff --git a/core/src/main/java/org/apache/accumulo/core/util/format/DateStringFormatter.java b/core/src/main/java/org/apache/accumulo/core/util/format/DateStringFormatter.java
deleted file mode 100644
index 63bd536..0000000
--- a/core/src/main/java/org/apache/accumulo/core/util/format/DateStringFormatter.java
+++ /dev/null
@@ -1,75 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.core.util.format;
-
-import java.util.Map.Entry;
-import java.util.TimeZone;
-import org.apache.accumulo.core.data.Key;
-import org.apache.accumulo.core.data.Value;
-
-/**
- * This class is <strong>not</strong> recommended because {@link #initialize(Iterable, FormatterConfig)} replaces parameters in {@link FormatterConfig}, which
- * could surprise users.
- *
- * This class can be replaced by {@link DefaultFormatter} where FormatterConfig is initialized with a DateFormat set to {@link #DATE_FORMAT}. See
- * {@link DateFormatSupplier#createSimpleFormatSupplier(String, java.util.TimeZone)}.
- *
- * <pre>
- * final DateFormatSupplier dfSupplier = DateFormatSupplier.createSimpleFormatSupplier(DateFormatSupplier.HUMAN_READABLE_FORMAT, TimeZone.getTimeZone(&quot;UTC&quot;));
- * final FormatterConfig config = new FormatterConfig().setPrintTimestamps(true).setDateFormatSupplier(dfSupplier);
- * </pre>
- */
-@Deprecated
-public class DateStringFormatter implements Formatter {
-
-  private DefaultFormatter defaultFormatter;
-  private TimeZone timeZone;
-
-  public static final String DATE_FORMAT = DateFormatSupplier.HUMAN_READABLE_FORMAT;
-
-  public DateStringFormatter() {
-    this(TimeZone.getDefault());
-  }
-
-  public DateStringFormatter(TimeZone timeZone) {
-    this.defaultFormatter = new DefaultFormatter();
-    this.timeZone = timeZone;
-  }
-
-  @Override
-  public void initialize(Iterable<Entry<Key,Value>> scanner, FormatterConfig config) {
-    FormatterConfig newConfig = new FormatterConfig(config);
-    newConfig.setDateFormatSupplier(DateFormatSupplier.createSimpleFormatSupplier(DATE_FORMAT, timeZone));
-    defaultFormatter.initialize(scanner, newConfig);
-  }
-
-  @Override
-  public boolean hasNext() {
-    return defaultFormatter.hasNext();
-  }
-
-  @Override
-  public String next() {
-    return defaultFormatter.next();
-  }
-
-  @Override
-  public void remove() {
-    defaultFormatter.remove();
-  }
-
-}
diff --git a/core/src/main/java/org/apache/accumulo/core/util/format/FormatterConfig.java b/core/src/main/java/org/apache/accumulo/core/util/format/FormatterConfig.java
index 0cd5139..65bbf8e 100644
--- a/core/src/main/java/org/apache/accumulo/core/util/format/FormatterConfig.java
+++ b/core/src/main/java/org/apache/accumulo/core/util/format/FormatterConfig.java
@@ -23,8 +23,7 @@
 import java.text.ParsePosition;
 import java.text.SimpleDateFormat;
 import java.util.Date;
-
-import com.google.common.base.Supplier;
+import java.util.function.Supplier;
 
 /**
  * Holds configuration settings for a {@link Formatter}
diff --git a/core/src/main/java/org/apache/accumulo/core/volume/VolumeConfiguration.java b/core/src/main/java/org/apache/accumulo/core/volume/VolumeConfiguration.java
index 573978d..31cf53c 100644
--- a/core/src/main/java/org/apache/accumulo/core/volume/VolumeConfiguration.java
+++ b/core/src/main/java/org/apache/accumulo/core/volume/VolumeConfiguration.java
@@ -31,6 +31,11 @@
 
 public class VolumeConfiguration {
 
+  @SuppressWarnings("deprecation")
+  private static final Property INSTANCE_DFS_DIR = Property.INSTANCE_DFS_DIR;
+  @SuppressWarnings("deprecation")
+  private static final Property INSTANCE_DFS_URI = Property.INSTANCE_DFS_URI;
+
   public static Volume getVolume(String path, Configuration conf, AccumuloConfiguration acuconf) throws IOException {
     requireNonNull(path);
 
@@ -44,8 +49,7 @@
   }
 
   public static Volume getDefaultVolume(Configuration conf, AccumuloConfiguration acuconf) throws IOException {
-    @SuppressWarnings("deprecation")
-    String uri = acuconf.get(Property.INSTANCE_DFS_URI);
+    String uri = acuconf.get(INSTANCE_DFS_URI);
 
     // By default pull from INSTANCE_DFS_URI, falling back to the Hadoop defined
     // default filesystem (fs.defaultFS or the deprecated fs.default.name)
@@ -60,12 +64,14 @@
   }
 
   /**
-   * @see org.apache.accumulo.core.volume.VolumeConfiguration#getVolumeUris(AccumuloConfiguration,Configuration)
+   * This method gets the old configured base directory, using the URI and DIR properties. It will no longer be needed once we stop supporting upgrades from
+   * non-volume based Accumulo configurations.
+   *
+   * @see #getVolumeUris(AccumuloConfiguration,Configuration)
    */
-  @Deprecated
-  public static String getConfiguredBaseDir(AccumuloConfiguration conf, Configuration hadoopConfig) {
-    String singleNamespace = conf.get(Property.INSTANCE_DFS_DIR);
-    String dfsUri = conf.get(Property.INSTANCE_DFS_URI);
+  private static String getConfiguredBaseDir(AccumuloConfiguration conf, Configuration hadoopConfig) {
+    String singleNamespace = conf.get(INSTANCE_DFS_DIR);
+    String dfsUri = conf.get(INSTANCE_DFS_URI);
     String baseDir;
 
     if (dfsUri == null || dfsUri.isEmpty()) {
@@ -76,7 +82,7 @@
       }
     } else {
       if (!dfsUri.contains(":"))
-        throw new IllegalArgumentException("Expected fully qualified URI for " + Property.INSTANCE_DFS_URI.getKey() + " got " + dfsUri);
+        throw new IllegalArgumentException("Expected fully qualified URI for " + INSTANCE_DFS_URI.getKey() + " got " + dfsUri);
       baseDir = dfsUri + singleNamespace;
     }
     return baseDir;
@@ -140,10 +146,9 @@
    *          A FileSystem to write to
    * @return A Volume instance writing to the given FileSystem in the default path
    */
-  @SuppressWarnings("deprecation")
   public static <T extends FileSystem> Volume create(T fs, AccumuloConfiguration acuconf) {
-    String dfsDir = acuconf.get(Property.INSTANCE_DFS_DIR);
-    return new VolumeImpl(fs, null == dfsDir ? Property.INSTANCE_DFS_DIR.getDefaultValue() : dfsDir);
+    String dfsDir = acuconf.get(INSTANCE_DFS_DIR);
+    return new VolumeImpl(fs, null == dfsDir ? INSTANCE_DFS_DIR.getDefaultValue() : dfsDir);
   }
 
   public static <T extends FileSystem> Volume create(T fs, String basePath) {
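The VolumeConfiguration hunks above replace a per-use `@SuppressWarnings("deprecation")` with a single annotated alias field that every caller references. A minimal sketch of that pattern, using a hypothetical `Legacy` holder in place of Accumulo's `Property` enum:

```java
public class DeprecationAliasDemo {
  // Hypothetical stand-in for a deprecated configuration constant.
  static class Legacy {
    @Deprecated
    static final String INSTANCE_DFS_DIR = "/accumulo";
  }

  // One annotated alias confines the deprecation warning to a single
  // declaration; all later uses reference the alias and compile cleanly.
  @SuppressWarnings("deprecation")
  private static final String DFS_DIR = Legacy.INSTANCE_DFS_DIR;

  static String baseDir() {
    return DFS_DIR; // no @SuppressWarnings needed at the use site
  }

  public static void main(String[] args) {
    System.out.println(baseDir());
  }
}
```

This keeps the suppression scope as narrow as possible, which is why the method-level annotations in `getDefaultVolume` and `create` could be dropped.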
diff --git a/core/src/test/java/org/apache/accumulo/core/cli/TestClientOpts.java b/core/src/test/java/org/apache/accumulo/core/cli/TestClientOpts.java
index 65df5c9..638f152 100644
--- a/core/src/test/java/org/apache/accumulo/core/cli/TestClientOpts.java
+++ b/core/src/test/java/org/apache/accumulo/core/cli/TestClientOpts.java
@@ -50,6 +50,10 @@
 import com.beust.jcommander.JCommander;
 
 public class TestClientOpts {
+  @SuppressWarnings("deprecation")
+  private static final Property INSTANCE_DFS_DIR = Property.INSTANCE_DFS_DIR;
+  @SuppressWarnings("deprecation")
+  private static final Property INSTANCE_DFS_URI = Property.INSTANCE_DFS_URI;
 
   @Rule
   public TemporaryFolder tmpDir = new TemporaryFolder(new File(System.getProperty("user.dir") + "/target"));
@@ -133,7 +137,6 @@
     args.getInstance();
   }
 
-  @SuppressWarnings("deprecation")
   @Test
   public void testInstanceDir() throws IOException {
     File instanceId = tmpDir.newFolder("instance_id");
@@ -146,9 +149,8 @@
     FileWriter fileWriter = new FileWriter(siteXml);
     fileWriter.append("<configuration>\n");
 
-    fileWriter
-        .append("<property><name>" + Property.INSTANCE_DFS_DIR.getKey() + "</name><value>" + tmpDir.getRoot().getAbsolutePath() + "</value></property>\n");
-    fileWriter.append("<property><name>" + Property.INSTANCE_DFS_URI.getKey() + "</name><value>file://</value></property>\n");
+    fileWriter.append("<property><name>" + INSTANCE_DFS_DIR.getKey() + "</name><value>" + tmpDir.getRoot().getAbsolutePath() + "</value></property>\n");
+    fileWriter.append("<property><name>" + INSTANCE_DFS_URI.getKey() + "</name><value>file://</value></property>\n");
     fileWriter.append("<property><name>" + ClientProperty.INSTANCE_NAME + "</name><value>foo</value></property>\n");
 
     fileWriter.append("</configuration>\n");
diff --git a/core/src/test/java/org/apache/accumulo/core/client/TestThrift1474.java b/core/src/test/java/org/apache/accumulo/core/client/TestThrift1474.java
index 845439e..621e7f2 100644
--- a/core/src/test/java/org/apache/accumulo/core/client/TestThrift1474.java
+++ b/core/src/test/java/org/apache/accumulo/core/client/TestThrift1474.java
@@ -64,7 +64,7 @@
     serverTransport.listen();
     int port = serverTransport.getServerSocket().getLocalPort();
     TestServer handler = new TestServer();
-    ThriftTest.Processor<ThriftTest.Iface> processor = new ThriftTest.Processor<ThriftTest.Iface>(handler);
+    ThriftTest.Processor<ThriftTest.Iface> processor = new ThriftTest.Processor<>(handler);
 
     TThreadPoolServer.Args args = new TThreadPoolServer.Args(serverTransport);
     args.stopTimeoutVal = 10;
diff --git a/core/src/test/java/org/apache/accumulo/core/client/impl/ClientContextTest.java b/core/src/test/java/org/apache/accumulo/core/client/impl/ClientContextTest.java
index 4eb348e..ab46c6c 100644
--- a/core/src/test/java/org/apache/accumulo/core/client/impl/ClientContextTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/client/impl/ClientContextTest.java
@@ -29,9 +29,6 @@
 import org.junit.BeforeClass;
 import org.junit.Test;
 
-import com.google.common.base.Predicate;
-import com.google.common.base.Predicates;
-
 public class ClientContextTest {
 
   private static boolean isCredentialProviderAvailable = false;
@@ -100,8 +97,7 @@
 
     AccumuloConfiguration accClientConf = ClientContext.convertClientConfig(clientConf);
     Map<String,String> props = new HashMap<>();
-    Predicate<String> all = Predicates.alwaysTrue();
-    accClientConf.getProperties(props, all);
+    accClientConf.getProperties(props, x -> true);
 
     // Only sensitive properties are added
     Assert.assertEquals(Property.GENERAL_RPC_TIMEOUT.getDefaultValue(), props.get(Property.GENERAL_RPC_TIMEOUT.getKey()));
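The ClientContextTest hunk swaps Guava's `Predicates.alwaysTrue()` for a plain lambda, since `AccumuloConfiguration.getProperties` now takes `java.util.function.Predicate`. A self-contained sketch of the same filtering idea, with a hypothetical `getProperties` helper standing in for the Accumulo API:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Predicate;

public class PredicateDemo {
  // Hypothetical stand-in for AccumuloConfiguration.getProperties:
  // copies entries whose keys pass the filter into the sink map.
  static void getProperties(Map<String,String> source, Map<String,String> sink,
      Predicate<String> filter) {
    for (Map.Entry<String,String> e : source.entrySet()) {
      if (filter.test(e.getKey()))
        sink.put(e.getKey(), e.getValue());
    }
  }

  public static void main(String[] args) {
    Map<String,String> src = new HashMap<>();
    src.put("general.rpc.timeout", "120s");
    src.put("instance.secret", "DEFAULT");

    Map<String,String> all = new HashMap<>();
    getProperties(src, all, x -> true); // replaces Predicates.alwaysTrue()

    Map<String,String> secrets = new HashMap<>();
    getProperties(src, secrets, k -> k.contains("secret"));

    System.out.println(all.size() + " " + secrets.size()); // 2 1
  }
}
```

Dropping the two `com.google.common.base` imports removes a Guava dependency from the test with no behavior change.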
diff --git a/core/src/test/java/org/apache/accumulo/core/client/impl/TableOperationsHelperTest.java b/core/src/test/java/org/apache/accumulo/core/client/impl/TableOperationsHelperTest.java
index 1d699c2..fff03fb 100644
--- a/core/src/test/java/org/apache/accumulo/core/client/impl/TableOperationsHelperTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/client/impl/TableOperationsHelperTest.java
@@ -37,7 +37,6 @@
 import org.apache.accumulo.core.client.admin.DiskUsage;
 import org.apache.accumulo.core.client.admin.Locations;
 import org.apache.accumulo.core.client.admin.NewTableConfiguration;
-import org.apache.accumulo.core.client.admin.TimeType;
 import org.apache.accumulo.core.client.sample.SamplerConfiguration;
 import org.apache.accumulo.core.data.Range;
 import org.apache.accumulo.core.iterators.IteratorUtil.IteratorScope;
@@ -65,33 +64,11 @@
     public void create(String tableName) throws AccumuloException, AccumuloSecurityException, TableExistsException {}
 
     @Override
-    @Deprecated
-    public void create(String tableName, boolean limitVersion) throws AccumuloException, AccumuloSecurityException, TableExistsException {
-      create(tableName, limitVersion, TimeType.MILLIS);
-    }
-
-    @Override
-    @Deprecated
-    public void create(String tableName, boolean versioningIter, TimeType timeType) throws AccumuloException, AccumuloSecurityException, TableExistsException {}
-
-    @Override
     public void create(String tableName, NewTableConfiguration ntc) throws AccumuloException, AccumuloSecurityException, TableExistsException {}
 
     @Override
     public void addSplits(String tableName, SortedSet<Text> partitionKeys) throws TableNotFoundException, AccumuloException, AccumuloSecurityException {}
 
-    @Deprecated
-    @Override
-    public Collection<Text> getSplits(String tableName) throws TableNotFoundException {
-      return null;
-    }
-
-    @Deprecated
-    @Override
-    public Collection<Text> getSplits(String tableName, int maxSplits) throws TableNotFoundException {
-      return null;
-    }
-
     @Override
     public Collection<Text> listSplits(String tableName) throws TableNotFoundException {
       return null;
@@ -138,10 +115,6 @@
     public void rename(String oldTableName, String newTableName) throws AccumuloSecurityException, TableNotFoundException, AccumuloException,
         TableExistsException {}
 
-    @Deprecated
-    @Override
-    public void flush(String tableName) throws AccumuloException, AccumuloSecurityException {}
-
     @Override
     public void flush(String tableName, Text start, Text end, boolean wait) throws AccumuloException, AccumuloSecurityException, TableNotFoundException {}
 
diff --git a/core/src/test/java/org/apache/accumulo/core/client/impl/TabletLocatorImplTest.java b/core/src/test/java/org/apache/accumulo/core/client/impl/TabletLocatorImplTest.java
index bab52f6..d7988ff 100644
--- a/core/src/test/java/org/apache/accumulo/core/client/impl/TabletLocatorImplTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/client/impl/TabletLocatorImplTest.java
@@ -21,7 +21,6 @@
 import static org.junit.Assert.assertNull;
 import static org.junit.Assert.assertTrue;
 
-import java.nio.ByteBuffer;
 import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.Collections;
@@ -45,7 +44,6 @@
 import org.apache.accumulo.core.client.impl.TabletLocatorImpl.TabletLocationObtainer;
 import org.apache.accumulo.core.client.impl.TabletLocatorImpl.TabletServerLockChecker;
 import org.apache.accumulo.core.client.security.tokens.AuthenticationToken;
-import org.apache.accumulo.core.conf.AccumuloConfiguration;
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.Mutation;
 import org.apache.accumulo.core.data.PartialKey;
@@ -85,7 +83,6 @@
     return objs;
   }
 
-  @SuppressWarnings("unchecked")
   static Map<String,Map<KeyExtent,List<Range>>> createExpectedBinnings(Object... data) {
 
     Map<String,Map<KeyExtent,List<Range>>> expBinnedRanges = new HashMap<>();
@@ -100,6 +97,7 @@
 
       for (int j = 0; j < binData.length; j += 2) {
         KeyExtent ke = (KeyExtent) binData[j];
+        @SuppressWarnings("unchecked")
         List<Range> ranges = (List<Range>) binData[j + 1];
 
         binnedKE.put(ke, ranges);
@@ -440,36 +438,6 @@
     }
 
     @Override
-    @Deprecated
-    public Connector getConnector(String user, byte[] pass) throws AccumuloException, AccumuloSecurityException {
-      throw new UnsupportedOperationException();
-    }
-
-    @Override
-    @Deprecated
-    public Connector getConnector(String user, ByteBuffer pass) throws AccumuloException, AccumuloSecurityException {
-      throw new UnsupportedOperationException();
-    }
-
-    @Deprecated
-    @Override
-    public AccumuloConfiguration getConfiguration() {
-      throw new UnsupportedOperationException();
-    }
-
-    @Override
-    @Deprecated
-    public void setConfiguration(AccumuloConfiguration conf) {
-      throw new UnsupportedOperationException();
-    }
-
-    @Override
-    @Deprecated
-    public Connector getConnector(String user, CharSequence pass) throws AccumuloException, AccumuloSecurityException {
-      throw new UnsupportedOperationException();
-    }
-
-    @Override
     public Connector getConnector(String principal, AuthenticationToken token) throws AccumuloException, AccumuloSecurityException {
       throw new UnsupportedOperationException();
     }
diff --git a/core/src/test/java/org/apache/accumulo/core/client/mapred/AccumuloInputFormatTest.java b/core/src/test/java/org/apache/accumulo/core/client/mapred/AccumuloInputFormatTest.java
index cb28958..3db8149 100644
--- a/core/src/test/java/org/apache/accumulo/core/client/mapred/AccumuloInputFormatTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/client/mapred/AccumuloInputFormatTest.java
@@ -22,12 +22,12 @@
 import java.io.ByteArrayOutputStream;
 import java.io.DataOutputStream;
 import java.io.IOException;
+import java.util.Base64;
 import java.util.List;
 
 import org.apache.accumulo.core.client.IteratorSetting;
 import org.apache.accumulo.core.iterators.user.RegExFilter;
 import org.apache.accumulo.core.iterators.user.WholeRowIterator;
-import org.apache.accumulo.core.util.Base64;
 import org.apache.hadoop.mapred.JobConf;
 import org.junit.Before;
 import org.junit.Rule;
@@ -56,7 +56,7 @@
     ByteArrayOutputStream baos = new ByteArrayOutputStream();
     is.write(new DataOutputStream(baos));
     String iterators = job.get("AccumuloInputFormat.ScanOpts.Iterators");
-    assertEquals(Base64.encodeBase64String(baos.toByteArray()), iterators);
+    assertEquals(Base64.getEncoder().encodeToString(baos.toByteArray()), iterators);
   }
 
   @Test
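Several test hunks in this patch replace Accumulo's commons-codec wrapper (`org.apache.accumulo.core.util.Base64.encodeBase64String`) with the `java.util.Base64` API introduced in Java 8. A minimal sketch of the equivalent encode/decode round trip:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class Base64Demo {
  // java.util.Base64.getEncoder().encodeToString replaces the old
  // commons-codec-backed encodeBase64String helper.
  static String encode(byte[] data) {
    return Base64.getEncoder().encodeToString(data);
  }

  public static void main(String[] args) {
    byte[] payload = "hi".getBytes(StandardCharsets.UTF_8);
    String encoded = encode(payload);
    System.out.println(encoded); // aGk=

    byte[] roundTrip = Base64.getDecoder().decode(encoded);
    System.out.println(new String(roundTrip, StandardCharsets.UTF_8)); // hi
  }
}
```

Using the JDK encoder lets these tests drop the internal utility class without changing the encoded form, so the configuration keys they assert on stay identical.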
diff --git a/core/src/test/java/org/apache/accumulo/core/client/mapred/RangeInputSplitTest.java b/core/src/test/java/org/apache/accumulo/core/client/mapred/RangeInputSplitTest.java
index c399fb0..47a0e53 100644
--- a/core/src/test/java/org/apache/accumulo/core/client/mapred/RangeInputSplitTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/client/mapred/RangeInputSplitTest.java
@@ -33,7 +33,6 @@
 import org.apache.accumulo.core.iterators.user.SummingCombiner;
 import org.apache.accumulo.core.iterators.user.WholeRowIterator;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.core.util.DeprecationUtil;
 import org.apache.accumulo.core.util.Pair;
 import org.apache.hadoop.io.Text;
 import org.apache.log4j.Level;
@@ -87,7 +86,6 @@
     split.setToken(new PasswordToken("password"));
     split.setPrincipal("root");
     split.setInstanceName("instance");
-    DeprecationUtil.setMockInstance(split, true);
     split.setZooKeepers("localhost");
     split.setIterators(iterators);
     split.setLogLevel(Level.WARN);
@@ -113,7 +111,6 @@
     Assert.assertEquals(split.getToken(), newSplit.getToken());
     Assert.assertEquals(split.getPrincipal(), newSplit.getPrincipal());
     Assert.assertEquals(split.getInstanceName(), newSplit.getInstanceName());
-    Assert.assertEquals(DeprecationUtil.isMockInstanceSet(split), DeprecationUtil.isMockInstanceSet(newSplit));
     Assert.assertEquals(split.getZooKeepers(), newSplit.getZooKeepers());
     Assert.assertEquals(split.getIterators(), newSplit.getIterators());
     Assert.assertEquals(split.getLogLevel(), newSplit.getLogLevel());
diff --git a/core/src/test/java/org/apache/accumulo/core/client/mapreduce/AccumuloInputFormatTest.java b/core/src/test/java/org/apache/accumulo/core/client/mapreduce/AccumuloInputFormatTest.java
index 3eef024..f2d2049 100644
--- a/core/src/test/java/org/apache/accumulo/core/client/mapreduce/AccumuloInputFormatTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/client/mapreduce/AccumuloInputFormatTest.java
@@ -22,6 +22,7 @@
 import java.io.ByteArrayOutputStream;
 import java.io.DataOutputStream;
 import java.io.IOException;
+import java.util.Base64;
 import java.util.HashSet;
 import java.util.List;
 import java.util.Set;
@@ -29,7 +30,6 @@
 import org.apache.accumulo.core.client.IteratorSetting;
 import org.apache.accumulo.core.iterators.user.RegExFilter;
 import org.apache.accumulo.core.iterators.user.WholeRowIterator;
-import org.apache.accumulo.core.util.Base64;
 import org.apache.accumulo.core.util.Pair;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.io.Text;
@@ -51,7 +51,7 @@
     ByteArrayOutputStream baos = new ByteArrayOutputStream();
     is.write(new DataOutputStream(baos));
     String iterators = conf.get("AccumuloInputFormat.ScanOpts.Iterators");
-    assertEquals(Base64.encodeBase64String(baos.toByteArray()), iterators);
+    assertEquals(Base64.getEncoder().encodeToString(baos.toByteArray()), iterators);
   }
 
   @Test
diff --git a/core/src/test/java/org/apache/accumulo/core/client/mapreduce/RangeInputSplitTest.java b/core/src/test/java/org/apache/accumulo/core/client/mapreduce/RangeInputSplitTest.java
index 0eb8010..b1b378e 100644
--- a/core/src/test/java/org/apache/accumulo/core/client/mapreduce/RangeInputSplitTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/client/mapreduce/RangeInputSplitTest.java
@@ -33,7 +33,6 @@
 import org.apache.accumulo.core.iterators.user.SummingCombiner;
 import org.apache.accumulo.core.iterators.user.WholeRowIterator;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.core.util.DeprecationUtil;
 import org.apache.accumulo.core.util.Pair;
 import org.apache.hadoop.io.Text;
 import org.apache.log4j.Level;
@@ -90,7 +89,6 @@
     split.setToken(new PasswordToken("password"));
     split.setPrincipal("root");
     split.setInstanceName("instance");
-    DeprecationUtil.setMockInstance(split, true);
     split.setZooKeepers("localhost");
     split.setIterators(iterators);
     split.setLogLevel(Level.WARN);
@@ -117,7 +115,6 @@
     Assert.assertEquals(split.getToken(), newSplit.getToken());
     Assert.assertEquals(split.getPrincipal(), newSplit.getPrincipal());
     Assert.assertEquals(split.getInstanceName(), newSplit.getInstanceName());
-    Assert.assertEquals(DeprecationUtil.isMockInstanceSet(split), DeprecationUtil.isMockInstanceSet(newSplit));
     Assert.assertEquals(split.getZooKeepers(), newSplit.getZooKeepers());
     Assert.assertEquals(split.getIterators(), newSplit.getIterators());
     Assert.assertEquals(split.getLogLevel(), newSplit.getLogLevel());
diff --git a/core/src/test/java/org/apache/accumulo/core/client/mapreduce/impl/BatchInputSplitTest.java b/core/src/test/java/org/apache/accumulo/core/client/mapreduce/impl/BatchInputSplitTest.java
index 17c781d..50b149a 100644
--- a/core/src/test/java/org/apache/accumulo/core/client/mapreduce/impl/BatchInputSplitTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/client/mapreduce/impl/BatchInputSplitTest.java
@@ -33,7 +33,6 @@
 import org.apache.accumulo.core.iterators.user.SummingCombiner;
 import org.apache.accumulo.core.iterators.user.WholeRowIterator;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.core.util.DeprecationUtil;
 import org.apache.accumulo.core.util.Pair;
 import org.apache.hadoop.io.Text;
 import org.apache.log4j.Level;
@@ -88,7 +87,6 @@
     split.setFetchedColumns(fetchedColumns);
     split.setToken(new PasswordToken("password"));
     split.setPrincipal("root");
-    DeprecationUtil.setMockInstance(split, true);
     split.setInstanceName("instance");
     split.setZooKeepers("localhost");
     split.setIterators(iterators);
@@ -113,7 +111,6 @@
     Assert.assertEquals(split.getToken(), newSplit.getToken());
     Assert.assertEquals(split.getPrincipal(), newSplit.getPrincipal());
     Assert.assertEquals(split.getInstanceName(), newSplit.getInstanceName());
-    Assert.assertEquals(DeprecationUtil.isMockInstanceSet(split), DeprecationUtil.isMockInstanceSet(newSplit));
     Assert.assertEquals(split.getZooKeepers(), newSplit.getZooKeepers());
     Assert.assertEquals(split.getIterators(), newSplit.getIterators());
     Assert.assertEquals(split.getLogLevel(), newSplit.getLogLevel());
diff --git a/core/src/test/java/org/apache/accumulo/core/client/mapreduce/lib/impl/ConfiguratorBaseTest.java b/core/src/test/java/org/apache/accumulo/core/client/mapreduce/lib/impl/ConfiguratorBaseTest.java
index a7e5e0a..1eb29e3 100644
--- a/core/src/test/java/org/apache/accumulo/core/client/mapreduce/lib/impl/ConfiguratorBaseTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/client/mapreduce/lib/impl/ConfiguratorBaseTest.java
@@ -20,16 +20,16 @@
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
 
+import java.util.Base64;
+
 import org.apache.accumulo.core.Constants;
 import org.apache.accumulo.core.client.AccumuloSecurityException;
 import org.apache.accumulo.core.client.ClientConfiguration;
 import org.apache.accumulo.core.client.ClientConfiguration.ClientProperty;
-import org.apache.accumulo.core.client.Instance;
 import org.apache.accumulo.core.client.ZooKeeperInstance;
 import org.apache.accumulo.core.client.security.tokens.AuthenticationToken;
 import org.apache.accumulo.core.client.security.tokens.AuthenticationToken.AuthenticationTokenSerializer;
 import org.apache.accumulo.core.client.security.tokens.PasswordToken;
-import org.apache.accumulo.core.util.Base64;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.log4j.Level;
 import org.apache.log4j.Logger;
@@ -63,7 +63,8 @@
     assertEquals(PasswordToken.class, token.getClass());
     assertEquals(new PasswordToken("testPassword"), token);
     assertEquals(
-        "inline:" + PasswordToken.class.getName() + ":" + Base64.encodeBase64String(AuthenticationTokenSerializer.serialize(new PasswordToken("testPassword"))),
+        "inline:" + PasswordToken.class.getName() + ":"
+            + Base64.getEncoder().encodeToString(AuthenticationTokenSerializer.serialize(new PasswordToken("testPassword"))),
         conf.get(ConfiguratorBase.enumToConfKey(this.getClass(), ConfiguratorBase.ConnectorInfo.TOKEN)));
   }
 
@@ -99,19 +100,6 @@
     // assertEquals(1234000, ((ZooKeeperInstance) instance).getZooKeepersSessionTimeOut());
   }
 
-  @SuppressWarnings("deprecation")
-  @Test
-  public void testSetMockInstance() {
-    Class<?> mockClass = org.apache.accumulo.core.client.mock.MockInstance.class;
-    Configuration conf = new Configuration();
-    ConfiguratorBase.setMockInstance(this.getClass(), conf, "testInstanceName");
-    assertEquals("testInstanceName", conf.get(ConfiguratorBase.enumToConfKey(this.getClass(), ConfiguratorBase.InstanceOpts.NAME)));
-    assertEquals(null, conf.get(ConfiguratorBase.enumToConfKey(this.getClass(), ConfiguratorBase.InstanceOpts.ZOO_KEEPERS)));
-    assertEquals(mockClass.getSimpleName(), conf.get(ConfiguratorBase.enumToConfKey(this.getClass(), ConfiguratorBase.InstanceOpts.TYPE)));
-    Instance instance = ConfiguratorBase.getInstance(this.getClass(), conf);
-    assertEquals(mockClass.getName(), instance.getClass().getName());
-  }
-
   @Test
   public void testSetLogLevel() {
     Configuration conf = new Configuration();
diff --git a/core/src/test/java/org/apache/accumulo/core/client/mock/MockConnectorTest.java b/core/src/test/java/org/apache/accumulo/core/client/mock/MockConnectorTest.java
deleted file mode 100644
index b70cb00..0000000
--- a/core/src/test/java/org/apache/accumulo/core/client/mock/MockConnectorTest.java
+++ /dev/null
@@ -1,375 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.core.client.mock;
-
-import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertFalse;
-import static org.junit.Assert.assertTrue;
-
-import java.util.ArrayList;
-import java.util.Collections;
-import java.util.Iterator;
-import java.util.List;
-import java.util.Map.Entry;
-import java.util.Random;
-import java.util.concurrent.TimeUnit;
-
-import org.apache.accumulo.core.client.AccumuloException;
-import org.apache.accumulo.core.client.AccumuloSecurityException;
-import org.apache.accumulo.core.client.BatchDeleter;
-import org.apache.accumulo.core.client.BatchScanner;
-import org.apache.accumulo.core.client.BatchWriter;
-import org.apache.accumulo.core.client.BatchWriterConfig;
-import org.apache.accumulo.core.client.Connector;
-import org.apache.accumulo.core.client.Instance;
-import org.apache.accumulo.core.client.IteratorSetting;
-import org.apache.accumulo.core.client.MultiTableBatchWriter;
-import org.apache.accumulo.core.client.Scanner;
-import org.apache.accumulo.core.client.security.tokens.PasswordToken;
-import org.apache.accumulo.core.data.Key;
-import org.apache.accumulo.core.data.Mutation;
-import org.apache.accumulo.core.data.Range;
-import org.apache.accumulo.core.data.Value;
-import org.apache.accumulo.core.iterators.Combiner;
-import org.apache.accumulo.core.iterators.user.SummingCombiner;
-import org.apache.accumulo.core.security.Authorizations;
-import org.apache.hadoop.io.Text;
-import org.junit.Assert;
-import org.junit.Test;
-
-import com.google.common.collect.Iterators;
-
-@Deprecated
-public class MockConnectorTest {
-  Random random = new Random();
-
-  static Text asText(int i) {
-    return new Text(Integer.toHexString(i));
-  }
-
-  @Test
-  public void testSunnyDay() throws Exception {
-    Connector c = new MockConnector("root", new MockInstance());
-    c.tableOperations().create("test");
-    BatchWriter bw = c.createBatchWriter("test", new BatchWriterConfig());
-    for (int i = 0; i < 100; i++) {
-      int r = random.nextInt();
-      Mutation m = new Mutation(asText(r));
-      m.put(asText(random.nextInt()), asText(random.nextInt()), new Value(Integer.toHexString(r).getBytes()));
-      bw.addMutation(m);
-    }
-    bw.close();
-    BatchScanner s = c.createBatchScanner("test", Authorizations.EMPTY, 2);
-    s.setRanges(Collections.singletonList(new Range()));
-    Key key = null;
-    int count = 0;
-    for (Entry<Key,Value> entry : s) {
-      if (key != null)
-        assertTrue(key.compareTo(entry.getKey()) < 0);
-      assertEquals(entry.getKey().getRow(), new Text(entry.getValue().get()));
-      key = entry.getKey();
-      count++;
-    }
-    assertEquals(100, count);
-  }
-
-  @Test
-  public void testChangeAuths() throws Exception {
-    Connector c = new MockConnector("root", new MockInstance());
-    c.securityOperations().createLocalUser("greg", new PasswordToken(new byte[0]));
-    assertTrue(c.securityOperations().getUserAuthorizations("greg").isEmpty());
-    c.securityOperations().changeUserAuthorizations("greg", new Authorizations("A".getBytes()));
-    assertTrue(c.securityOperations().getUserAuthorizations("greg").contains("A".getBytes()));
-    c.securityOperations().changeUserAuthorizations("greg", new Authorizations("X", "Y", "Z"));
-    assertTrue(c.securityOperations().getUserAuthorizations("greg").contains("X".getBytes()));
-    assertFalse(c.securityOperations().getUserAuthorizations("greg").contains("A".getBytes()));
-  }
-
-  @Test
-  public void testBadMutations() throws Exception {
-    Connector c = new MockConnector("root", new MockInstance());
-    c.tableOperations().create("test");
-    BatchWriter bw = c
-        .createBatchWriter("test", new BatchWriterConfig().setMaxMemory(10000L).setMaxLatency(1000L, TimeUnit.MILLISECONDS).setMaxWriteThreads(4));
-
-    try {
-      bw.addMutation(null);
-      Assert.fail("addMutation should throw IAE for null mutation");
-    } catch (IllegalArgumentException iae) {}
-    try {
-      bw.addMutations(null);
-      Assert.fail("addMutations should throw IAE for null iterable");
-    } catch (IllegalArgumentException iae) {}
-
-    bw.addMutations(Collections.<Mutation> emptyList());
-
-    Mutation bad = new Mutation("bad");
-    try {
-      bw.addMutation(bad);
-      Assert.fail("addMutation should throw IAE for empty mutation");
-    } catch (IllegalArgumentException iae) {}
-
-    Mutation good = new Mutation("good");
-    good.put(asText(random.nextInt()), asText(random.nextInt()), new Value("good".getBytes()));
-    List<Mutation> mutations = new ArrayList<>();
-    mutations.add(good);
-    mutations.add(bad);
-    try {
-      bw.addMutations(mutations);
-      Assert.fail("addMutations should throw IAE if it contains empty mutation");
-    } catch (IllegalArgumentException iae) {}
-
-    bw.close();
-  }
-
-  @Test
-  public void testAggregation() throws Exception {
-    MockInstance mockInstance = new MockInstance();
-    Connector c = mockInstance.getConnector("root", new PasswordToken(""));
-    String table = "perDayCounts";
-    c.tableOperations().create(table);
-    IteratorSetting is = new IteratorSetting(10, "String Summation", SummingCombiner.class);
-    Combiner.setColumns(is, Collections.singletonList(new IteratorSetting.Column("day")));
-    SummingCombiner.setEncodingType(is, SummingCombiner.Type.STRING);
-    c.tableOperations().attachIterator(table, is);
-    String keys[][] = { {"foo", "day", "20080101"}, {"foo", "day", "20080101"}, {"foo", "day", "20080103"}, {"bar", "day", "20080101"},
-        {"bar", "day", "20080101"},};
-    BatchWriter bw = c.createBatchWriter("perDayCounts", new BatchWriterConfig());
-    for (String elt[] : keys) {
-      Mutation m = new Mutation(new Text(elt[0]));
-      m.put(new Text(elt[1]), new Text(elt[2]), new Value("1".getBytes()));
-      bw.addMutation(m);
-    }
-    bw.close();
-
-    Scanner s = c.createScanner("perDayCounts", Authorizations.EMPTY);
-    Iterator<Entry<Key,Value>> iterator = s.iterator();
-    assertTrue(iterator.hasNext());
-    checkEntry(iterator.next(), "bar", "day", "20080101", "2");
-    assertTrue(iterator.hasNext());
-    checkEntry(iterator.next(), "foo", "day", "20080101", "2");
-    assertTrue(iterator.hasNext());
-    checkEntry(iterator.next(), "foo", "day", "20080103", "1");
-    assertFalse(iterator.hasNext());
-  }
-
-  @Test
-  public void testDelete() throws Exception {
-    Connector c = new MockConnector("root", new MockInstance());
-    c.tableOperations().create("test");
-    BatchWriter bw = c.createBatchWriter("test", new BatchWriterConfig());
-
-    Mutation m1 = new Mutation("r1");
-
-    m1.put("cf1", "cq1", 1, "v1");
-
-    bw.addMutation(m1);
-    bw.flush();
-
-    Mutation m2 = new Mutation("r1");
-
-    m2.putDelete("cf1", "cq1", 2);
-
-    bw.addMutation(m2);
-    bw.flush();
-
-    Scanner scanner = c.createScanner("test", Authorizations.EMPTY);
-
-    int count = Iterators.size(scanner.iterator());
-
-    assertEquals(0, count);
-
-    try {
-      c.tableOperations().create("test_this_$tableName");
-      assertTrue(false);
-
-    } catch (IllegalArgumentException iae) {
-
-    }
-  }
-
-  @Test
-  public void testDeletewithBatchDeleter() throws Exception {
-    Connector c = new MockConnector("root", new MockInstance());
-
-    // make sure we are using a clean table
-    if (c.tableOperations().exists("test"))
-      c.tableOperations().delete("test");
-    c.tableOperations().create("test");
-
-    BatchDeleter deleter = c.createBatchDeleter("test", Authorizations.EMPTY, 2, new BatchWriterConfig());
-    // first make sure it deletes fine when its empty
-    deleter.setRanges(Collections.singletonList(new Range(("r1"))));
-    deleter.delete();
-    this.checkRemaining(c, "test", 0);
-
-    // test deleting just one row
-    BatchWriter writer = c.createBatchWriter("test", new BatchWriterConfig());
-    Mutation m = new Mutation("r1");
-    m.put("fam", "qual", "value");
-    writer.addMutation(m);
-
-    // make sure the write goes through
-    writer.flush();
-    writer.close();
-
-    deleter.setRanges(Collections.singletonList(new Range(("r1"))));
-    deleter.delete();
-    this.checkRemaining(c, "test", 0);
-
-    // test multi row deletes
-    writer = c.createBatchWriter("test", new BatchWriterConfig());
-    m = new Mutation("r1");
-    m.put("fam", "qual", "value");
-    writer.addMutation(m);
-    Mutation m2 = new Mutation("r2");
-    m2.put("fam", "qual", "value");
-    writer.addMutation(m2);
-
-    // make sure the write goes through
-    writer.flush();
-    writer.close();
-
-    deleter.setRanges(Collections.singletonList(new Range(("r1"))));
-    deleter.delete();
-    checkRemaining(c, "test", 1);
-  }
-
-  /**
-   * Test to make sure that a certain number of rows remain
-   *
-   * @param c
-   *          connector to the {@link MockInstance}
-   * @param tableName
-   *          Table to check
-   * @param count
-   *          number of entries to expect in the table
-   */
-  private void checkRemaining(Connector c, String tableName, int count) throws Exception {
-    Scanner scanner = c.createScanner(tableName, Authorizations.EMPTY);
-
-    int total = Iterators.size(scanner.iterator());
-    assertEquals(count, total);
-  }
-
-  @Test
-  public void testCMod() throws Exception {
-    // test writing to a table that the is being scanned
-    Connector c = new MockConnector("root", new MockInstance());
-    c.tableOperations().create("test");
-    BatchWriter bw = c.createBatchWriter("test", new BatchWriterConfig());
-
-    for (int i = 0; i < 10; i++) {
-      Mutation m1 = new Mutation("r" + i);
-      m1.put("cf1", "cq1", 1, "v" + i);
-      bw.addMutation(m1);
-    }
-
-    bw.flush();
-
-    int count = 10;
-
-    Scanner scanner = c.createScanner("test", Authorizations.EMPTY);
-    for (Entry<Key,Value> entry : scanner) {
-      Key key = entry.getKey();
-      Mutation m = new Mutation(key.getRow());
-      m.put(key.getColumnFamily().toString(), key.getColumnQualifier().toString(), key.getTimestamp() + 1, "v" + (count));
-      count++;
-      bw.addMutation(m);
-    }
-
-    bw.flush();
-
-    count = 10;
-
-    for (Entry<Key,Value> entry : scanner) {
-      assertEquals(entry.getValue().toString(), "v" + (count++));
-    }
-
-    assertEquals(count, 20);
-
-    try {
-      c.tableOperations().create("test_this_$tableName");
-      assertTrue(false);
-    } catch (IllegalArgumentException iae) {
-      // expected: invalid table name
-    }
-  }
-
-  private void checkEntry(Entry<Key,Value> next, String row, String cf, String cq, String value) {
-    assertEquals(row, next.getKey().getRow().toString());
-    assertEquals(cf, next.getKey().getColumnFamily().toString());
-    assertEquals(cq, next.getKey().getColumnQualifier().toString());
-    assertEquals(value, next.getValue().toString());
-  }
-
-  @Test
-  public void testMockMultiTableBatchWriter() throws Exception {
-    Connector c = new MockConnector("root", new MockInstance());
-    c.tableOperations().create("a");
-    c.tableOperations().create("b");
-    MultiTableBatchWriter bw = c.createMultiTableBatchWriter(new BatchWriterConfig());
-    Mutation m1 = new Mutation("r1");
-    m1.put("cf1", "cq1", 1, "v1");
-    BatchWriter b = bw.getBatchWriter("a");
-    b.addMutation(m1);
-    b.flush();
-    b = bw.getBatchWriter("b");
-    b.addMutation(m1);
-    b.flush();
-
-    Scanner scanner = c.createScanner("a", Authorizations.EMPTY);
-    int count = Iterators.size(scanner.iterator());
-    assertEquals(1, count);
-    scanner = c.createScanner("b", Authorizations.EMPTY);
-    count = Iterators.size(scanner.iterator());
-    assertEquals(1, count);
-
-  }
-
-  @Test
-  public void testUpdate() throws Exception {
-    Connector c = new MockConnector("root", new MockInstance());
-    c.tableOperations().create("test");
-    BatchWriter bw = c.createBatchWriter("test", new BatchWriterConfig());
-
-    for (int i = 0; i < 10; i++) {
-      Mutation m = new Mutation("r1");
-      m.put("cf1", "cq1", "" + i);
-      bw.addMutation(m);
-    }
-
-    bw.close();
-
-    Scanner scanner = c.createScanner("test", Authorizations.EMPTY);
-
-    Entry<Key,Value> entry = scanner.iterator().next();
-
-    assertEquals("9", entry.getValue().toString());
-
-  }
-
-  @Test
-  public void testMockConnectorReturnsCorrectInstance() throws AccumuloException, AccumuloSecurityException {
-    String name = "an-interesting-instance-name";
-    Instance mockInstance = new MockInstance(name);
-    assertEquals(mockInstance, mockInstance.getConnector("foo", new PasswordToken("bar")).getInstance());
-    assertEquals(name, mockInstance.getConnector("foo", new PasswordToken("bar")).getInstance().getInstanceName());
-  }
-
-}
diff --git a/core/src/test/java/org/apache/accumulo/core/client/mock/MockNamespacesTest.java b/core/src/test/java/org/apache/accumulo/core/client/mock/MockNamespacesTest.java
deleted file mode 100644
index ca12838..0000000
--- a/core/src/test/java/org/apache/accumulo/core/client/mock/MockNamespacesTest.java
+++ /dev/null
@@ -1,297 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.accumulo.core.client.mock;
-
-import static java.nio.charset.StandardCharsets.UTF_8;
-import static org.junit.Assert.assertTrue;
-import static org.junit.Assert.fail;
-
-import java.util.EnumSet;
-import java.util.HashSet;
-import java.util.Map.Entry;
-
-import org.apache.accumulo.core.client.AccumuloException;
-import org.apache.accumulo.core.client.AccumuloSecurityException;
-import org.apache.accumulo.core.client.BatchWriter;
-import org.apache.accumulo.core.client.BatchWriterConfig;
-import org.apache.accumulo.core.client.Connector;
-import org.apache.accumulo.core.client.Instance;
-import org.apache.accumulo.core.client.IteratorSetting;
-import org.apache.accumulo.core.client.NamespaceNotEmptyException;
-import org.apache.accumulo.core.client.NamespaceNotFoundException;
-import org.apache.accumulo.core.client.Scanner;
-import org.apache.accumulo.core.client.TableNotFoundException;
-import org.apache.accumulo.core.client.admin.NamespaceOperations;
-import org.apache.accumulo.core.client.impl.Namespaces;
-import org.apache.accumulo.core.client.security.tokens.PasswordToken;
-import org.apache.accumulo.core.conf.Property;
-import org.apache.accumulo.core.data.Key;
-import org.apache.accumulo.core.data.Mutation;
-import org.apache.accumulo.core.data.Value;
-import org.apache.accumulo.core.iterators.Filter;
-import org.apache.accumulo.core.iterators.IteratorUtil.IteratorScope;
-import org.apache.accumulo.core.security.Authorizations;
-import org.junit.Before;
-import org.junit.Rule;
-import org.junit.Test;
-import org.junit.rules.TestName;
-
-@Deprecated
-public class MockNamespacesTest {
-
-  @Rule
-  public TestName test = new TestName();
-
-  private Connector conn;
-
-  @Before
-  public void setupInstance() throws Exception {
-    Instance inst = new MockInstance(test.getMethodName());
-    conn = inst.getConnector("user", new PasswordToken("pass"));
-  }
-
-  /**
-   * This test creates a table without specifying a namespace. In this case, it puts the table into the default namespace.
-   */
-  @Test
-  public void testDefaultNamespace() throws Exception {
-    String tableName = "test";
-
-    assertTrue(conn.namespaceOperations().exists(Namespaces.DEFAULT_NAMESPACE));
-    conn.tableOperations().create(tableName);
-    assertTrue(conn.tableOperations().exists(tableName));
-  }
-
-  /**
-   * This test creates a new namespace "testing" and a table "testing.table1", which puts "table1" into the "testing" namespace. It then creates
-   * "testing.table2", placing "table2" into "testing" as well. It verifies that a namespace containing tables cannot be deleted, and then deletes the
-   * tables followed by the namespace.
-   */
-  @Test
-  public void testCreateAndDeleteNamespace() throws Exception {
-    String namespace = "testing";
-    String tableName1 = namespace + ".table1";
-    String tableName2 = namespace + ".table2";
-
-    conn.namespaceOperations().create(namespace);
-    assertTrue(conn.namespaceOperations().exists(namespace));
-
-    conn.tableOperations().create(tableName1);
-    assertTrue(conn.tableOperations().exists(tableName1));
-
-    conn.tableOperations().create(tableName2);
-    assertTrue(conn.tableOperations().exists(tableName2));
-
-    // deleting
-    try {
-      // can't delete a namespace with tables in it
-      conn.namespaceOperations().delete(namespace);
-      fail();
-    } catch (NamespaceNotEmptyException e) {
-      // ignore, supposed to happen
-    }
-    assertTrue(conn.namespaceOperations().exists(namespace));
-    assertTrue(conn.tableOperations().exists(tableName1));
-    assertTrue(conn.tableOperations().exists(tableName2));
-
-    conn.tableOperations().delete(tableName2);
-    assertTrue(!conn.tableOperations().exists(tableName2));
-    assertTrue(conn.namespaceOperations().exists(namespace));
-
-    conn.tableOperations().delete(tableName1);
-    assertTrue(!conn.tableOperations().exists(tableName1));
-    conn.namespaceOperations().delete(namespace);
-    assertTrue(!conn.namespaceOperations().exists(namespace));
-  }
-
-  /**
-   * This test creates a namespace, modifies its properties, and checks to make sure that those properties are applied to its tables. To do something on a
-   * namespace-wide level, use {@link NamespaceOperations}.
-   *
-   * Checks to make sure namespace-level properties are overridden by table-level properties.
-   *
-   * Checks to see if the default namespace's properties work as well.
-   */
-
-  @Test
-  public void testNamespaceProperties() throws Exception {
-    String namespace = "propchange";
-    String tableName1 = namespace + ".table1";
-    String tableName2 = namespace + ".table2";
-
-    String propKey = Property.TABLE_SCAN_MAXMEM.getKey();
-    String propVal = "42K";
-
-    conn.namespaceOperations().create(namespace);
-    conn.tableOperations().create(tableName1);
-    conn.namespaceOperations().setProperty(namespace, propKey, propVal);
-
-    // check the namespace has the property
-    assertTrue(checkNamespaceHasProp(conn, namespace, propKey, propVal));
-
-    // check that the table gets it from the namespace
-    assertTrue(checkTableHasProp(conn, tableName1, propKey, propVal));
-
-    // test a second table to be sure the first wasn't magical
-    // (also, changed the order, the namespace has the property already)
-    conn.tableOperations().create(tableName2);
-    assertTrue(checkTableHasProp(conn, tableName2, propKey, propVal));
-
-    // test that table properties override namespace properties
-    String propKey2 = Property.TABLE_FILE_MAX.getKey();
-    String propVal2 = "42";
-    String tablePropVal = "13";
-
-    conn.tableOperations().setProperty(tableName2, propKey2, tablePropVal);
-    conn.namespaceOperations().setProperty("propchange", propKey2, propVal2);
-
-    assertTrue(checkTableHasProp(conn, tableName2, propKey2, tablePropVal));
-
-    // now check that you can change the default namespace's properties
-    propVal = "13K";
-    String tableName = "some_table";
-    conn.tableOperations().create(tableName);
-    conn.namespaceOperations().setProperty(Namespaces.DEFAULT_NAMESPACE, propKey, propVal);
-
-    assertTrue(checkTableHasProp(conn, tableName, propKey, propVal));
-
-    // test the properties server-side by configuring an iterator.
-    // should not show anything with column-family = 'a'
-    String tableName3 = namespace + ".table3";
-    conn.tableOperations().create(tableName3);
-
-    IteratorSetting setting = new IteratorSetting(250, "thing", SimpleFilter.class.getName());
-    conn.namespaceOperations().attachIterator(namespace, setting);
-
-    BatchWriter bw = conn.createBatchWriter(tableName3, new BatchWriterConfig());
-    Mutation m = new Mutation("r");
-    m.put("a", "b", new Value("abcde".getBytes()));
-    bw.addMutation(m);
-    bw.flush();
-    bw.close();
-
-    // Scanner s = c.createScanner(tableName3, Authorizations.EMPTY);
-    // do scanners work correctly in mock?
-    // assertTrue(!s.iterator().hasNext());
-  }
-
-  /**
-   * This test renames and clones two separate tables into different namespaces.
-   */
-  @Test
-  public void testRenameAndCloneTableToNewNamespace() throws Exception {
-    String namespace1 = "renamed";
-    String namespace2 = "cloned";
-    String tableName = "table";
-    String tableName1 = "renamed.table1";
-    // String tableName2 = "cloned.table2";
-
-    conn.tableOperations().create(tableName);
-    conn.namespaceOperations().create(namespace1);
-    conn.namespaceOperations().create(namespace2);
-
-    conn.tableOperations().rename(tableName, tableName1);
-
-    assertTrue(conn.tableOperations().exists(tableName1));
-    assertTrue(!conn.tableOperations().exists(tableName));
-
-    // TODO implement clone in mock
-    // c.tableOperations().clone(tableName1, tableName2, false, null, null);
-    // assertTrue(c.tableOperations().exists(tableName1)); assertTrue(c.tableOperations().exists(tableName2));
-  }
-
-  /**
-   * This test renames a namespace and ensures that its tables are still correct
-   */
-  @Test
-  public void testNamespaceRename() throws Exception {
-    String namespace1 = "n1";
-    String namespace2 = "n2";
-    String table = "t";
-
-    conn.namespaceOperations().create(namespace1);
-    conn.tableOperations().create(namespace1 + "." + table);
-
-    conn.namespaceOperations().rename(namespace1, namespace2);
-
-    assertTrue(!conn.namespaceOperations().exists(namespace1));
-    assertTrue(conn.namespaceOperations().exists(namespace2));
-    assertTrue(!conn.tableOperations().exists(namespace1 + "." + table));
-    assertTrue(conn.tableOperations().exists(namespace2 + "." + table));
-  }
-
-  /**
-   * This tests adding iterators to a namespace, listing them, and removing them
-   */
-  @Test
-  public void testNamespaceIterators() throws Exception {
-    String namespace = "iterator";
-    String tableName = namespace + ".table";
-    String iter = "thing";
-
-    conn.namespaceOperations().create(namespace);
-    conn.tableOperations().create(tableName);
-
-    IteratorSetting setting = new IteratorSetting(250, iter, SimpleFilter.class.getName());
-    HashSet<IteratorScope> scope = new HashSet<>();
-    scope.add(IteratorScope.scan);
-    conn.namespaceOperations().attachIterator(namespace, setting, EnumSet.copyOf(scope));
-
-    BatchWriter bw = conn.createBatchWriter(tableName, new BatchWriterConfig());
-    Mutation m = new Mutation("r");
-    m.put("a", "b", new Value("abcde".getBytes(UTF_8)));
-    bw.addMutation(m);
-    bw.flush();
-
-    Scanner s = conn.createScanner(tableName, Authorizations.EMPTY);
-    System.out.println(s.iterator().next());
-    // do scanners work correctly in mock?
-    // assertTrue(!s.iterator().hasNext());
-
-    assertTrue(conn.namespaceOperations().listIterators(namespace).containsKey(iter));
-    conn.namespaceOperations().removeIterator(namespace, iter, EnumSet.copyOf(scope));
-  }
-
-  private boolean checkTableHasProp(Connector c, String t, String propKey, String propVal) throws AccumuloException, TableNotFoundException {
-    for (Entry<String,String> e : c.tableOperations().getProperties(t)) {
-      if (e.getKey().equals(propKey) && e.getValue().equals(propVal)) {
-        return true;
-      }
-    }
-    return false;
-  }
-
-  private boolean checkNamespaceHasProp(Connector c, String n, String propKey, String propVal) throws AccumuloException, NamespaceNotFoundException,
-      AccumuloSecurityException {
-    for (Entry<String,String> e : c.namespaceOperations().getProperties(n)) {
-      if (e.getKey().equals(propKey) && e.getValue().equals(propVal)) {
-        return true;
-      }
-    }
-    return false;
-  }
-
-  public static class SimpleFilter extends Filter {
-    @Override
-    public boolean accept(Key k, Value v) {
-      if (k.getColumnFamily().toString().equals("a"))
-        return false;
-      return true;
-    }
-  }
-}
diff --git a/core/src/test/java/org/apache/accumulo/core/client/mock/MockTableOperationsTest.java b/core/src/test/java/org/apache/accumulo/core/client/mock/MockTableOperationsTest.java
deleted file mode 100644
index 58f3777..0000000
--- a/core/src/test/java/org/apache/accumulo/core/client/mock/MockTableOperationsTest.java
+++ /dev/null
@@ -1,345 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.core.client.mock;
-
-import java.io.IOException;
-import java.net.URI;
-import java.util.ArrayList;
-import java.util.Collections;
-import java.util.EnumSet;
-import java.util.Iterator;
-import java.util.List;
-import java.util.Map;
-import java.util.Map.Entry;
-
-import org.apache.accumulo.core.client.AccumuloException;
-import org.apache.accumulo.core.client.AccumuloSecurityException;
-import org.apache.accumulo.core.client.BatchScanner;
-import org.apache.accumulo.core.client.BatchWriter;
-import org.apache.accumulo.core.client.BatchWriterConfig;
-import org.apache.accumulo.core.client.Connector;
-import org.apache.accumulo.core.client.Instance;
-import org.apache.accumulo.core.client.IteratorSetting;
-import org.apache.accumulo.core.client.MutationsRejectedException;
-import org.apache.accumulo.core.client.Scanner;
-import org.apache.accumulo.core.client.TableExistsException;
-import org.apache.accumulo.core.client.TableNotFoundException;
-import org.apache.accumulo.core.client.admin.NewTableConfiguration;
-import org.apache.accumulo.core.client.admin.TableOperations;
-import org.apache.accumulo.core.client.admin.TimeType;
-import org.apache.accumulo.core.client.security.tokens.PasswordToken;
-import org.apache.accumulo.core.conf.AccumuloConfiguration;
-import org.apache.accumulo.core.data.Key;
-import org.apache.accumulo.core.data.Mutation;
-import org.apache.accumulo.core.data.Range;
-import org.apache.accumulo.core.data.Value;
-import org.apache.accumulo.core.file.FileOperations;
-import org.apache.accumulo.core.file.FileSKVWriter;
-import org.apache.accumulo.core.iterators.IteratorUtil.IteratorScope;
-import org.apache.accumulo.core.iterators.user.VersioningIterator;
-import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.core.security.ColumnVisibility;
-import org.apache.accumulo.core.util.Pair;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.fs.FileSystem;
-import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.io.Text;
-import org.junit.Assert;
-import org.junit.Before;
-import org.junit.Rule;
-import org.junit.Test;
-import org.junit.rules.TestName;
-
-import com.google.common.collect.Iterators;
-
-@Deprecated
-public class MockTableOperationsTest {
-
-  @Rule
-  public TestName test = new TestName();
-
-  private Connector conn;
-
-  @Before
-  public void setupInstance() throws Exception {
-    Instance inst = new MockInstance(test.getMethodName());
-    conn = inst.getConnector("user", new PasswordToken("pass"));
-  }
-
-  @Test
-  public void testCreateUseVersions() throws AccumuloException, AccumuloSecurityException, TableExistsException, TableNotFoundException {
-    String t = "tableName1";
-
-    {
-      conn.tableOperations().create(t, new NewTableConfiguration().withoutDefaultIterators().setTimeType(TimeType.LOGICAL));
-
-      writeVersionable(conn, t, 3);
-      assertVersionable(conn, t, 3);
-
-      IteratorSetting settings = new IteratorSetting(20, VersioningIterator.class);
-      conn.tableOperations().attachIterator(t, settings);
-
-      assertVersionable(conn, t, 1);
-
-      conn.tableOperations().delete(t);
-    }
-
-    {
-      conn.tableOperations().create(t, new NewTableConfiguration().setTimeType(TimeType.MILLIS));
-
-      try {
-        IteratorSetting settings = new IteratorSetting(20, VersioningIterator.class);
-        conn.tableOperations().attachIterator(t, settings);
-        Assert.fail();
-      } catch (AccumuloException ex) {}
-
-      writeVersionable(conn, t, 3);
-      assertVersionable(conn, t, 1);
-
-      conn.tableOperations().delete(t);
-    }
-  }
-
-  protected void writeVersionable(Connector c, String tableName, int size) throws TableNotFoundException, MutationsRejectedException {
-    for (int i = 0; i < size; i++) {
-      BatchWriter w = c.createBatchWriter(tableName, new BatchWriterConfig());
-      Mutation m = new Mutation("row1");
-      m.put("cf", "cq", String.valueOf(i));
-      w.addMutation(m);
-      w.close();
-    }
-  }
-
-  protected void assertVersionable(Connector c, String tableName, int size) throws TableNotFoundException {
-    BatchScanner s = c.createBatchScanner(tableName, Authorizations.EMPTY, 1);
-    s.setRanges(Collections.singleton(Range.exact("row1", "cf", "cq")));
-    int count = 0;
-    for (Map.Entry<Key,Value> e : s) {
-      Assert.assertEquals("row1", e.getKey().getRow().toString());
-      Assert.assertEquals("cf", e.getKey().getColumnFamily().toString());
-      Assert.assertEquals("cq", e.getKey().getColumnQualifier().toString());
-      count++;
-
-    }
-    Assert.assertEquals(size, count);
-    s.close();
-  }
-
-  @Test
-  public void testTableNotFound() throws AccumuloException, AccumuloSecurityException, TableExistsException, TableNotFoundException {
-    IteratorSetting setting = new IteratorSetting(100, "myvers", VersioningIterator.class);
-    String t = "tableName";
-    try {
-      conn.tableOperations().attachIterator(t, setting);
-      Assert.fail();
-    } catch (TableNotFoundException e) {}
-    try {
-      conn.tableOperations().checkIteratorConflicts(t, setting, EnumSet.allOf(IteratorScope.class));
-      Assert.fail();
-    } catch (TableNotFoundException e) {}
-    try {
-      conn.tableOperations().delete(t);
-      Assert.fail();
-    } catch (TableNotFoundException e) {}
-    try {
-      conn.tableOperations().getIteratorSetting(t, "myvers", IteratorScope.scan);
-      Assert.fail();
-    } catch (TableNotFoundException e) {}
-    try {
-      conn.tableOperations().getProperties(t);
-      Assert.fail();
-    } catch (TableNotFoundException e) {}
-    try {
-      conn.tableOperations().listSplits(t);
-      Assert.fail();
-    } catch (TableNotFoundException e) {}
-    try {
-      conn.tableOperations().listIterators(t);
-      Assert.fail();
-    } catch (TableNotFoundException e) {}
-    try {
-      conn.tableOperations().removeIterator(t, null, EnumSet.noneOf(IteratorScope.class));
-      Assert.fail();
-    } catch (TableNotFoundException e) {}
-    try {
-      conn.tableOperations().rename(t, t);
-      Assert.fail();
-    } catch (TableNotFoundException e) {}
-    conn.tableOperations().create(t);
-    try {
-      conn.tableOperations().create(t);
-      Assert.fail();
-    } catch (TableExistsException e) {}
-    try {
-      conn.tableOperations().rename(t, t);
-      Assert.fail();
-    } catch (TableExistsException e) {}
-  }
-
-  private static class ImportTestFilesAndData {
-    Path importPath;
-    Path failurePath;
-    List<Pair<Key,Value>> keyVals;
-  }
-
-  @Test
-  public void testImport() throws Throwable {
-    ImportTestFilesAndData dataAndFiles = prepareTestFiles();
-    TableOperations tableOperations = conn.tableOperations();
-    tableOperations.create("a_table");
-    tableOperations.importDirectory("a_table", dataAndFiles.importPath.toString(), dataAndFiles.failurePath.toString(), false);
-    Scanner scanner = conn.createScanner("a_table", new Authorizations());
-    Iterator<Entry<Key,Value>> iterator = scanner.iterator();
-    for (int i = 0; i < 5; i++) {
-      Assert.assertTrue(iterator.hasNext());
-      Entry<Key,Value> kv = iterator.next();
-      Pair<Key,Value> expected = dataAndFiles.keyVals.get(i);
-      Assert.assertEquals(expected.getFirst(), kv.getKey());
-      Assert.assertEquals(expected.getSecond(), kv.getValue());
-    }
-    Assert.assertFalse(iterator.hasNext());
-  }
-
-  private ImportTestFilesAndData prepareTestFiles() throws Throwable {
-    Configuration defaultConf = new Configuration();
-    Path tempFile = new Path("target/accumulo-test/import/sample.rf");
-    Path failures = new Path("target/accumulo-test/failures/");
-    FileSystem fs = FileSystem.get(new URI("file:///"), defaultConf);
-    fs.deleteOnExit(tempFile);
-    fs.deleteOnExit(failures);
-    fs.delete(failures, true);
-    fs.delete(tempFile, true);
-    fs.mkdirs(failures);
-    fs.mkdirs(tempFile.getParent());
-    FileSKVWriter writer = FileOperations.getInstance().newWriterBuilder().forFile(tempFile.toString(), fs, defaultConf)
-        .withTableConfiguration(AccumuloConfiguration.getDefaultConfiguration()).build();
-    writer.startDefaultLocalityGroup();
-    List<Pair<Key,Value>> keyVals = new ArrayList<>();
-    for (int i = 0; i < 5; i++) {
-      keyVals.add(new Pair<>(new Key("a" + i, "b" + i, "c" + i, new ColumnVisibility(""), 1000L + i), new Value(Integer.toString(i).getBytes())));
-    }
-    for (Pair<Key,Value> keyVal : keyVals) {
-      writer.append(keyVal.getFirst(), keyVal.getSecond());
-    }
-    writer.close();
-    ImportTestFilesAndData files = new ImportTestFilesAndData();
-    files.failurePath = failures;
-    files.importPath = tempFile.getParent();
-    files.keyVals = keyVals;
-    return files;
-  }
-
-  @Test(expected = TableNotFoundException.class)
-  public void testFailsWithNoTable() throws Throwable {
-    TableOperations tableOperations = conn.tableOperations();
-    ImportTestFilesAndData testFiles = prepareTestFiles();
-    tableOperations.importDirectory("doesnt_exist_table", testFiles.importPath.toString(), testFiles.failurePath.toString(), false);
-  }
-
-  @Test(expected = IOException.class)
-  public void testFailsWithNonEmptyFailureDirectory() throws Throwable {
-    TableOperations tableOperations = conn.tableOperations();
-    ImportTestFilesAndData testFiles = prepareTestFiles();
-    FileSystem fs = testFiles.failurePath.getFileSystem(new Configuration());
-    fs.open(testFiles.failurePath.suffix("/something")).close();
-    tableOperations.importDirectory("doesnt_exist_table", testFiles.importPath.toString(), testFiles.failurePath.toString(), false);
-  }
-
-  @Test
-  public void testDeleteRows() throws Exception {
-    TableOperations to = conn.tableOperations();
-    to.create("test");
-    BatchWriter bw = conn.createBatchWriter("test", new BatchWriterConfig());
-    for (int r = 0; r < 20; r++) {
-      Mutation m = new Mutation("" + r);
-      for (int c = 0; c < 5; c++) {
-        m.put(new Text("cf"), new Text("" + c), new Value(("" + c).getBytes()));
-      }
-      bw.addMutation(m);
-    }
-    bw.flush();
-    to.deleteRows("test", new Text("1"), new Text("2"));
-    Scanner s = conn.createScanner("test", Authorizations.EMPTY);
-    int oneCnt = 0;
-    for (Entry<Key,Value> entry : s) {
-      char rowStart = entry.getKey().getRow().toString().charAt(0);
-      Assert.assertTrue(rowStart != '2');
-      oneCnt += rowStart == '1' ? 1 : 0;
-    }
-    Assert.assertEquals(5, oneCnt);
-  }
-
-  @Test
-  public void testDeleteRowsWithNullKeys() throws Exception {
-    TableOperations to = conn.tableOperations();
-    to.create("test2");
-    BatchWriter bw = conn.createBatchWriter("test2", new BatchWriterConfig());
-    for (int r = 0; r < 30; r++) {
-      Mutation m = new Mutation(Integer.toString(r));
-      for (int c = 0; c < 5; c++) {
-        m.put(new Text("cf"), new Text(Integer.toString(c)), new Value(Integer.toString(c).getBytes()));
-      }
-      bw.addMutation(m);
-    }
-    bw.flush();
-
-    // test null end
-    // will remove rows 4 through 9, which sort lexicographically after "30" (6 rows * 5 cols = 30 entries)
-    to.deleteRows("test2", new Text("30"), null);
-    Scanner s = conn.createScanner("test2", Authorizations.EMPTY);
-    int rowCnt = 0;
-    for (Entry<Key,Value> entry : s) {
-      String rowId = entry.getKey().getRow().toString();
-      Assert.assertFalse(rowId.startsWith("30"));
-      rowCnt++;
-    }
-    s.close();
-    Assert.assertEquals(120, rowCnt);
-
-    // test null start
-    // will remove rows 0-1, 10-19, and 2 (everything sorting at or before "2")
-    to.deleteRows("test2", null, new Text("2"));
-    s = conn.createScanner("test2", Authorizations.EMPTY);
-    rowCnt = 0;
-    for (Entry<Key,Value> entry : s) {
-      char rowStart = entry.getKey().getRow().toString().charAt(0);
-      Assert.assertTrue(rowStart >= '2');
-      rowCnt++;
-    }
-    s.close();
-    Assert.assertEquals(55, rowCnt);
-
-    // test null start and end
-    // deletes everything still left
-    to.deleteRows("test2", null, null);
-    s = conn.createScanner("test2", Authorizations.EMPTY);
-    rowCnt = Iterators.size(s.iterator());
-    s.close();
-    to.delete("test2");
-    Assert.assertEquals(0, rowCnt);
-
-  }
-
-  @Test
-  public void testTableIdMap() throws Exception {
-    TableOperations tops = conn.tableOperations();
-    tops.create("foo");
-
-    // Should get a table ID, not the table name
-    Assert.assertNotEquals("foo", tops.tableIdMap().get("foo"));
-  }
-}
diff --git a/core/src/test/java/org/apache/accumulo/core/client/mock/TestBatchScanner821.java b/core/src/test/java/org/apache/accumulo/core/client/mock/TestBatchScanner821.java
deleted file mode 100644
index 4f041c9..0000000
--- a/core/src/test/java/org/apache/accumulo/core/client/mock/TestBatchScanner821.java
+++ /dev/null
@@ -1,77 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.core.client.mock;
-
-import static org.junit.Assert.assertEquals;
-
-import java.util.Collections;
-import java.util.Map.Entry;
-
-import org.apache.accumulo.core.client.BatchScanner;
-import org.apache.accumulo.core.client.BatchWriter;
-import org.apache.accumulo.core.client.BatchWriterConfig;
-import org.apache.accumulo.core.client.Connector;
-import org.apache.accumulo.core.client.IteratorSetting;
-import org.apache.accumulo.core.client.security.tokens.PasswordToken;
-import org.apache.accumulo.core.data.Key;
-import org.apache.accumulo.core.data.Mutation;
-import org.apache.accumulo.core.data.Range;
-import org.apache.accumulo.core.data.Value;
-import org.apache.accumulo.core.iterators.WrappingIterator;
-import org.apache.accumulo.core.security.Authorizations;
-import org.apache.hadoop.io.Text;
-import org.junit.Test;
-
-@Deprecated
-public class TestBatchScanner821 {
-
-  public static class TransformIterator extends WrappingIterator {
-
-    @Override
-    public Key getTopKey() {
-      Key k = getSource().getTopKey();
-      return new Key(new Text(k.getRow().toString().toLowerCase()), k.getColumnFamily(), k.getColumnQualifier(), k.getColumnVisibility(), k.getTimestamp());
-    }
-  }
-
-  @Test
-  public void test() throws Exception {
-    MockInstance inst = new MockInstance();
-    Connector conn = inst.getConnector("root", new PasswordToken(""));
-    conn.tableOperations().create("test");
-    BatchWriter bw = conn.createBatchWriter("test", new BatchWriterConfig());
-    for (String row : "A,B,C,D".split(",")) {
-      Mutation m = new Mutation(row);
-      m.put("cf", "cq", "");
-      bw.addMutation(m);
-    }
-    bw.flush();
-    BatchScanner bs = conn.createBatchScanner("test", Authorizations.EMPTY, 1);
-    IteratorSetting cfg = new IteratorSetting(100, TransformIterator.class);
-    bs.addScanIterator(cfg);
-    bs.setRanges(Collections.singletonList(new Range("A", "Z")));
-    StringBuilder sb = new StringBuilder();
-    String comma = "";
-    for (Entry<Key,Value> entry : bs) {
-      sb.append(comma);
-      sb.append(entry.getKey().getRow());
-      comma = ",";
-    }
-    assertEquals("a,b,c,d", sb.toString());
-  }
-
-}
diff --git a/core/src/test/java/org/apache/accumulo/core/conf/DefaultConfigurationTest.java b/core/src/test/java/org/apache/accumulo/core/conf/DefaultConfigurationTest.java
index cb6810c..9e4c5ec 100644
--- a/core/src/test/java/org/apache/accumulo/core/conf/DefaultConfigurationTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/conf/DefaultConfigurationTest.java
@@ -23,9 +23,6 @@
 import org.junit.Before;
 import org.junit.Test;
 
-import com.google.common.base.Predicate;
-import com.google.common.base.Predicates;
-
 public class DefaultConfigurationTest {
   private DefaultConfiguration c;
 
@@ -41,9 +38,8 @@
 
   @Test
   public void testGetProperties() {
-    Predicate<String> all = Predicates.alwaysTrue();
     Map<String,String> p = new java.util.HashMap<>();
-    c.getProperties(p, all);
+    c.getProperties(p, x -> true);
     assertEquals(Property.MASTER_CLIENTPORT.getDefaultValue(), p.get(Property.MASTER_CLIENTPORT.getKey()));
   }
 
diff --git a/core/src/test/java/org/apache/accumulo/core/conf/ObservableConfigurationTest.java b/core/src/test/java/org/apache/accumulo/core/conf/ObservableConfigurationTest.java
index 8f73907..e6c7169 100644
--- a/core/src/test/java/org/apache/accumulo/core/conf/ObservableConfigurationTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/conf/ObservableConfigurationTest.java
@@ -24,12 +24,11 @@
 
 import java.util.Collection;
 import java.util.Map;
+import java.util.function.Predicate;
 
 import org.junit.Before;
 import org.junit.Test;
 
-import com.google.common.base.Predicate;
-
 public class ObservableConfigurationTest {
   private static class TestObservableConfig extends ObservableConfiguration {
     @Override
diff --git a/core/src/test/java/org/apache/accumulo/core/conf/PropertyTypeTest.java b/core/src/test/java/org/apache/accumulo/core/conf/PropertyTypeTest.java
index 9852ee8..8a01eb8 100644
--- a/core/src/test/java/org/apache/accumulo/core/conf/PropertyTypeTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/conf/PropertyTypeTest.java
@@ -20,18 +20,17 @@
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
 
-import java.lang.reflect.Method;
 import java.util.Arrays;
+import java.util.List;
+import java.util.stream.Collectors;
+import java.util.stream.Stream;
 
 import org.junit.Before;
 import org.junit.Rule;
 import org.junit.Test;
 import org.junit.rules.TestName;
 
-import com.google.common.base.Function;
 import com.google.common.base.Joiner;
-import com.google.common.base.Predicate;
-import com.google.common.collect.Iterables;
 
 public class PropertyTypeTest {
 
@@ -65,33 +64,16 @@
   @Test
   public void testFullCoverage() {
     // This test checks the remainder of the methods in this class to ensure each property type has a corresponding test
-    Iterable<String> types = Iterables.transform(Arrays.asList(PropertyType.values()), new Function<PropertyType,String>() {
-      @Override
-      public String apply(final PropertyType input) {
-        return input.name();
-      }
+    Stream<String> types = Arrays.stream(PropertyType.values()).map(v -> v.name());
+
+    List<String> typesTested = Arrays.stream(this.getClass().getMethods()).map(m -> m.getName()).filter(m -> m.startsWith("testType")).map(m -> m.substring(8))
+        .collect(Collectors.toList());
+
+    types = types.map(t -> {
+      assertTrue(PropertyType.class.getSimpleName() + "." + t + " does not have a test.", typesTested.contains(t));
+      return t;
     });
-    Iterable<String> typesTested = Iterables.transform(
-        Iterables.filter(Iterables.transform(Arrays.asList(this.getClass().getMethods()), new Function<Method,String>() {
-          @Override
-          public String apply(final Method input) {
-            return input.getName();
-          }
-        }), new Predicate<String>() {
-          @Override
-          public boolean apply(final String input) {
-            return input.startsWith("testType");
-          }
-        }), new Function<String,String>() {
-          @Override
-          public String apply(final String input) {
-            return input.substring(8);
-          }
-        });
-    for (String t : types) {
-      assertTrue(PropertyType.class.getSimpleName() + "." + t + " does not have a test.", Iterables.contains(typesTested, t));
-    }
-    assertEquals(Iterables.size(types), Iterables.size(typesTested));
+    assertEquals(types.count(), typesTested.size());
   }
 
   private void valid(final String... args) {
diff --git a/core/src/test/java/org/apache/accumulo/core/conf/SiteConfigurationTest.java b/core/src/test/java/org/apache/accumulo/core/conf/SiteConfigurationTest.java
index f89dbfa..896bf8c 100644
--- a/core/src/test/java/org/apache/accumulo/core/conf/SiteConfigurationTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/conf/SiteConfigurationTest.java
@@ -20,6 +20,7 @@
 import java.net.URL;
 import java.util.HashMap;
 import java.util.Map;
+import java.util.function.Predicate;
 
 import org.apache.hadoop.conf.Configuration;
 import org.easymock.EasyMock;
@@ -27,9 +28,6 @@
 import org.junit.BeforeClass;
 import org.junit.Test;
 
-import com.google.common.base.Predicate;
-import com.google.common.base.Predicates;
-
 public class SiteConfigurationTest {
   private static boolean isCredentialProviderAvailable;
 
@@ -67,7 +65,7 @@
     EasyMock.replay(siteCfg);
 
     Map<String,String> props = new HashMap<>();
-    Predicate<String> all = Predicates.alwaysTrue();
+    Predicate<String> all = x -> true;
     siteCfg.getProperties(props, all);
 
     Assert.assertEquals("mysecret", props.get(Property.INSTANCE_SECRET.getKey()));
diff --git a/core/src/test/java/org/apache/accumulo/core/data/KeyExtentTest.java b/core/src/test/java/org/apache/accumulo/core/data/KeyExtentTest.java
index 79968be..0bff486 100644
--- a/core/src/test/java/org/apache/accumulo/core/data/KeyExtentTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/data/KeyExtentTest.java
@@ -16,13 +16,9 @@
  */
 package org.apache.accumulo.core.data;
 
-import static org.hamcrest.CoreMatchers.hasItem;
-import static org.hamcrest.CoreMatchers.hasItems;
-import static org.hamcrest.CoreMatchers.is;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertNull;
-import static org.junit.Assert.assertThat;
 import static org.junit.Assert.assertTrue;
 
 import java.io.ByteArrayInputStream;
@@ -30,9 +26,6 @@
 import java.io.DataInputStream;
 import java.io.DataOutputStream;
 import java.io.IOException;
-import java.util.Arrays;
-import java.util.Collection;
-import java.util.Collections;
 import java.util.SortedMap;
 import java.util.TreeMap;
 import java.util.TreeSet;
@@ -292,57 +285,6 @@
     return out;
   }
 
-  @SuppressWarnings("deprecation")
-  @Test
-  public void testKeyExtentsForSimpleRange() {
-    Collection<KeyExtent> results;
-
-    results = KeyExtent.getKeyExtentsForRange(null, null, null);
-    assertTrue("Non-empty set returned from no extents", results.isEmpty());
-
-    results = KeyExtent.getKeyExtentsForRange(null, null, Collections.<KeyExtent> emptySet());
-    assertTrue("Non-empty set returned from no extents", results.isEmpty());
-
-    KeyExtent t = nke("t", null, null);
-    results = KeyExtent.getKeyExtentsForRange(null, null, Collections.<KeyExtent> singleton(t));
-    assertEquals("Single tablet should always be returned", 1, results.size());
-    assertEquals(t, results.iterator().next());
-  }
-
-  @SuppressWarnings("deprecation")
-  @Test
-  public void testKeyExtentsForRange() {
-    KeyExtent b = nke("t", "b", null);
-    KeyExtent e = nke("t", "e", "b");
-    KeyExtent h = nke("t", "h", "e");
-    KeyExtent m = nke("t", "m", "h");
-    KeyExtent z = nke("t", null, "m");
-
-    set0.addAll(Arrays.asList(b, e, h, m, z));
-
-    Collection<KeyExtent> results;
-
-    results = KeyExtent.getKeyExtentsForRange(null, null, set0);
-    assertThat("infinite range should return full set", results.size(), is(5));
-    assertThat("infinite range should return full set", results, hasItems(b, e, h, m, z));
-
-    results = KeyExtent.getKeyExtentsForRange(new Text("a"), new Text("z"), set0);
-    assertThat("full overlap should return full set", results.size(), is(5));
-    assertThat("full overlap should return full set", results, hasItems(b, e, h, m, z));
-
-    results = KeyExtent.getKeyExtentsForRange(null, new Text("f"), set0);
-    assertThat("end row should return head set", results.size(), is(3));
-    assertThat("end row should return head set", results, hasItems(b, e, h));
-
-    results = KeyExtent.getKeyExtentsForRange(new Text("f"), null, set0);
-    assertThat("start row should return tail set", results.size(), is(3));
-    assertThat("start row should return tail set", results, hasItems(h, m, z));
-
-    results = KeyExtent.getKeyExtentsForRange(new Text("f"), new Text("g"), set0);
-    assertThat("slice should return correct subset", results.size(), is(1));
-    assertThat("slice should return correct subset", results, hasItem(h));
-  }
-
   @Test
   public void testDecodeEncode() {
     assertNull(KeyExtent.decodePrevEndRow(KeyExtent.encodePrevEndRow(null)));
diff --git a/core/src/test/java/org/apache/accumulo/core/data/ValueTest.java b/core/src/test/java/org/apache/accumulo/core/data/ValueTest.java
index 93fab1f..0c0042b 100644
--- a/core/src/test/java/org/apache/accumulo/core/data/ValueTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/data/ValueTest.java
@@ -33,7 +33,6 @@
 import java.io.DataInputStream;
 import java.io.DataOutputStream;
 import java.nio.ByteBuffer;
-import java.util.List;
 
 import org.apache.hadoop.io.Text;
 import org.junit.Before;
@@ -103,13 +102,6 @@
   }
 
   @Test
-  public void testByteBufferCopy() {
-    @SuppressWarnings("deprecation")
-    Value v = new Value(DATABUFF, true);
-    assertArrayEquals(DATA, v.get());
-  }
-
-  @Test
   public void testValueCopy() {
     Value ov = createMock(Value.class);
     expect(ov.get()).andReturn(DATA);
@@ -200,24 +192,6 @@
   }
 
   @Test
-  @Deprecated
-  public void testToArray() {
-    List<byte[]> l = new java.util.ArrayList<>();
-    byte[] one = toBytes("one");
-    byte[] two = toBytes("two");
-    byte[] three = toBytes("three");
-    l.add(one);
-    l.add(two);
-    l.add(three);
-
-    byte[][] a = Value.toArray(l);
-    assertEquals(3, a.length);
-    assertArrayEquals(one, a[0]);
-    assertArrayEquals(two, a[1]);
-    assertArrayEquals(three, a[2]);
-  }
-
-  @Test
   public void testString() {
     Value v1 = new Value("abc");
     Value v2 = new Value("abc".getBytes(UTF_8));
diff --git a/core/src/test/java/org/apache/accumulo/core/iterators/AggregatingIteratorTest.java b/core/src/test/java/org/apache/accumulo/core/iterators/AggregatingIteratorTest.java
deleted file mode 100644
index 09064a5..0000000
--- a/core/src/test/java/org/apache/accumulo/core/iterators/AggregatingIteratorTest.java
+++ /dev/null
@@ -1,471 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.core.iterators;
-
-import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertFalse;
-import static org.junit.Assert.assertTrue;
-
-import java.io.IOException;
-import java.util.ArrayList;
-import java.util.Collection;
-import java.util.Collections;
-import java.util.HashMap;
-import java.util.List;
-import java.util.Map;
-import java.util.TreeMap;
-
-import org.apache.accumulo.core.data.ByteSequence;
-import org.apache.accumulo.core.data.Key;
-import org.apache.accumulo.core.data.Range;
-import org.apache.accumulo.core.data.Value;
-import org.apache.accumulo.core.iterators.system.MultiIterator;
-import org.apache.hadoop.io.Text;
-import org.junit.Test;
-
-public class AggregatingIteratorTest {
-
-  private static final Collection<ByteSequence> EMPTY_COL_FAMS = new ArrayList<>();
-
-  /**
-   * @deprecated since 1.4; visible only for testing
-   */
-  @Deprecated
-  public static class SummationAggregator implements org.apache.accumulo.core.iterators.aggregation.Aggregator {
-
-    int sum;
-
-    @Override
-    public Value aggregate() {
-      return new Value((sum + "").getBytes());
-    }
-
-    @Override
-    public void collect(Value value) {
-      int val = Integer.parseInt(value.toString());
-
-      sum += val;
-    }
-
-    @Override
-    public void reset() {
-      sum = 0;
-
-    }
-
-  }
-
-  static Key nk(int row, int colf, int colq, long ts, boolean deleted) {
-    Key k = nk(row, colf, colq, ts);
-    k.setDeleted(true);
-    return k;
-  }
-
-  static Key nk(int row, int colf, int colq, long ts) {
-    return new Key(nr(row), new Text(String.format("cf%03d", colf)), new Text(String.format("cq%03d", colq)), ts);
-  }
-
-  static Range nr(int row, int colf, int colq, long ts, boolean inclusive) {
-    return new Range(nk(row, colf, colq, ts), inclusive, null, true);
-  }
-
-  static Range nr(int row, int colf, int colq, long ts) {
-    return nr(row, colf, colq, ts, true);
-  }
-
-  static void nkv(TreeMap<Key,Value> tm, int row, int colf, int colq, long ts, boolean deleted, String val) {
-    Key k = nk(row, colf, colq, ts);
-    k.setDeleted(deleted);
-    tm.put(k, new Value(val.getBytes()));
-  }
-
-  static Text nr(int row) {
-    return new Text(String.format("r%03d", row));
-  }
-
-  @SuppressWarnings("deprecation")
-  @Test
-  public void test1() throws IOException {
-
-    TreeMap<Key,Value> tm1 = new TreeMap<>();
-
-    // keys that do not aggregate
-    nkv(tm1, 1, 1, 1, 1, false, "2");
-    nkv(tm1, 1, 1, 1, 2, false, "3");
-    nkv(tm1, 1, 1, 1, 3, false, "4");
-
-    AggregatingIterator ai = new AggregatingIterator();
-
-    Map<String,String> emptyMap = Collections.emptyMap();
-    ai.init(new SortedMapIterator(tm1), emptyMap, null);
-    ai.seek(new Range(), EMPTY_COL_FAMS, false);
-
-    assertTrue(ai.hasTop());
-    assertEquals(nk(1, 1, 1, 3), ai.getTopKey());
-    assertEquals("4", ai.getTopValue().toString());
-
-    ai.next();
-
-    assertTrue(ai.hasTop());
-    assertEquals(nk(1, 1, 1, 2), ai.getTopKey());
-    assertEquals("3", ai.getTopValue().toString());
-
-    ai.next();
-
-    assertTrue(ai.hasTop());
-    assertEquals(nk(1, 1, 1, 1), ai.getTopKey());
-    assertEquals("2", ai.getTopValue().toString());
-
-    ai.next();
-
-    assertFalse(ai.hasTop());
-
-    // try seeking
-
-    ai.seek(nr(1, 1, 1, 2), EMPTY_COL_FAMS, false);
-
-    assertTrue(ai.hasTop());
-    assertEquals(nk(1, 1, 1, 2), ai.getTopKey());
-    assertEquals("3", ai.getTopValue().toString());
-
-    ai.next();
-
-    assertTrue(ai.hasTop());
-    assertEquals(nk(1, 1, 1, 1), ai.getTopKey());
-    assertEquals("2", ai.getTopValue().toString());
-
-    ai.next();
-
-    assertFalse(ai.hasTop());
-
-    // seek after everything
-    ai.seek(nr(1, 1, 1, 0), EMPTY_COL_FAMS, false);
-
-    assertFalse(ai.hasTop());
-
-  }
-
-  @SuppressWarnings("deprecation")
-  @Test
-  public void test2() throws IOException {
-    TreeMap<Key,Value> tm1 = new TreeMap<>();
-
-    // keys that aggregate
-    nkv(tm1, 1, 1, 1, 1, false, "2");
-    nkv(tm1, 1, 1, 1, 2, false, "3");
-    nkv(tm1, 1, 1, 1, 3, false, "4");
-
-    AggregatingIterator ai = new AggregatingIterator();
-
-    Map<String,String> opts = new HashMap<>();
-
-    opts.put("cf001", SummationAggregator.class.getName());
-
-    ai.init(new SortedMapIterator(tm1), opts, null);
-    ai.seek(new Range(), EMPTY_COL_FAMS, false);
-
-    assertTrue(ai.hasTop());
-    assertEquals(nk(1, 1, 1, 3), ai.getTopKey());
-    assertEquals("9", ai.getTopValue().toString());
-
-    ai.next();
-
-    assertFalse(ai.hasTop());
-
-    // try seeking to the beginning of a key that aggregates
-
-    ai.seek(nr(1, 1, 1, 3), EMPTY_COL_FAMS, false);
-
-    assertTrue(ai.hasTop());
-    assertEquals(nk(1, 1, 1, 3), ai.getTopKey());
-    assertEquals("9", ai.getTopValue().toString());
-
-    ai.next();
-
-    assertFalse(ai.hasTop());
-
-    // try seeking the middle of a key the aggregates
-    ai.seek(nr(1, 1, 1, 2), EMPTY_COL_FAMS, false);
-
-    assertFalse(ai.hasTop());
-
-    // try seeking to the end of a key the aggregates
-    ai.seek(nr(1, 1, 1, 1), EMPTY_COL_FAMS, false);
-
-    assertFalse(ai.hasTop());
-
-    // try seeking before a key the aggregates
-    ai.seek(nr(1, 1, 1, 4), EMPTY_COL_FAMS, false);
-
-    assertTrue(ai.hasTop());
-    assertEquals(nk(1, 1, 1, 3), ai.getTopKey());
-    assertEquals("9", ai.getTopValue().toString());
-
-    ai.next();
-
-    assertFalse(ai.hasTop());
-  }
-
-  @SuppressWarnings("deprecation")
-  @Test
-  public void test3() throws IOException {
-
-    TreeMap<Key,Value> tm1 = new TreeMap<>();
-
-    // keys that aggregate
-    nkv(tm1, 1, 1, 1, 1, false, "2");
-    nkv(tm1, 1, 1, 1, 2, false, "3");
-    nkv(tm1, 1, 1, 1, 3, false, "4");
-
-    // keys that do not aggregate
-    nkv(tm1, 2, 2, 1, 1, false, "2");
-    nkv(tm1, 2, 2, 1, 2, false, "3");
-
-    AggregatingIterator ai = new AggregatingIterator();
-
-    Map<String,String> opts = new HashMap<>();
-
-    opts.put("cf001", SummationAggregator.class.getName());
-
-    ai.init(new SortedMapIterator(tm1), opts, null);
-    ai.seek(new Range(), EMPTY_COL_FAMS, false);
-
-    assertTrue(ai.hasTop());
-    assertEquals(nk(1, 1, 1, 3), ai.getTopKey());
-    assertEquals("9", ai.getTopValue().toString());
-
-    ai.next();
-
-    assertTrue(ai.hasTop());
-    assertEquals(nk(2, 2, 1, 2), ai.getTopKey());
-    assertEquals("3", ai.getTopValue().toString());
-
-    ai.next();
-
-    assertTrue(ai.hasTop());
-    assertEquals(nk(2, 2, 1, 1), ai.getTopKey());
-    assertEquals("2", ai.getTopValue().toString());
-
-    ai.next();
-
-    assertFalse(ai.hasTop());
-
-    // seek after key that aggregates
-    ai.seek(nr(1, 1, 1, 2), EMPTY_COL_FAMS, false);
-
-    assertTrue(ai.hasTop());
-    assertEquals(nk(2, 2, 1, 2), ai.getTopKey());
-    assertEquals("3", ai.getTopValue().toString());
-
-    // seek before key that aggregates
-    ai.seek(nr(1, 1, 1, 4), EMPTY_COL_FAMS, false);
-
-    assertTrue(ai.hasTop());
-    assertEquals(nk(1, 1, 1, 3), ai.getTopKey());
-    assertEquals("9", ai.getTopValue().toString());
-
-    ai.next();
-
-    assertTrue(ai.hasTop());
-    assertEquals(nk(2, 2, 1, 2), ai.getTopKey());
-    assertEquals("3", ai.getTopValue().toString());
-
-  }
-
-  @SuppressWarnings("deprecation")
-  @Test
-  public void test4() throws IOException {
-
-    TreeMap<Key,Value> tm1 = new TreeMap<>();
-
-    // keys that do not aggregate
-    nkv(tm1, 0, 0, 1, 1, false, "7");
-
-    // keys that aggregate
-    nkv(tm1, 1, 1, 1, 1, false, "2");
-    nkv(tm1, 1, 1, 1, 2, false, "3");
-    nkv(tm1, 1, 1, 1, 3, false, "4");
-
-    // keys that do not aggregate
-    nkv(tm1, 2, 2, 1, 1, false, "2");
-    nkv(tm1, 2, 2, 1, 2, false, "3");
-
-    AggregatingIterator ai = new AggregatingIterator();
-
-    Map<String,String> opts = new HashMap<>();
-
-    opts.put("cf001", SummationAggregator.class.getName());
-
-    ai.init(new SortedMapIterator(tm1), opts, null);
-    ai.seek(new Range(), EMPTY_COL_FAMS, false);
-
-    assertTrue(ai.hasTop());
-    assertEquals(nk(0, 0, 1, 1), ai.getTopKey());
-    assertEquals("7", ai.getTopValue().toString());
-
-    ai.next();
-
-    assertTrue(ai.hasTop());
-    assertEquals(nk(1, 1, 1, 3), ai.getTopKey());
-    assertEquals("9", ai.getTopValue().toString());
-
-    ai.next();
-
-    assertTrue(ai.hasTop());
-    assertEquals(nk(2, 2, 1, 2), ai.getTopKey());
-    assertEquals("3", ai.getTopValue().toString());
-
-    ai.next();
-
-    assertTrue(ai.hasTop());
-    assertEquals(nk(2, 2, 1, 1), ai.getTopKey());
-    assertEquals("2", ai.getTopValue().toString());
-
-    ai.next();
-
-    assertFalse(ai.hasTop());
-
-    // seek test
-    ai.seek(nr(0, 0, 1, 0), EMPTY_COL_FAMS, false);
-
-    assertTrue(ai.hasTop());
-    assertEquals(nk(1, 1, 1, 3), ai.getTopKey());
-    assertEquals("9", ai.getTopValue().toString());
-
-    ai.next();
-
-    assertTrue(ai.hasTop());
-    assertEquals(nk(2, 2, 1, 2), ai.getTopKey());
-    assertEquals("3", ai.getTopValue().toString());
-
-    // seek after key that aggregates
-    ai.seek(nr(1, 1, 1, 2), EMPTY_COL_FAMS, false);
-
-    assertTrue(ai.hasTop());
-    assertEquals(nk(2, 2, 1, 2), ai.getTopKey());
-    assertEquals("3", ai.getTopValue().toString());
-
-  }
-
-  @SuppressWarnings("deprecation")
-  @Test
-  public void test5() throws IOException {
-    // try aggregating across multiple data sets that contain
-    // the exact same keys w/ different values
-
-    TreeMap<Key,Value> tm1 = new TreeMap<>();
-    nkv(tm1, 1, 1, 1, 1, false, "2");
-
-    TreeMap<Key,Value> tm2 = new TreeMap<>();
-    nkv(tm2, 1, 1, 1, 1, false, "3");
-
-    TreeMap<Key,Value> tm3 = new TreeMap<>();
-    nkv(tm3, 1, 1, 1, 1, false, "4");
-
-    AggregatingIterator ai = new AggregatingIterator();
-    Map<String,String> opts = new HashMap<>();
-    opts.put("cf001", SummationAggregator.class.getName());
-
-    List<SortedKeyValueIterator<Key,Value>> sources = new ArrayList<>(3);
-    sources.add(new SortedMapIterator(tm1));
-    sources.add(new SortedMapIterator(tm2));
-    sources.add(new SortedMapIterator(tm3));
-
-    MultiIterator mi = new MultiIterator(sources, true);
-    ai.init(mi, opts, null);
-    ai.seek(new Range(), EMPTY_COL_FAMS, false);
-
-    assertTrue(ai.hasTop());
-    assertEquals(nk(1, 1, 1, 1), ai.getTopKey());
-    assertEquals("9", ai.getTopValue().toString());
-  }
-
-  @SuppressWarnings("deprecation")
-  @Test
-  public void test6() throws IOException {
-    TreeMap<Key,Value> tm1 = new TreeMap<>();
-
-    // keys that aggregate
-    nkv(tm1, 1, 1, 1, 1, false, "2");
-    nkv(tm1, 1, 1, 1, 2, false, "3");
-    nkv(tm1, 1, 1, 1, 3, false, "4");
-
-    AggregatingIterator ai = new AggregatingIterator();
-
-    Map<String,String> opts = new HashMap<>();
-
-    opts.put("cf001", SummationAggregator.class.getName());
-
-    ai.init(new SortedMapIterator(tm1), opts, new DefaultIteratorEnvironment());
-
-    // try seeking to the beginning of a key that aggregates
-
-    ai.seek(nr(1, 1, 1, 3, false), EMPTY_COL_FAMS, false);
-
-    assertFalse(ai.hasTop());
-
-  }
-
-  @SuppressWarnings("deprecation")
-  @Test
-  public void test7() throws IOException {
-    // test that delete is not aggregated
-
-    TreeMap<Key,Value> tm1 = new TreeMap<>();
-
-    nkv(tm1, 1, 1, 1, 2, true, "");
-    nkv(tm1, 1, 1, 1, 3, false, "4");
-    nkv(tm1, 1, 1, 1, 4, false, "3");
-
-    AggregatingIterator ai = new AggregatingIterator();
-
-    Map<String,String> opts = new HashMap<>();
-
-    opts.put("cf001", SummationAggregator.class.getName());
-
-    ai.init(new SortedMapIterator(tm1), opts, new DefaultIteratorEnvironment());
-
-    ai.seek(nr(1, 1, 1, 4, true), EMPTY_COL_FAMS, false);
-
-    assertTrue(ai.hasTop());
-    assertEquals(nk(1, 1, 1, 4), ai.getTopKey());
-    assertEquals("7", ai.getTopValue().toString());
-
-    ai.next();
-    assertTrue(ai.hasTop());
-    assertEquals(nk(1, 1, 1, 2, true), ai.getTopKey());
-    assertEquals("", ai.getTopValue().toString());
-
-    ai.next();
-    assertFalse(ai.hasTop());
-
-    tm1 = new TreeMap<>();
-    nkv(tm1, 1, 1, 1, 2, true, "");
-    ai = new AggregatingIterator();
-    ai.init(new SortedMapIterator(tm1), opts, new DefaultIteratorEnvironment());
-
-    ai.seek(nr(1, 1, 1, 4, true), EMPTY_COL_FAMS, false);
-
-    assertTrue(ai.hasTop());
-    assertEquals(nk(1, 1, 1, 2, true), ai.getTopKey());
-    assertEquals("", ai.getTopValue().toString());
-
-    ai.next();
-    assertFalse(ai.hasTop());
-
-  }
-}
diff --git a/core/src/test/java/org/apache/accumulo/core/iterators/aggregation/NumSummationTest.java b/core/src/test/java/org/apache/accumulo/core/iterators/aggregation/NumSummationTest.java
deleted file mode 100644
index 5a56ead..0000000
--- a/core/src/test/java/org/apache/accumulo/core/iterators/aggregation/NumSummationTest.java
+++ /dev/null
@@ -1,149 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.core.iterators.aggregation;
-
-import static org.junit.Assert.assertTrue;
-import static org.junit.Assert.fail;
-
-import java.io.IOException;
-
-import org.apache.accumulo.core.data.Value;
-import org.junit.Test;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-/**
- * @deprecated since 1.4
- */
-@Deprecated
-public class NumSummationTest {
-
-  private static final Logger log = LoggerFactory.getLogger(NumSummationTest.class);
-
-  public byte[] init(int n) {
-    byte[] b = new byte[n];
-    for (int i = 0; i < b.length; i++)
-      b[i] = 0;
-    return b;
-  }
-
-  @Test
-  public void test1() {
-    try {
-      long[] la = {1l, 2l, 3l};
-      byte[] b = NumArraySummation.longArrayToBytes(la);
-      long[] la2 = NumArraySummation.bytesToLongArray(b);
-
-      assertTrue(la.length == la2.length);
-      for (int i = 0; i < la.length; i++) {
-        assertTrue(i + ": " + la[i] + " does not equal " + la2[i], la[i] == la2[i]);
-      }
-    } catch (Exception e) {
-      assertTrue(false);
-    }
-  }
-
-  @Test
-  public void test2() {
-    try {
-      NumArraySummation nas = new NumArraySummation();
-      long[] la = {1l, 2l, 3l};
-      nas.collect(new Value(NumArraySummation.longArrayToBytes(la)));
-      long[] la2 = {3l, 2l, 1l, 0l};
-      nas.collect(new Value(NumArraySummation.longArrayToBytes(la2)));
-      la = NumArraySummation.bytesToLongArray(nas.aggregate().get());
-      assertTrue(la.length == 4);
-      for (int i = 0; i < la.length - 1; i++) {
-        assertTrue(la[i] == 4);
-      }
-      assertTrue(la[la.length - 1] == 0);
-      nas.reset();
-      la = NumArraySummation.bytesToLongArray(nas.aggregate().get());
-      assertTrue(la.length == 0);
-    } catch (Exception e) {
-      log.error("{}", e.getMessage(), e);
-      assertTrue(false);
-    }
-  }
-
-  @Test
-  public void test3() {
-    try {
-      NumArraySummation nas = new NumArraySummation();
-      long[] la = {Long.MAX_VALUE, Long.MIN_VALUE, 3l, -5l, 5l, 5l};
-      nas.collect(new Value(NumArraySummation.longArrayToBytes(la)));
-      long[] la2 = {1l, -3l, 2l, 10l};
-      nas.collect(new Value(NumArraySummation.longArrayToBytes(la2)));
-      la = NumArraySummation.bytesToLongArray(nas.aggregate().get());
-      assertTrue(la.length == 6);
-      for (int i = 2; i < la.length; i++) {
-        assertTrue(la[i] == 5);
-      }
-      assertTrue("max long plus one was " + la[0], la[0] == Long.MAX_VALUE);
-      assertTrue("min long minus 3 was " + la[1], la[1] == Long.MIN_VALUE);
-    } catch (Exception e) {
-      assertTrue(false);
-    }
-  }
-
-  @Test
-  public void test4() {
-    try {
-      long l = 5l;
-      byte[] b = NumSummation.longToBytes(l);
-      long l2 = NumSummation.bytesToLong(b);
-
-      assertTrue(l == l2);
-    } catch (Exception e) {
-      assertTrue(false);
-    }
-  }
-
-  @Test
-  public void test5() {
-    try {
-      NumSummation ns = new NumSummation();
-      for (long l = -5l; l < 8l; l++) {
-        ns.collect(new Value(NumSummation.longToBytes(l)));
-      }
-      long l = NumSummation.bytesToLong(ns.aggregate().get());
-      assertTrue("l was " + l, l == 13);
-
-      ns.collect(new Value(NumSummation.longToBytes(Long.MAX_VALUE)));
-      l = NumSummation.bytesToLong(ns.aggregate().get());
-      assertTrue("l was " + l, l == Long.MAX_VALUE);
-
-      ns.collect(new Value(NumSummation.longToBytes(Long.MIN_VALUE)));
-      l = NumSummation.bytesToLong(ns.aggregate().get());
-      assertTrue("l was " + l, l == -1);
-
-      ns.collect(new Value(NumSummation.longToBytes(Long.MIN_VALUE)));
-      l = NumSummation.bytesToLong(ns.aggregate().get());
-      assertTrue("l was " + l, l == Long.MIN_VALUE);
-
-      ns.collect(new Value(NumSummation.longToBytes(Long.MIN_VALUE)));
-      l = NumSummation.bytesToLong(ns.aggregate().get());
-      assertTrue("l was " + l, l == Long.MIN_VALUE);
-
-      ns.reset();
-      l = NumSummation.bytesToLong(ns.aggregate().get());
-      assertTrue("l was " + l, l == 0);
-    } catch (IOException | RuntimeException e) {
-      fail();
-    }
-  }
-}
diff --git a/core/src/test/java/org/apache/accumulo/core/iterators/aggregation/conf/AggregatorConfigurationTest.java b/core/src/test/java/org/apache/accumulo/core/iterators/aggregation/conf/AggregatorConfigurationTest.java
deleted file mode 100644
index 61693ab..0000000
--- a/core/src/test/java/org/apache/accumulo/core/iterators/aggregation/conf/AggregatorConfigurationTest.java
+++ /dev/null
@@ -1,72 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.core.iterators.aggregation.conf;
-
-import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertNull;
-
-import org.apache.hadoop.io.Text;
-import org.junit.Test;
-
-public class AggregatorConfigurationTest {
-
-  @Test
-  public void testBinary() {
-    Text colf = new Text();
-    Text colq = new Text();
-
-    for (int i = 0; i < 256; i++) {
-      colf.append(new byte[] {(byte) i}, 0, 1);
-      colq.append(new byte[] {(byte) (255 - i)}, 0, 1);
-    }
-
-    runTest(colf, colq);
-    runTest(colf);
-  }
-
-  @Test
-  public void testBasic() {
-    runTest(new Text("colf1"), new Text("cq2"));
-    runTest(new Text("colf1"));
-  }
-
-  @SuppressWarnings("deprecation")
-  private void runTest(Text colf) {
-    String encodedCols;
-    org.apache.accumulo.core.iterators.conf.PerColumnIteratorConfig ac3 = new org.apache.accumulo.core.iterators.conf.PerColumnIteratorConfig(colf,
-        "com.foo.SuperAgg");
-    encodedCols = ac3.encodeColumns();
-    org.apache.accumulo.core.iterators.conf.PerColumnIteratorConfig ac4 = org.apache.accumulo.core.iterators.conf.PerColumnIteratorConfig.decodeColumns(
-        encodedCols, "com.foo.SuperAgg");
-
-    assertEquals(colf, ac4.getColumnFamily());
-    assertNull(ac4.getColumnQualifier());
-  }
-
-  @SuppressWarnings("deprecation")
-  private void runTest(Text colf, Text colq) {
-    org.apache.accumulo.core.iterators.conf.PerColumnIteratorConfig ac = new org.apache.accumulo.core.iterators.conf.PerColumnIteratorConfig(colf, colq,
-        "com.foo.SuperAgg");
-    String encodedCols = ac.encodeColumns();
-    org.apache.accumulo.core.iterators.conf.PerColumnIteratorConfig ac2 = org.apache.accumulo.core.iterators.conf.PerColumnIteratorConfig.decodeColumns(
-        encodedCols, "com.foo.SuperAgg");
-
-    assertEquals(colf, ac2.getColumnFamily());
-    assertEquals(colq, ac2.getColumnQualifier());
-  }
-
-}
diff --git a/core/src/test/java/org/apache/accumulo/core/iterators/user/FilterTest.java b/core/src/test/java/org/apache/accumulo/core/iterators/user/FilterTest.java
index e7e2266..6546352 100644
--- a/core/src/test/java/org/apache/accumulo/core/iterators/user/FilterTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/iterators/user/FilterTest.java
@@ -91,20 +91,20 @@
       Key k = new Key(new Text(String.format("%03d", i)), colf, colq);
       tm.put(k, dv);
     }
-    assertTrue(tm.size() == 1000);
+    assertEquals(1000, tm.size());
 
     Filter filter1 = new SimpleFilter();
     filter1.init(new SortedMapIterator(tm), EMPTY_OPTS, null);
     filter1.seek(new Range(), EMPTY_COL_FAMS, false);
     int size = size(filter1);
-    assertTrue("size = " + size, size == 100);
+    assertEquals(100, size);
 
     Filter fi = new SimpleFilter();
     fi.init(new SortedMapIterator(tm), EMPTY_OPTS, null);
     Key k = new Key(new Text("500"));
     fi.seek(new Range(k, null), EMPTY_COL_FAMS, false);
     size = size(fi);
-    assertTrue("size = " + size, size == 50);
+    assertEquals(50, size);
 
     filter1 = new SimpleFilter();
     filter1.init(new SortedMapIterator(tm), EMPTY_OPTS, null);
@@ -112,7 +112,7 @@
     filter2.init(filter1, EMPTY_OPTS, null);
     filter2.seek(new Range(), EMPTY_COL_FAMS, false);
     size = size(filter2);
-    assertTrue("size = " + size, size == 0);
+    assertEquals(0, size);
   }
 
   @Test
@@ -126,7 +126,7 @@
       Key k = new Key(new Text(String.format("%03d", i)), colf, colq);
       tm.put(k, dv);
     }
-    assertTrue(tm.size() == 1000);
+    assertEquals(1000, tm.size());
 
     Filter filter = new SimpleFilter();
 
@@ -136,20 +136,20 @@
     filter.init(new SortedMapIterator(tm), is.getOptions(), null);
     filter.seek(new Range(), EMPTY_COL_FAMS, false);
     int size = size(filter);
-    assertTrue("size = " + size, size == 900);
+    assertEquals(900, size);
 
     filter.init(new SortedMapIterator(tm), is.getOptions(), null);
     Key k = new Key(new Text("500"));
     filter.seek(new Range(k, null), EMPTY_COL_FAMS, false);
     size = size(filter);
-    assertTrue("size = " + size, size == 450);
+    assertEquals(450, size);
 
     filter.init(new SortedMapIterator(tm), EMPTY_OPTS, null);
     Filter filter2 = new SimpleFilter2();
     filter2.init(filter, is.getOptions(), null);
     filter2.seek(new Range(), EMPTY_COL_FAMS, false);
     size = size(filter2);
-    assertTrue("size = " + size, size == 100);
+    assertEquals(100, size);
   }
 
   @Test
@@ -163,7 +163,7 @@
       Key k = new Key(new Text(String.format("%03d", i)), colf, colq);
       tm.put(k, dv);
     }
-    assertTrue(tm.size() == 1000);
+    assertEquals(1000, tm.size());
 
     SimpleFilter filter = new SimpleFilter();
 
@@ -174,10 +174,10 @@
     SortedKeyValueIterator<Key,Value> copy = filter.deepCopy(null);
     filter.seek(new Range(), EMPTY_COL_FAMS, false);
     int size = size(filter);
-    assertTrue("size = " + size, size == 900);
+    assertEquals(900, size);
     copy.seek(new Range(), EMPTY_COL_FAMS, false);
     size = size(copy);
-    assertTrue("size = " + size, size == 900);
+    assertEquals(900, size);
   }
 
   @Test
@@ -192,7 +192,7 @@
       k.setTimestamp(i);
       tm.put(k, dv);
     }
-    assertTrue(tm.size() == 1000);
+    assertEquals(1000, tm.size());
 
     SortedKeyValueIterator<Key,Value> a = new AgeOffFilter();
     IteratorSetting is = new IteratorSetting(1, AgeOffFilter.class);
@@ -208,9 +208,9 @@
     a = a.deepCopy(null);
     SortedKeyValueIterator<Key,Value> copy = a.deepCopy(null);
     a.seek(new Range(), EMPTY_COL_FAMS, false);
-    assertEquals(size(a), 900);
+    assertEquals(900, size(a));
     copy.seek(new Range(), EMPTY_COL_FAMS, false);
-    assertEquals(size(copy), 900);
+    assertEquals(900, size(copy));
   }
 
   @Test
@@ -227,27 +227,27 @@
       Key k = new Key(new Text(String.format("%03d", i)), colf, colq, ts - i);
       tm.put(k, dv);
     }
-    assertTrue(tm.size() == 1000);
+    assertEquals(1000, tm.size());
 
     ColumnAgeOffFilter a = new ColumnAgeOffFilter();
     assertTrue(a.validateOptions(is.getOptions()));
     a.init(new SortedMapIterator(tm), is.getOptions(), new DefaultIteratorEnvironment());
     a.overrideCurrentTime(ts);
     a.seek(new Range(), EMPTY_COL_FAMS, false);
-    assertEquals(size(a), 902);
+    assertEquals(902, size(a));
 
     ColumnAgeOffFilter.addTTL(is, new IteratorSetting.Column("a", "b"), 101l);
     a.init(new SortedMapIterator(tm), is.getOptions(), new DefaultIteratorEnvironment());
     a.overrideCurrentTime(ts);
     a.seek(new Range(), EMPTY_COL_FAMS, false);
-    assertEquals(size(a), 102);
+    assertEquals(102, size(a));
 
     ColumnAgeOffFilter.removeTTL(is, new IteratorSetting.Column("a", "b"));
     a.init(new SortedMapIterator(tm), is.getOptions(), new DefaultIteratorEnvironment());
     a = (ColumnAgeOffFilter) a.deepCopy(null);
     a.overrideCurrentTime(ts);
     a.seek(new Range(), EMPTY_COL_FAMS, false);
-    assertEquals(size(a), 902);
+    assertEquals(902, size(a));
   }
 
   /**
@@ -268,27 +268,27 @@
       Key k = new Key(new Text(String.format("%03d", i)), colf, colq, ts - i);
       tm.put(k, dv);
     }
-    assertTrue(tm.size() == 1000);
+    assertEquals(1000, tm.size());
 
     ColumnAgeOffFilter a = new ColumnAgeOffFilter();
     assertTrue(a.validateOptions(is.getOptions()));
     a.init(new SortedMapIterator(tm), is.getOptions(), new DefaultIteratorEnvironment());
     a.overrideCurrentTime(ts);
     a.seek(new Range(), EMPTY_COL_FAMS, false);
-    assertEquals(size(a), 98);
+    assertEquals(98, size(a));
 
     ColumnAgeOffFilter.addTTL(is, new IteratorSetting.Column("a", "b"), 101l);
     a.init(new SortedMapIterator(tm), is.getOptions(), new DefaultIteratorEnvironment());
     a.overrideCurrentTime(ts);
     a.seek(new Range(), EMPTY_COL_FAMS, false);
-    assertEquals(size(a), 898);
+    assertEquals(898, size(a));
 
     ColumnAgeOffFilter.removeTTL(is, new IteratorSetting.Column("a", "b"));
     a.init(new SortedMapIterator(tm), is.getOptions(), new DefaultIteratorEnvironment());
     a = (ColumnAgeOffFilter) a.deepCopy(null);
     a.overrideCurrentTime(ts);
     a.seek(new Range(), EMPTY_COL_FAMS, false);
-    assertEquals(size(a), 98);
+    assertEquals(98, size(a));
   }
 
   /**
@@ -308,27 +308,27 @@
       Key k = new Key(new Text(String.format("%03d", i)), colf, colq, ts - i);
       tm.put(k, dv);
     }
-    assertTrue(tm.size() == 1000);
+    assertEquals(1000, tm.size());
 
     ColumnAgeOffFilter a = new ColumnAgeOffFilter();
     assertTrue(a.validateOptions(is.getOptions()));
     a.init(new SortedMapIterator(tm), is.getOptions(), new DefaultIteratorEnvironment());
     a.overrideCurrentTime(ts);
     a.seek(new Range(), EMPTY_COL_FAMS, false);
-    assertEquals(size(a), 902);
+    assertEquals(902, size(a));
 
     ColumnAgeOffFilter.addTTL(is, new IteratorSetting.Column("negate", "b"), 101l);
     a.init(new SortedMapIterator(tm), is.getOptions(), new DefaultIteratorEnvironment());
     a.overrideCurrentTime(ts);
     a.seek(new Range(), EMPTY_COL_FAMS, false);
-    assertEquals(size(a), 102);
+    assertEquals(102, size(a));
 
     ColumnAgeOffFilter.removeTTL(is, new IteratorSetting.Column("negate", "b"));
     a.init(new SortedMapIterator(tm), is.getOptions(), new DefaultIteratorEnvironment());
     a = (ColumnAgeOffFilter) a.deepCopy(null);
     a.overrideCurrentTime(ts);
     a.seek(new Range(), EMPTY_COL_FAMS, false);
-    assertEquals(size(a), 902);
+    assertEquals(902, size(a));
   }
 
   @Test
@@ -356,24 +356,24 @@
       k.setTimestamp(157l);
       tm.put(k, dv);
     }
-    assertTrue(tm.size() == 1000);
+    assertEquals(1000, tm.size());
 
     ColumnQualifierFilter a = new ColumnQualifierFilter(new SortedMapIterator(tm), hsc);
     a.seek(new Range(), EMPTY_COL_FAMS, false);
-    assertEquals(size(a), 1000);
+    assertEquals(1000, size(a));
 
     hsc = new HashSet<>();
     hsc.add(new Column("a".getBytes(), "b".getBytes(), null));
     a = new ColumnQualifierFilter(new SortedMapIterator(tm), hsc);
     a.seek(new Range(), EMPTY_COL_FAMS, false);
     int size = size(a);
-    assertTrue("size was " + size, size == 500);
+    assertEquals(500, size);
 
     hsc = new HashSet<>();
     a = new ColumnQualifierFilter(new SortedMapIterator(tm), hsc);
     a.seek(new Range(), EMPTY_COL_FAMS, false);
     size = size(a);
-    assertTrue("size was " + size, size == 1000);
+    assertEquals(1000, size);
   }
 
   @Test
@@ -392,12 +392,12 @@
       Key k = new Key(new Text(String.format("%03d", i)), new Text("a"), new Text("b"), new Text(lea[i % 4].getExpression()));
       tm.put(k, dv);
     }
-    assertTrue(tm.size() == 1000);
+    assertEquals(1000, tm.size());
 
     VisibilityFilter a = new VisibilityFilter(new SortedMapIterator(tm), auths, le2.getExpression());
     a.seek(new Range(), EMPTY_COL_FAMS, false);
     int size = size(a);
-    assertTrue("size was " + size, size == 750);
+    assertEquals(750, size);
   }
 
   private ColumnQualifierFilter ncqf(TreeMap<Key,Value> tm, Column... columns) throws IOException {
@@ -423,25 +423,25 @@
     tm.put(new Key(new Text(String.format("%03d", 4)), new Text("b"), new Text("x")), dv);
     tm.put(new Key(new Text(String.format("%03d", 5)), new Text("b"), new Text("y")), dv);
 
-    assertTrue(tm.size() == 5);
+    assertEquals(5, tm.size());
 
     int size = size(ncqf(tm, new Column("c".getBytes(), null, null)));
-    assertTrue(size == 5);
+    assertEquals(5, size);
 
     size = size(ncqf(tm, new Column("a".getBytes(), null, null)));
-    assertTrue(size == 5);
+    assertEquals(5, size);
 
     size = size(ncqf(tm, new Column("a".getBytes(), "x".getBytes(), null)));
-    assertTrue(size == 1);
+    assertEquals(1, size);
 
     size = size(ncqf(tm, new Column("a".getBytes(), "x".getBytes(), null), new Column("b".getBytes(), "x".getBytes(), null)));
-    assertTrue(size == 2);
+    assertEquals(2, size);
 
     size = size(ncqf(tm, new Column("a".getBytes(), "x".getBytes(), null), new Column("b".getBytes(), "y".getBytes(), null)));
-    assertTrue(size == 2);
+    assertEquals(2, size);
 
     size = size(ncqf(tm, new Column("a".getBytes(), "x".getBytes(), null), new Column("b".getBytes(), null, null)));
-    assertTrue(size == 3);
+    assertEquals(3, size);
   }
 
   @Test
@@ -452,13 +452,13 @@
       Key k = new Key(String.format("%03d", i), "a", "b", i % 10 == 0 ? "vis" : "");
       tm.put(k, v);
     }
-    assertTrue(tm.size() == 1000);
+    assertEquals(1000, tm.size());
 
     Filter filter = new ReqVisFilter();
     filter.init(new SortedMapIterator(tm), EMPTY_OPTS, null);
     filter.seek(new Range(), EMPTY_COL_FAMS, false);
     int size = size(filter);
-    assertTrue("size = " + size, size == 100);
+    assertEquals(100, size);
   }
 
   @Test
@@ -473,7 +473,7 @@
       k.setTimestamp(i);
       tm.put(k, dv);
     }
-    assertTrue(tm.size() == 100);
+    assertEquals(100, tm.size());
 
     SimpleDateFormat dateParser = new SimpleDateFormat("yyyyMMddHHmmssz");
     long baseTime = dateParser.parse("19990101000000GMT").getTime();
@@ -483,57 +483,57 @@
       k.setTimestamp(baseTime + (i * 1000));
       tm.put(k, dv);
     }
-    assertTrue(tm.size() == 100);
+    assertEquals(100, tm.size());
     TimestampFilter a = new TimestampFilter();
     IteratorSetting is = new IteratorSetting(1, TimestampFilter.class);
     TimestampFilter.setRange(is, "19990101010011GMT+01:00", "19990101010031GMT+01:00");
     a.init(new SortedMapIterator(tm), is.getOptions(), null);
     a = (TimestampFilter) a.deepCopy(null);
     a.seek(new Range(), EMPTY_COL_FAMS, false);
-    assertEquals(size(a), 21);
+    assertEquals(21, size(a));
     TimestampFilter.setRange(is, baseTime + 11000, baseTime + 31000);
     a.init(new SortedMapIterator(tm), is.getOptions(), null);
     a.seek(new Range(), EMPTY_COL_FAMS, false);
-    assertEquals(size(a), 21);
+    assertEquals(21, size(a));
 
     TimestampFilter.setEnd(is, "19990101000031GMT", false);
     a.init(new SortedMapIterator(tm), is.getOptions(), null);
     a.seek(new Range(), EMPTY_COL_FAMS, false);
-    assertEquals(size(a), 20);
+    assertEquals(20, size(a));
 
     TimestampFilter.setStart(is, "19990101000011GMT", false);
     a.init(new SortedMapIterator(tm), is.getOptions(), null);
     a.seek(new Range(), EMPTY_COL_FAMS, false);
-    assertEquals(size(a), 19);
+    assertEquals(19, size(a));
 
     TimestampFilter.setEnd(is, "19990101000031GMT", true);
     a.init(new SortedMapIterator(tm), is.getOptions(), null);
     a.seek(new Range(), EMPTY_COL_FAMS, false);
-    assertEquals(size(a), 20);
+    assertEquals(20, size(a));
 
     is.clearOptions();
     TimestampFilter.setStart(is, "19990101000011GMT", true);
     a.init(new SortedMapIterator(tm), is.getOptions(), null);
     a.seek(new Range(), EMPTY_COL_FAMS, false);
-    assertEquals(size(a), 89);
+    assertEquals(89, size(a));
 
     TimestampFilter.setStart(is, "19990101000011GMT", false);
     assertTrue(a.validateOptions(is.getOptions()));
     a.init(new SortedMapIterator(tm), is.getOptions(), null);
     a.seek(new Range(), EMPTY_COL_FAMS, false);
-    assertEquals(size(a), 88);
+    assertEquals(88, size(a));
 
     is.clearOptions();
     TimestampFilter.setEnd(is, "19990101000031GMT", true);
     a.init(new SortedMapIterator(tm), is.getOptions(), null);
     a.seek(new Range(), EMPTY_COL_FAMS, false);
-    assertEquals(size(a), 32);
+    assertEquals(32, size(a));
 
     TimestampFilter.setEnd(is, "19990101000031GMT", false);
     assertTrue(a.validateOptions(is.getOptions()));
     a.init(new SortedMapIterator(tm), is.getOptions(), null);
     a.seek(new Range(), EMPTY_COL_FAMS, false);
-    assertEquals(size(a), 31);
+    assertEquals(31, size(a));
 
     TimestampFilter.setEnd(is, 253402300800001l, true);
     a.init(new SortedMapIterator(tm), is.getOptions(), null);
@@ -543,14 +543,14 @@
     assertTrue(a.validateOptions(is.getOptions()));
     a.init(new SortedMapIterator(tm), is.getOptions(), null);
     a.seek(new Range(), EMPTY_COL_FAMS, false);
-    assertEquals(size(a), 89);
+    assertEquals(89, size(a));
 
     is.clearOptions();
     is.addOption(TimestampFilter.END, "19990101000031GMT");
     assertTrue(a.validateOptions(is.getOptions()));
     a.init(new SortedMapIterator(tm), is.getOptions(), null);
     a.seek(new Range(), EMPTY_COL_FAMS, false);
-    assertEquals(size(a), 32);
+    assertEquals(32, size(a));
 
     try {
       a.validateOptions(EMPTY_OPTS);
@@ -575,13 +575,13 @@
     k = new Key(new Text("10"), colf, colq);
     tm.put(k, dv);
 
-    assertTrue(tm.size() == 4);
+    assertEquals(4, tm.size());
 
     Filter filter = new SimpleFilter();
     filter.init(new SortedMapIterator(tm), EMPTY_OPTS, null);
     filter.seek(new Range(), EMPTY_COL_FAMS, false);
     int size = size(filter);
-    assertTrue("size = " + size, size == 3);
+    assertEquals(3, size);
 
   }
 }
diff --git a/core/src/test/java/org/apache/accumulo/core/rpc/TTimeoutTransportTest.java b/core/src/test/java/org/apache/accumulo/core/rpc/TTimeoutTransportTest.java
new file mode 100644
index 0000000..cedac9c
--- /dev/null
+++ b/core/src/test/java/org/apache/accumulo/core/rpc/TTimeoutTransportTest.java
@@ -0,0 +1,151 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.core.rpc;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.Socket;
+import java.net.SocketAddress;
+
+import org.junit.Test;
+
+import static org.easymock.EasyMock.createMock;
+import static org.easymock.EasyMock.createMockBuilder;
+import static org.easymock.EasyMock.expect;
+import static org.easymock.EasyMock.expectLastCall;
+import static org.easymock.EasyMock.replay;
+import static org.easymock.EasyMock.verify;
+import static org.junit.Assert.fail;
+
+/**
+ * Tests for {@link TTimeoutTransport}.
+ */
+public class TTimeoutTransportTest {
+
+  void expectedSocketSetup(Socket s) throws IOException {
+    s.setSoLinger(false, 0);
+    expectLastCall().once();
+    s.setTcpNoDelay(true);
+    expectLastCall().once();
+  }
+
+  @Test
+  public void testFailedSocketOpenIsClosed() throws IOException {
+    SocketAddress addr = createMock(SocketAddress.class);
+    Socket s = createMock(Socket.class);
+    TTimeoutTransport timeoutTransport = createMockBuilder(TTimeoutTransport.class).addMockedMethod("openSocketChannel").createMock();
+
+    // Return our mocked socket
+    expect(timeoutTransport.openSocketChannel()).andReturn(s).once();
+
+    // Expect TCP_NODELAY and SO_LINGER to be configured
+    expectedSocketSetup(s);
+
+    // Connect to the addr
+    s.connect(addr);
+    expectLastCall().andThrow(new IOException());
+
+    // The socket should be closed after the above IOException
+    s.close();
+
+    replay(addr, s, timeoutTransport);
+
+    try {
+      timeoutTransport.openSocket(addr);
+      fail("Expected to catch IOException but got none");
+    } catch (IOException e) {
+      // Expected
+    }
+
+    verify(addr, s, timeoutTransport);
+  }
+
+  @Test
+  public void testFailedInputStreamClosesSocket() throws IOException {
+    long timeout = 2 * 60 * 1000; // 2 mins
+    SocketAddress addr = createMock(SocketAddress.class);
+    Socket s = createMock(Socket.class);
+    TTimeoutTransport timeoutTransport = createMockBuilder(TTimeoutTransport.class).addMockedMethod("openSocketChannel").addMockedMethod("wrapInputStream")
+        .createMock();
+
+    // Return our mocked socket
+    expect(timeoutTransport.openSocketChannel()).andReturn(s).once();
+
+    // Expect TCP_NODELAY and SO_LINGER to be configured
+    expectedSocketSetup(s);
+
+    // Connect to the addr
+    s.connect(addr);
+    expectLastCall().once();
+
+    expect(timeoutTransport.wrapInputStream(s, timeout)).andThrow(new IOException());
+
+    // The socket should be closed after the above IOException
+    s.close();
+
+    replay(addr, s, timeoutTransport);
+
+    try {
+      timeoutTransport.createInternal(addr, timeout);
+      fail("Expected to catch IOException but got none");
+    } catch (IOException e) {
+      // Expected
+    }
+
+    verify(addr, s, timeoutTransport);
+  }
+
+  @Test
+  public void testFailedOutputStreamClosesSocket() throws IOException {
+    long timeout = 2 * 60 * 1000; // 2 mins
+    SocketAddress addr = createMock(SocketAddress.class);
+    Socket s = createMock(Socket.class);
+    InputStream is = createMock(InputStream.class);
+    TTimeoutTransport timeoutTransport = createMockBuilder(TTimeoutTransport.class).addMockedMethod("openSocketChannel").addMockedMethod("wrapInputStream")
+        .addMockedMethod("wrapOutputStream").createMock();
+
+    // Return our mocked socket
+    expect(timeoutTransport.openSocketChannel()).andReturn(s).once();
+
+    // Expect TCP_NODELAY and SO_LINGER to be configured
+    expectedSocketSetup(s);
+
+    // Connect to the addr
+    s.connect(addr);
+    expectLastCall().once();
+
+    // Input stream is set up successfully
+    expect(timeoutTransport.wrapInputStream(s, timeout)).andReturn(is);
+    // Output stream fails to be set up
+    expect(timeoutTransport.wrapOutputStream(s, timeout)).andThrow(new IOException());
+
+    // The socket should be closed after the above IOException
+    s.close();
+
+    replay(addr, s, timeoutTransport);
+
+    try {
+      timeoutTransport.createInternal(addr, timeout);
+      fail("Expected to catch IOException but got none");
+    } catch (IOException e) {
+      // Expected
+    }
+
+    verify(addr, s, timeoutTransport);
+  }
+
+}
diff --git a/core/src/test/java/org/apache/accumulo/core/security/CredentialsTest.java b/core/src/test/java/org/apache/accumulo/core/security/CredentialsTest.java
index bd4b1ba..8422c6f 100644
--- a/core/src/test/java/org/apache/accumulo/core/security/CredentialsTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/security/CredentialsTest.java
@@ -25,9 +25,7 @@
 
 import javax.security.auth.DestroyFailedException;
 
-import org.apache.accumulo.core.client.AccumuloException;
 import org.apache.accumulo.core.client.AccumuloSecurityException;
-import org.apache.accumulo.core.client.Connector;
 import org.apache.accumulo.core.client.Instance;
 import org.apache.accumulo.core.client.impl.Credentials;
 import org.apache.accumulo.core.client.security.SecurityErrorCode;
@@ -35,7 +33,6 @@
 import org.apache.accumulo.core.client.security.tokens.NullToken;
 import org.apache.accumulo.core.client.security.tokens.PasswordToken;
 import org.apache.accumulo.core.security.thrift.TCredentials;
-import org.apache.accumulo.core.util.DeprecationUtil;
 import org.easymock.EasyMock;
 import org.junit.Before;
 import org.junit.Rule;
@@ -86,24 +83,6 @@
   }
 
   @Test
-  public void testMockConnector() throws AccumuloException, DestroyFailedException, AccumuloSecurityException {
-    Instance inst = DeprecationUtil.makeMockInstance(test.getMethodName());
-    Connector rootConnector = inst.getConnector("root", new PasswordToken());
-    PasswordToken testToken = new PasswordToken("testPass");
-    rootConnector.securityOperations().createLocalUser("testUser", testToken);
-
-    assertFalse(testToken.isDestroyed());
-    testToken.destroy();
-    assertTrue(testToken.isDestroyed());
-    try {
-      inst.getConnector("testUser", testToken);
-      fail();
-    } catch (AccumuloSecurityException e) {
-      assertTrue(e.getSecurityErrorCode().equals(SecurityErrorCode.TOKEN_EXPIRED));
-    }
-  }
-
-  @Test
   public void testEqualsAndHashCode() {
     Credentials nullNullCreds = new Credentials(null, null);
     Credentials abcNullCreds = new Credentials("abc", new NullToken());
diff --git a/core/src/test/java/org/apache/accumulo/core/util/ValidatorTest.java b/core/src/test/java/org/apache/accumulo/core/util/ValidatorTest.java
index 01bad35..e6f360f 100644
--- a/core/src/test/java/org/apache/accumulo/core/util/ValidatorTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/util/ValidatorTest.java
@@ -32,7 +32,7 @@
     }
 
     @Override
-    public boolean apply(String argument) {
+    public boolean test(String argument) {
       return s.equals(argument);
     }
   }
@@ -45,7 +45,7 @@
     }
 
     @Override
-    public boolean apply(String argument) {
+    public boolean test(String argument) {
       return (argument != null && argument.matches(ps));
     }
   }
@@ -77,24 +77,24 @@
   @Test
   public void testAnd() {
     Validator<String> vand = v3.and(v);
-    assertTrue(vand.apply("correct"));
-    assertFalse(vand.apply("righto"));
-    assertFalse(vand.apply("coriander"));
+    assertTrue(vand.test("correct"));
+    assertFalse(vand.test("righto"));
+    assertFalse(vand.test("coriander"));
   }
 
   @Test
   public void testOr() {
     Validator<String> vor = v.or(v2);
-    assertTrue(vor.apply("correct"));
-    assertTrue(vor.apply("righto"));
-    assertFalse(vor.apply("coriander"));
+    assertTrue(vor.test("correct"));
+    assertTrue(vor.test("righto"));
+    assertFalse(vor.test("coriander"));
   }
 
   @Test
   public void testNot() {
     Validator<String> vnot = v3.not();
-    assertFalse(vnot.apply("correct"));
-    assertFalse(vnot.apply("coriander"));
-    assertTrue(vnot.apply("righto"));
+    assertFalse(vnot.test("correct"));
+    assertFalse(vnot.test("coriander"));
+    assertTrue(vnot.test("righto"));
   }
 }
diff --git a/core/src/test/java/org/apache/accumulo/core/util/format/DateStringFormatterTest.java b/core/src/test/java/org/apache/accumulo/core/util/format/DateStringFormatterTest.java
deleted file mode 100644
index 22af5b0..0000000
--- a/core/src/test/java/org/apache/accumulo/core/util/format/DateStringFormatterTest.java
+++ /dev/null
@@ -1,80 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.core.util.format;
-
-import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertTrue;
-
-import java.util.Map;
-import java.util.TimeZone;
-import java.util.TreeMap;
-import org.apache.accumulo.core.data.Key;
-import org.apache.accumulo.core.data.Value;
-import org.junit.Before;
-import org.junit.Test;
-
-@SuppressWarnings("deprecation")
-public class DateStringFormatterTest {
-  DateStringFormatter formatter;
-
-  Map<Key,Value> data;
-
-  @Before
-  public void setUp() {
-    formatter = new DateStringFormatter();
-    data = new TreeMap<>();
-    data.put(new Key("", "", "", 0), new Value());
-  }
-
-  private void testFormatterIgnoresConfig(FormatterConfig config, DateStringFormatter formatter) {
-    // ignores config's DateFormatSupplier and substitutes its own
-    formatter.initialize(data.entrySet(), config);
-
-    assertTrue(formatter.hasNext());
-    final String next = formatter.next();
-    assertTrue(next, next.endsWith("1970/01/01 00:00:00.000"));
-  }
-
-  @Test
-  public void testTimestamps() {
-    final TimeZone utc = TimeZone.getTimeZone("UTC");
-    final TimeZone est = TimeZone.getTimeZone("EST");
-    final FormatterConfig config = new FormatterConfig().setPrintTimestamps(true);
-    DateStringFormatter formatter;
-
-    formatter = new DateStringFormatter(utc);
-    testFormatterIgnoresConfig(config, formatter);
-
-    // even though config says to use EST and only print year, the Formatter will override these
-    formatter = new DateStringFormatter(utc);
-    DateFormatSupplier dfSupplier = DateFormatSupplier.createSimpleFormatSupplier("YYYY", est);
-    config.setDateFormatSupplier(dfSupplier);
-    testFormatterIgnoresConfig(config, formatter);
-  }
-
-  @Test
-  public void testNoTimestamps() {
-    data.put(new Key("", "", "", 1), new Value());
-
-    assertEquals(2, data.size());
-
-    formatter.initialize(data.entrySet(), new FormatterConfig());
-
-    assertEquals(formatter.next(), formatter.next());
-  }
-
-}
diff --git a/docs/pom.xml b/docs/pom.xml
index 853474b..97404a8 100644
--- a/docs/pom.xml
+++ b/docs/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.accumulo</groupId>
     <artifactId>accumulo-project</artifactId>
-    <version>1.8.0-SNAPSHOT</version>
+    <version>2.0.0-SNAPSHOT</version>
   </parent>
   <artifactId>accumulo-docs</artifactId>
   <packaging>pom</packaging>
diff --git a/examples/simple/pom.xml b/examples/simple/pom.xml
index b15d774..a9e6b7b 100644
--- a/examples/simple/pom.xml
+++ b/examples/simple/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.accumulo</groupId>
     <artifactId>accumulo-project</artifactId>
-    <version>1.8.0-SNAPSHOT</version>
+    <version>2.0.0-SNAPSHOT</version>
     <relativePath>../../pom.xml</relativePath>
   </parent>
   <artifactId>accumulo-examples-simple</artifactId>
@@ -41,10 +41,6 @@
       <artifactId>commons-cli</artifactId>
     </dependency>
     <dependency>
-      <groupId>commons-codec</groupId>
-      <artifactId>commons-codec</artifactId>
-    </dependency>
-    <dependency>
       <groupId>commons-configuration</groupId>
       <artifactId>commons-configuration</artifactId>
     </dependency>
diff --git a/examples/simple/src/main/java/org/apache/accumulo/examples/simple/mapreduce/RowHash.java b/examples/simple/src/main/java/org/apache/accumulo/examples/simple/mapreduce/RowHash.java
index d27758e..fab2532 100644
--- a/examples/simple/src/main/java/org/apache/accumulo/examples/simple/mapreduce/RowHash.java
+++ b/examples/simple/src/main/java/org/apache/accumulo/examples/simple/mapreduce/RowHash.java
@@ -17,6 +17,7 @@
 package org.apache.accumulo.examples.simple.mapreduce;
 
 import java.io.IOException;
+import java.util.Base64;
 import java.util.Collections;
 
 import org.apache.accumulo.core.cli.MapReduceClientOnRequiredTable;
@@ -25,7 +26,6 @@
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.Mutation;
 import org.apache.accumulo.core.data.Value;
-import org.apache.accumulo.core.util.Base64;
 import org.apache.accumulo.core.util.Pair;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.conf.Configured;
@@ -46,7 +46,7 @@
     @Override
     public void map(Key row, Value data, Context context) throws IOException, InterruptedException {
       Mutation m = new Mutation(row.getRow());
-      m.put(new Text("cf-HASHTYPE"), new Text("cq-MD5BASE64"), new Value(Base64.encodeBase64(MD5Hash.digest(data.toString()).getDigest())));
+      m.put(new Text("cf-HASHTYPE"), new Text("cq-MD5BASE64"), new Value(Base64.getEncoder().encode(MD5Hash.digest(data.toString()).getDigest())));
       context.write(null, m);
       context.progress();
     }
diff --git a/examples/simple/src/main/java/org/apache/accumulo/examples/simple/mapreduce/bulk/BulkIngestExample.java b/examples/simple/src/main/java/org/apache/accumulo/examples/simple/mapreduce/bulk/BulkIngestExample.java
index fbe92b5..42ec5ea 100644
--- a/examples/simple/src/main/java/org/apache/accumulo/examples/simple/mapreduce/bulk/BulkIngestExample.java
+++ b/examples/simple/src/main/java/org/apache/accumulo/examples/simple/mapreduce/bulk/BulkIngestExample.java
@@ -19,6 +19,7 @@
 import java.io.BufferedOutputStream;
 import java.io.IOException;
 import java.io.PrintStream;
+import java.util.Base64;
 import java.util.Collection;
 
 import org.apache.accumulo.core.cli.MapReduceClientOnRequiredTable;
@@ -27,7 +28,6 @@
 import org.apache.accumulo.core.client.mapreduce.lib.partition.RangePartitioner;
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.Value;
-import org.apache.accumulo.core.util.Base64;
 import org.apache.accumulo.core.util.TextUtil;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.conf.Configured;
@@ -133,7 +133,7 @@
 
       Collection<Text> splits = connector.tableOperations().listSplits(opts.getTableName(), 100);
       for (Text split : splits)
-        out.println(Base64.encodeBase64String(TextUtil.getBytes(split)));
+        out.println(Base64.getEncoder().encodeToString(TextUtil.getBytes(split)));
 
       job.setNumReduceTasks(splits.size() + 1);
       out.close();
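The two hunks above replace the removed `org.apache.accumulo.core.util.Base64` (a commons-codec wrapper) with the JDK 8 `java.util.Base64` API. A minimal self-contained sketch of the new call pattern; Hadoop's `MD5Hash` is swapped for the JDK's `MessageDigest` here so the example needs no Accumulo or Hadoop classes:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Base64;

public class Base64Migration {

  // Old style (removed in the diff):
  //   org.apache.accumulo.core.util.Base64.encodeBase64String(bytes)
  // New style: the JDK's built-in encoder, no external dependency needed.
  static String encodeMd5(String data) {
    try {
      byte[] digest = MessageDigest.getInstance("MD5")
          .digest(data.getBytes(StandardCharsets.UTF_8));
      return Base64.getEncoder().encodeToString(digest);
    } catch (NoSuchAlgorithmException e) {
      throw new IllegalStateException("MD5 is a mandatory JDK algorithm", e);
    }
  }

  public static void main(String[] args) {
    System.out.println(encodeMd5("abc")); // kAFQmDzST7DWlj99KOF/cg==
  }
}
```

This is also why the `commons-codec` dependency can be dropped from `plugin-test/pom.xml` further down.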
diff --git a/fate/pom.xml b/fate/pom.xml
index 059e4f1..bc665dc 100644
--- a/fate/pom.xml
+++ b/fate/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.accumulo</groupId>
     <artifactId>accumulo-project</artifactId>
-    <version>1.8.0-SNAPSHOT</version>
+    <version>2.0.0-SNAPSHOT</version>
   </parent>
   <artifactId>accumulo-fate</artifactId>
   <name>Apache Accumulo Fate</name>
diff --git a/iterator-test-harness/pom.xml b/iterator-test-harness/pom.xml
index d54a086..3a9c6f6 100644
--- a/iterator-test-harness/pom.xml
+++ b/iterator-test-harness/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.accumulo</groupId>
     <artifactId>accumulo-project</artifactId>
-    <version>1.8.0-SNAPSHOT</version>
+    <version>2.0.0-SNAPSHOT</version>
   </parent>
   <artifactId>accumulo-iterator-test-harness</artifactId>
   <name>Apache Accumulo Iterator Test Harness</name>
diff --git a/maven-plugin/pom.xml b/maven-plugin/pom.xml
index 26ca6bc..0c9760e 100644
--- a/maven-plugin/pom.xml
+++ b/maven-plugin/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.accumulo</groupId>
     <artifactId>accumulo-project</artifactId>
-    <version>1.8.0-SNAPSHOT</version>
+    <version>2.0.0-SNAPSHOT</version>
   </parent>
   <artifactId>accumulo-maven-plugin</artifactId>
   <packaging>maven-plugin</packaging>
diff --git a/maven-plugin/src/it/plugin-test/pom.xml b/maven-plugin/src/it/plugin-test/pom.xml
index f114fa4..0cc0075 100644
--- a/maven-plugin/src/it/plugin-test/pom.xml
+++ b/maven-plugin/src/it/plugin-test/pom.xml
@@ -36,10 +36,6 @@
       <artifactId>commons-cli</artifactId>
     </dependency>
     <dependency>
-      <groupId>commons-codec</groupId>
-      <artifactId>commons-codec</artifactId>
-    </dependency>
-    <dependency>
       <groupId>commons-collections</groupId>
       <artifactId>commons-collections</artifactId>
     </dependency>
diff --git a/minicluster/pom.xml b/minicluster/pom.xml
index 03113a4..84f81db 100644
--- a/minicluster/pom.xml
+++ b/minicluster/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.accumulo</groupId>
     <artifactId>accumulo-project</artifactId>
-    <version>1.8.0-SNAPSHOT</version>
+    <version>2.0.0-SNAPSHOT</version>
   </parent>
   <artifactId>accumulo-minicluster</artifactId>
   <name>Apache Accumulo MiniCluster</name>
diff --git a/minicluster/src/main/java/org/apache/accumulo/cluster/standalone/StandaloneAccumuloCluster.java b/minicluster/src/main/java/org/apache/accumulo/cluster/standalone/StandaloneAccumuloCluster.java
index 1baa3a1..47ba1c9 100644
--- a/minicluster/src/main/java/org/apache/accumulo/cluster/standalone/StandaloneAccumuloCluster.java
+++ b/minicluster/src/main/java/org/apache/accumulo/cluster/standalone/StandaloneAccumuloCluster.java
@@ -38,15 +38,11 @@
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
 
 /**
  * AccumuloCluster implementation to connect to an existing deployment of Accumulo
  */
 public class StandaloneAccumuloCluster implements AccumuloCluster {
-  @SuppressWarnings("unused")
-  private static final Logger log = LoggerFactory.getLogger(StandaloneAccumuloCluster.class);
 
   static final List<ServerType> ALL_SERVER_TYPES = Collections.unmodifiableList(Arrays.asList(ServerType.MASTER, ServerType.TABLET_SERVER, ServerType.TRACER,
       ServerType.GARBAGE_COLLECTOR, ServerType.MONITOR));
diff --git a/minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniAccumuloClusterImpl.java b/minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniAccumuloClusterImpl.java
index 3e66acf..1e5a4f9 100644
--- a/minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniAccumuloClusterImpl.java
+++ b/minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniAccumuloClusterImpl.java
@@ -16,6 +16,7 @@
  */
 package org.apache.accumulo.minicluster.impl;
 
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
 import static java.nio.charset.StandardCharsets.UTF_8;
 
 import java.io.BufferedReader;
@@ -32,6 +33,7 @@
 import java.net.URISyntaxException;
 import java.net.URL;
 import java.net.URLClassLoader;
+import java.nio.file.Files;
 import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.Collection;
@@ -109,9 +111,7 @@
 
 import com.google.common.annotations.VisibleForTesting;
 import com.google.common.base.Joiner;
-import com.google.common.base.Predicate;
 import com.google.common.collect.Maps;
-import com.google.common.util.concurrent.Uninterruptibles;
 
 /**
  * This class provides the backing implementation for {@link MiniAccumuloCluster}, and may contain features for internal testing which have not yet been
@@ -329,6 +329,8 @@
     if (config.getHadoopConfDir() != null)
       builder.environment().put("HADOOP_CONF_DIR", config.getHadoopConfDir().getAbsolutePath());
 
+    log.info("Starting MiniAccumuloCluster process with class: " + clazz.getSimpleName() + "\n, jvmOpts: " + extraJvmOpts + "\n, classpath: " + classpath
+        + "\n, args: " + argList + "\n, environment: " + builder.environment().toString());
     Process process = builder.start();
 
     LogWriter lw;
@@ -346,7 +348,7 @@
     List<String> jvmOpts = new ArrayList<>();
     jvmOpts.add("-Xmx" + config.getMemory(serverType));
     if (configOverrides != null && !configOverrides.isEmpty()) {
-      File siteFile = File.createTempFile("accumulo-site", ".xml", config.getConfDir());
+      File siteFile = Files.createTempFile(config.getConfDir().toPath(), "accumulo-site", ".xml").toFile();
       Map<String,String> confMap = new HashMap<>();
       confMap.putAll(config.getSiteConfig());
       confMap.putAll(configOverrides);
@@ -378,9 +380,11 @@
    * @param config
    *          initial configuration
    */
-  @SuppressWarnings("deprecation")
   public MiniAccumuloClusterImpl(MiniAccumuloConfigImpl config) throws IOException {
-
+    @SuppressWarnings("deprecation")
+    Property INSTANCE_DFS_DIR = Property.INSTANCE_DFS_DIR;
+    @SuppressWarnings("deprecation")
+    Property INSTANCE_DFS_URI = Property.INSTANCE_DFS_URI;
     this.config = config.initialize();
 
     mkdirs(config.getConfDir());
@@ -425,8 +429,8 @@
       writeConfig(hdfsFile, conf);
 
       Map<String,String> siteConfig = config.getSiteConfig();
-      siteConfig.put(Property.INSTANCE_DFS_URI.getKey(), dfsUri);
-      siteConfig.put(Property.INSTANCE_DFS_DIR.getKey(), "/accumulo");
+      siteConfig.put(INSTANCE_DFS_URI.getKey(), dfsUri);
+      siteConfig.put(INSTANCE_DFS_DIR.getKey(), "/accumulo");
       config.setSiteConfig(siteConfig);
     } else if (config.useExistingInstance()) {
       dfsUri = CachedConfiguration.getInstance().get(CommonConfigurationKeys.FS_DEFAULT_NAME_KEY);
@@ -436,12 +440,8 @@
 
     File clientConfFile = config.getClientConfFile();
     // Write only the properties that correspond to ClientConfiguration properties
-    writeConfigProperties(clientConfFile, Maps.filterEntries(config.getSiteConfig(), new Predicate<Entry<String,String>>() {
-      @Override
-      public boolean apply(Entry<String,String> v) {
-        return ClientConfiguration.ClientProperty.getPropertyByKey(v.getKey()) != null;
-      }
-    }));
+    writeConfigProperties(clientConfFile,
+        Maps.filterEntries(config.getSiteConfig(), v -> ClientConfiguration.ClientProperty.getPropertyByKey(v.getKey()) != null));
 
     File siteFile = new File(config.getConfDir(), "accumulo-site.xml");
     writeConfig(siteFile, config.getSiteConfig().entrySet());
@@ -629,7 +629,7 @@
       ret = exec(Main.class, SetGoalState.class.getName(), MasterGoalState.NORMAL.toString()).waitFor();
       if (ret == 0)
         break;
-      Uninterruptibles.sleepUninterruptibly(1, TimeUnit.SECONDS);
+      sleepUninterruptibly(1, TimeUnit.SECONDS);
     }
     if (ret != 0) {
       throw new RuntimeException("Could not set master goal state, process returned " + ret + ". Check the logs in " + config.getLogDir() + " for errors.");
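Among the `MiniAccumuloClusterImpl` changes above, `File.createTempFile` is replaced by `java.nio.file.Files.createTempFile` when writing the per-process `accumulo-site.xml`. A sketch of the pattern, assuming the usual motivation that the NIO variant restricts the new file to the owner on POSIX file systems (directory and file names below are illustrative):

```java
import java.io.File;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class TempSiteFile {

  // File.createTempFile(...) creates the file with default permissions;
  // Files.createTempFile(...) additionally applies owner-only permissions
  // on POSIX file systems.
  static File createSiteFile(File confDir) throws IOException {
    Path site = Files.createTempFile(confDir.toPath(), "accumulo-site", ".xml");
    return site.toFile();
  }

  // Small self-contained demo: create a scratch conf dir and return the
  // generated file name.
  static String demo() {
    try {
      File dir = Files.createTempDirectory("conf").toFile();
      return createSiteFile(dir).getName();
    } catch (IOException e) {
      throw new UncheckedIOException(e);
    }
  }
}
```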
diff --git a/minicluster/src/test/java/org/apache/accumulo/minicluster/impl/MiniAccumuloConfigImplTest.java b/minicluster/src/test/java/org/apache/accumulo/minicluster/impl/MiniAccumuloConfigImplTest.java
index ba12f53..46cf6e3 100644
--- a/minicluster/src/test/java/org/apache/accumulo/minicluster/impl/MiniAccumuloConfigImplTest.java
+++ b/minicluster/src/test/java/org/apache/accumulo/minicluster/impl/MiniAccumuloConfigImplTest.java
@@ -34,6 +34,9 @@
 
 public class MiniAccumuloConfigImplTest {
 
+  @SuppressWarnings("deprecation")
+  private static final Property INSTANCE_DFS_URI = Property.INSTANCE_DFS_URI;
+
   static TemporaryFolder tempFolder = new TemporaryFolder(new File(System.getProperty("user.dir") + "/target"));
 
   @BeforeClass
@@ -61,15 +64,14 @@
     assertEquals(5000, config.getZooKeeperStartupTime());
   }
 
-  @SuppressWarnings("deprecation")
   @Test
   public void testSiteConfig() {
 
     // constructor site config overrides default props
     Map<String,String> siteConfig = new HashMap<>();
-    siteConfig.put(Property.INSTANCE_DFS_URI.getKey(), "hdfs://");
+    siteConfig.put(INSTANCE_DFS_URI.getKey(), "hdfs://");
     MiniAccumuloConfigImpl config = new MiniAccumuloConfigImpl(tempFolder.getRoot(), "password").setSiteConfig(siteConfig).initialize();
-    assertEquals("hdfs://", config.getSiteConfig().get(Property.INSTANCE_DFS_URI.getKey()));
+    assertEquals("hdfs://", config.getSiteConfig().get(INSTANCE_DFS_URI.getKey()));
   }
 
   @Test
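The constructor and test changes above narrow `@SuppressWarnings("deprecation")` from whole methods down to single annotated locals or fields holding the deprecated `Property` constants, so any new deprecation warning elsewhere in those methods still surfaces. A sketch of the idiom; the deprecated constant here is hypothetical:

```java
public class NarrowSuppression {

  @Deprecated
  static final String OLD_DFS_URI_KEY = "instance.dfs.uri"; // hypothetical deprecated constant

  // Suppression is confined to the one declaration that touches the
  // deprecated member, instead of annotating the whole method.
  static String readOldKey() {
    @SuppressWarnings("deprecation")
    String key = OLD_DFS_URI_KEY;
    return key;
  }
}
```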
diff --git a/pom.xml b/pom.xml
index 77e5597..25dfd46 100644
--- a/pom.xml
+++ b/pom.xml
@@ -24,7 +24,7 @@
   </parent>
   <groupId>org.apache.accumulo</groupId>
   <artifactId>accumulo-project</artifactId>
-  <version>1.8.0-SNAPSHOT</version>
+  <version>2.0.0-SNAPSHOT</version>
   <packaging>pom</packaging>
   <name>Apache Accumulo Project</name>
   <description>Apache Accumulo is a sorted, distributed key/value store based
@@ -99,7 +99,6 @@
     <module>shell</module>
     <module>start</module>
     <module>test</module>
-    <module>trace</module>
   </modules>
   <scm>
     <connection>scm:git:git://git.apache.org/accumulo.git</connection>
@@ -129,7 +128,6 @@
     <eclipseFormatterStyle>${project.parent.basedir}/contrib/Eclipse-Accumulo-Codestyle.xml</eclipseFormatterStyle>
     <!-- extra release args for testing -->
     <extraReleaseArgs />
-    <!-- findbugs-maven-plugin won't work on jdk8 or later; set to 3.0.0 or newer -->
     <findbugs.version>3.0.3</findbugs.version>
     <!-- surefire/failsafe plugin option -->
     <forkCount>1</forkCount>
@@ -139,8 +137,8 @@
     <it.failIfNoSpecifiedTests>false</it.failIfNoSpecifiedTests>
     <!-- jetty 9.2 is the last version to support jdk less than 1.8 -->
     <jetty.version>9.2.17.v20160517</jetty.version>
-    <maven.compiler.source>1.7</maven.compiler.source>
-    <maven.compiler.target>1.7</maven.compiler.target>
+    <maven.compiler.source>1.8</maven.compiler.source>
+    <maven.compiler.target>1.8</maven.compiler.target>
     <!-- the maven-release-plugin makes this recommendation, due to plugin bugs -->
     <maven.min-version>3.0.5</maven.min-version>
     <!-- surefire/failsafe plugin option -->
@@ -341,11 +339,6 @@
       </dependency>
       <dependency>
         <groupId>org.apache.accumulo</groupId>
-        <artifactId>accumulo-trace</artifactId>
-        <version>${project.version}</version>
-      </dependency>
-      <dependency>
-        <groupId>org.apache.accumulo</groupId>
         <artifactId>accumulo-tracer</artifactId>
         <version>${project.version}</version>
       </dependency>
@@ -594,7 +587,7 @@
         <plugin>
           <groupId>com.github.ekryd.sortpom</groupId>
           <artifactId>sortpom-maven-plugin</artifactId>
-          <version>2.4.0</version>
+          <version>2.5.0</version>
           <configuration>
             <predefinedSortOrder>recommended_2008_06</predefinedSortOrder>
             <createBackupFile>false</createBackupFile>
@@ -622,7 +615,37 @@
         <plugin>
           <groupId>org.apache.maven.plugins</groupId>
           <artifactId>maven-shade-plugin</artifactId>
-          <version>2.3</version>
+          <version>2.4.3</version>
+        </plugin>
+        <plugin>
+          <groupId>org.apache.maven.plugins</groupId>
+          <artifactId>maven-invoker-plugin</artifactId>
+          <!-- overridden version from ASF-17 parent pom -->
+          <version>2.0.0</version>
+        </plugin>
+        <plugin>
+          <groupId>org.apache.maven.plugins</groupId>
+          <artifactId>maven-source-plugin</artifactId>
+          <!-- overridden version from ASF-17 parent pom -->
+          <version>3.0.0</version>
+        </plugin>
+        <plugin>
+          <groupId>org.apache.maven.plugins</groupId>
+          <artifactId>maven-dependency-plugin</artifactId>
+          <!-- overridden version from ASF-17 parent pom -->
+          <version>2.10</version>
+        </plugin>
+        <plugin>
+          <groupId>org.apache.maven.plugins</groupId>
+          <artifactId>maven-gpg-plugin</artifactId>
+          <!-- overridden version from ASF-17 parent pom -->
+          <version>1.6</version>
+        </plugin>
+        <plugin>
+          <groupId>org.apache.maven.plugins</groupId>
+          <artifactId>maven-scm-plugin</artifactId>
+          <!-- overridden version from ASF-17 parent pom -->
+          <version>1.9.4</version>
         </plugin>
         <plugin>
           <groupId>org.apache.maven.plugins</groupId>
@@ -641,6 +664,8 @@
         <plugin>
           <groupId>org.apache.maven.plugins</groupId>
           <artifactId>maven-clean-plugin</artifactId>
+          <!-- overridden version from ASF-17 parent pom -->
+          <version>3.0.0</version>
           <configuration>
             <filesets>
               <fileset>
@@ -656,6 +681,8 @@
         <plugin>
           <groupId>org.apache.maven.plugins</groupId>
           <artifactId>maven-compiler-plugin</artifactId>
+          <!-- overridden version from ASF-17 parent pom -->
+          <version>3.5.1</version>
           <configuration>
             <optimize>true</optimize>
             <showDeprecation>true</showDeprecation>
@@ -671,6 +698,8 @@
         <plugin>
           <groupId>org.apache.maven.plugins</groupId>
           <artifactId>maven-jar-plugin</artifactId>
+          <!-- overridden version from ASF-17 parent pom -->
+          <version>2.6</version>
           <configuration>
             <archive>
               <manifestEntries>
@@ -683,15 +712,20 @@
         <plugin>
           <groupId>org.apache.maven.plugins</groupId>
           <artifactId>maven-javadoc-plugin</artifactId>
+          <!-- overridden version from ASF-17 parent pom -->
+          <version>2.10.3</version>
           <configuration>
             <quiet>true</quiet>
             <javadocVersion>${maven.compiler.target}</javadocVersion>
             <additionalJOption>-J-Xmx512m</additionalJOption>
+            <additionalparam>-Xdoclint:all,-Xdoclint:-missing</additionalparam>
           </configuration>
         </plugin>
         <plugin>
           <groupId>org.apache.maven.plugins</groupId>
           <artifactId>maven-release-plugin</artifactId>
+          <!-- overridden version from ASF-17 parent pom -->
+          <version>2.5.3</version>
           <configuration>
             <arguments>-P !autoformat,thrift,sunny -Dtimeout.factor=2 ${extraReleaseArgs}</arguments>
             <autoVersionSubmodules>true</autoVersionSubmodules>
@@ -707,6 +741,8 @@
         <plugin>
           <groupId>org.apache.maven.plugins</groupId>
           <artifactId>maven-site-plugin</artifactId>
+          <!-- overridden version from ASF-17 parent pom -->
+          <version>3.5.1</version>
           <configuration>
             <skipDeploy>true</skipDeploy>
           </configuration>
@@ -714,6 +750,8 @@
         <plugin>
           <groupId>org.apache.maven.plugins</groupId>
           <artifactId>maven-surefire-plugin</artifactId>
+          <!-- overridden version from ASF-17 parent pom -->
+          <version>2.19.1</version>
           <configuration>
             <systemPropertyVariables>
               <java.io.tmpdir>${project.build.directory}</java.io.tmpdir>
@@ -724,6 +762,8 @@
         <plugin>
           <groupId>org.apache.maven.plugins</groupId>
           <artifactId>maven-failsafe-plugin</artifactId>
+          <!-- overridden version from ASF-17 parent pom -->
+          <version>2.19.1</version>
           <configuration>
             <systemPropertyVariables>
               <java.io.tmpdir>${project.build.directory}</java.io.tmpdir>
@@ -733,12 +773,12 @@
         <plugin>
           <groupId>org.asciidoctor</groupId>
           <artifactId>asciidoctor-maven-plugin</artifactId>
-          <version>1.5.2</version>
+          <version>1.5.3</version>
         </plugin>
         <plugin>
           <groupId>org.codehaus.mojo</groupId>
           <artifactId>build-helper-maven-plugin</artifactId>
-          <version>1.9.1</version>
+          <version>1.10</version>
         </plugin>
         <plugin>
           <groupId>org.codehaus.mojo</groupId>
@@ -765,6 +805,8 @@
         <plugin>
           <groupId>org.apache.maven.plugins</groupId>
           <artifactId>maven-enforcer-plugin</artifactId>
+          <!-- overridden version from ASF-17 parent pom -->
+          <version>1.4.1</version>
           <configuration>
             <rules>
               <requireJavaVersion>
@@ -787,7 +829,7 @@
           <!-- Allows us to get the apache-ds bundle artifacts -->
           <groupId>org.apache.felix</groupId>
           <artifactId>maven-bundle-plugin</artifactId>
-          <version>2.5.3</version>
+          <version>3.0.1</version>
         </plugin>
         <plugin>
           <groupId>net.revelc.code</groupId>
@@ -876,7 +918,7 @@
                 <checkSignatureRule implementation="org.codehaus.mojo.animal_sniffer.enforcer.CheckSignatureRule">
                   <signature>
                     <groupId>org.codehaus.mojo.signature</groupId>
-                    <artifactId>java17</artifactId>
+                    <artifactId>java18</artifactId>
                     <version>1.0</version>
                   </signature>
                 </checkSignatureRule>
@@ -997,7 +1039,7 @@
           <dependency>
             <groupId>com.puppycrawl.tools</groupId>
             <artifactId>checkstyle</artifactId>
-            <version>6.14.1</version>
+            <version>6.18</version>
           </dependency>
         </dependencies>
         <executions>
@@ -1344,28 +1386,6 @@
       </properties>
     </profile>
     <profile>
-      <id>jdk8</id>
-      <activation>
-        <jdk>[1.8,1.9)</jdk>
-      </activation>
-      <build>
-        <pluginManagement>
-          <plugins>
-            <plugin>
-              <groupId>org.apache.maven.plugins</groupId>
-              <artifactId>maven-javadoc-plugin</artifactId>
-              <configuration>
-                <quiet>true</quiet>
-                <javadocVersion>1.8</javadocVersion>
-                <additionalJOption>-J-Xmx512m</additionalJOption>
-                <additionalparam>-Xdoclint:all,-Xdoclint:-missing</additionalparam>
-              </configuration>
-            </plugin>
-          </plugins>
-        </pluginManagement>
-      </build>
-    </profile>
-    <profile>
       <id>performanceTests</id>
       <build>
         <pluginManagement>
diff --git a/proxy/pom.xml b/proxy/pom.xml
index 2aee90b..6bb4eb2 100644
--- a/proxy/pom.xml
+++ b/proxy/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.accumulo</groupId>
     <artifactId>accumulo-project</artifactId>
-    <version>1.8.0-SNAPSHOT</version>
+    <version>2.0.0-SNAPSHOT</version>
   </parent>
   <artifactId>accumulo-proxy</artifactId>
   <name>Apache Accumulo Proxy</name>
diff --git a/proxy/src/main/java/org/apache/accumulo/proxy/ProxyServer.java b/proxy/src/main/java/org/apache/accumulo/proxy/ProxyServer.java
index a62e1a1..c4b422c 100644
--- a/proxy/src/main/java/org/apache/accumulo/proxy/ProxyServer.java
+++ b/proxy/src/main/java/org/apache/accumulo/proxy/ProxyServer.java
@@ -85,7 +85,6 @@
 import org.apache.accumulo.core.security.SystemPermission;
 import org.apache.accumulo.core.security.TablePermission;
 import org.apache.accumulo.core.util.ByteBufferUtil;
-import org.apache.accumulo.core.util.DeprecationUtil;
 import org.apache.accumulo.core.util.TextUtil;
 import org.apache.accumulo.proxy.thrift.AccumuloProxy;
 import org.apache.accumulo.proxy.thrift.BatchScanOptions;
@@ -190,23 +189,18 @@
 
   public ProxyServer(Properties props) {
 
-    String useMock = props.getProperty("useMockInstance");
-    if (useMock != null && Boolean.parseBoolean(useMock))
-      instance = DeprecationUtil.makeMockInstance(this.getClass().getName());
-    else {
-      ClientConfiguration clientConf;
-      if (props.containsKey("clientConfigurationFile")) {
-        String clientConfFile = props.getProperty("clientConfigurationFile");
-        try {
-          clientConf = new ClientConfiguration(clientConfFile);
-        } catch (ConfigurationException e) {
-          throw new RuntimeException(e);
-        }
-      } else {
-        clientConf = ClientConfiguration.loadDefault();
+    ClientConfiguration clientConf;
+    if (props.containsKey("clientConfigurationFile")) {
+      String clientConfFile = props.getProperty("clientConfigurationFile");
+      try {
+        clientConf = new ClientConfiguration(clientConfFile);
+      } catch (ConfigurationException e) {
+        throw new RuntimeException(e);
       }
-      instance = new ZooKeeperInstance(clientConf.withInstance(props.getProperty("instance")).withZkHosts(props.getProperty("zookeepers")));
+    } else {
+      clientConf = ClientConfiguration.loadDefault();
     }
+    instance = new ZooKeeperInstance(clientConf.withInstance(props.getProperty("instance")).withZkHosts(props.getProperty("zookeepers")));
 
     try {
       String tokenProp = props.getProperty("tokenClass", PasswordToken.class.getName());
@@ -1472,8 +1466,8 @@
       Set<String> propertiesToExclude) throws org.apache.accumulo.proxy.thrift.AccumuloException, org.apache.accumulo.proxy.thrift.AccumuloSecurityException,
       org.apache.accumulo.proxy.thrift.TableNotFoundException, org.apache.accumulo.proxy.thrift.TableExistsException, TException {
     try {
-      propertiesToExclude = propertiesToExclude == null ? new HashSet<String>() : propertiesToExclude;
-      propertiesToSet = propertiesToSet == null ? new HashMap<String,String>() : propertiesToSet;
+      propertiesToExclude = propertiesToExclude == null ? new HashSet<>() : propertiesToExclude;
+      propertiesToSet = propertiesToSet == null ? new HashMap<>() : propertiesToSet;
 
       getConnector(login).tableOperations().clone(tableName, newTableName, flush, propertiesToSet, propertiesToExclude);
     } catch (Exception e) {
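The `ProxyServer` hunk just above drops the explicit type arguments from `new HashSet<String>()` / `new HashMap<String,String>()` inside ternaries, which works because Java 8's target typing for poly expressions lets the diamond operator infer its type arguments inside a conditional expression (matching the `maven.compiler.source` bump to 1.8 earlier in this commit). A small sketch:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class DiamondTernary {

  // Under Java 8 target typing the diamond in the conditional picks up
  // Set<String> from the method's return type, so the explicit
  // new HashSet<String>() spelling is no longer needed.
  static Set<String> orEmptySet(Set<String> s) {
    return s == null ? new HashSet<>() : s;
  }

  static Map<String,String> orEmptyMap(Map<String,String> m) {
    return m == null ? new HashMap<>() : m;
  }
}
```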
diff --git a/server/base/pom.xml b/server/base/pom.xml
index 00e0d14..0bb1efd 100644
--- a/server/base/pom.xml
+++ b/server/base/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.accumulo</groupId>
     <artifactId>accumulo-project</artifactId>
-    <version>1.8.0-SNAPSHOT</version>
+    <version>2.0.0-SNAPSHOT</version>
     <relativePath>../../pom.xml</relativePath>
   </parent>
   <artifactId>accumulo-server-base</artifactId>
diff --git a/server/base/src/main/java/org/apache/accumulo/server/AccumuloServerContext.java b/server/base/src/main/java/org/apache/accumulo/server/AccumuloServerContext.java
index ce7bfad..1a61707 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/AccumuloServerContext.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/AccumuloServerContext.java
@@ -28,11 +28,9 @@
 import org.apache.accumulo.core.client.impl.ClientContext;
 import org.apache.accumulo.core.client.impl.ConnectorImpl;
 import org.apache.accumulo.core.client.impl.Credentials;
-import org.apache.accumulo.core.client.security.tokens.PasswordToken;
 import org.apache.accumulo.core.conf.AccumuloConfiguration;
 import org.apache.accumulo.core.conf.Property;
 import org.apache.accumulo.core.rpc.SslConnectionParams;
-import org.apache.accumulo.core.util.DeprecationUtil;
 import org.apache.accumulo.server.client.HdfsZooInstance;
 import org.apache.accumulo.server.conf.ServerConfigurationFactory;
 import org.apache.accumulo.server.rpc.SaslServerConnectionParams;
@@ -94,9 +92,6 @@
    * Get the credentials to use for this instance so it can be passed to the superclass during construction.
    */
   private static Credentials getCredentials(Instance instance) {
-    if (DeprecationUtil.isMockInstance(instance)) {
-      return new Credentials("mockSystemUser", new PasswordToken("mockSystemPassword"));
-    }
     return SystemCredentials.get(instance);
   }
 
diff --git a/server/base/src/main/java/org/apache/accumulo/server/cli/ClientOnDefaultTable.java b/server/base/src/main/java/org/apache/accumulo/server/cli/ClientOnDefaultTable.java
index a058660..7af978b 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/cli/ClientOnDefaultTable.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/cli/ClientOnDefaultTable.java
@@ -18,7 +18,6 @@
 
 import org.apache.accumulo.core.client.Instance;
 import org.apache.accumulo.core.client.ZooKeeperInstance;
-import org.apache.accumulo.core.util.DeprecationUtil;
 import org.apache.accumulo.server.client.HdfsZooInstance;
 
 public class ClientOnDefaultTable extends org.apache.accumulo.core.cli.ClientOnDefaultTable {
@@ -31,8 +30,6 @@
     if (cachedInstance != null)
       return cachedInstance;
 
-    if (mock)
-      return cachedInstance = DeprecationUtil.makeMockInstance(instance);
     if (instance == null) {
       return cachedInstance = HdfsZooInstance.getInstance();
     }
diff --git a/server/base/src/main/java/org/apache/accumulo/server/cli/ClientOnRequiredTable.java b/server/base/src/main/java/org/apache/accumulo/server/cli/ClientOnRequiredTable.java
index e02dd93..c966723 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/cli/ClientOnRequiredTable.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/cli/ClientOnRequiredTable.java
@@ -18,7 +18,6 @@
 
 import org.apache.accumulo.core.client.Instance;
 import org.apache.accumulo.core.client.ZooKeeperInstance;
-import org.apache.accumulo.core.util.DeprecationUtil;
 import org.apache.accumulo.server.client.HdfsZooInstance;
 
 public class ClientOnRequiredTable extends org.apache.accumulo.core.cli.ClientOnRequiredTable {
@@ -31,8 +30,6 @@
     if (cachedInstance != null)
       return cachedInstance;
 
-    if (mock)
-      return cachedInstance = DeprecationUtil.makeMockInstance(instance);
     if (instance == null) {
       return cachedInstance = HdfsZooInstance.getInstance();
     }
diff --git a/server/base/src/main/java/org/apache/accumulo/server/cli/ClientOpts.java b/server/base/src/main/java/org/apache/accumulo/server/cli/ClientOpts.java
index c91471e..81a42f8 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/cli/ClientOpts.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/cli/ClientOpts.java
@@ -18,7 +18,6 @@
 
 import org.apache.accumulo.core.client.Instance;
 import org.apache.accumulo.core.client.ZooKeeperInstance;
-import org.apache.accumulo.core.util.DeprecationUtil;
 import org.apache.accumulo.server.client.HdfsZooInstance;
 
 public class ClientOpts extends org.apache.accumulo.core.cli.ClientOpts {
@@ -29,8 +28,6 @@
 
   @Override
   public Instance getInstance() {
-    if (mock)
-      return DeprecationUtil.makeMockInstance(instance);
     if (instance == null) {
       return HdfsZooInstance.getInstance();
     }
diff --git a/server/base/src/main/java/org/apache/accumulo/server/client/HdfsZooInstance.java b/server/base/src/main/java/org/apache/accumulo/server/client/HdfsZooInstance.java
index e4e73d2..bca8ddf 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/client/HdfsZooInstance.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/client/HdfsZooInstance.java
@@ -19,7 +19,6 @@
 import static java.nio.charset.StandardCharsets.UTF_8;
 
 import java.io.IOException;
-import java.nio.ByteBuffer;
 import java.util.Collections;
 import java.util.List;
 import java.util.UUID;
@@ -35,28 +34,24 @@
 import org.apache.accumulo.core.client.impl.Credentials;
 import org.apache.accumulo.core.client.impl.InstanceOperationsImpl;
 import org.apache.accumulo.core.client.security.tokens.AuthenticationToken;
-import org.apache.accumulo.core.client.security.tokens.PasswordToken;
 import org.apache.accumulo.core.conf.AccumuloConfiguration;
 import org.apache.accumulo.core.conf.Property;
 import org.apache.accumulo.core.conf.SiteConfiguration;
 import org.apache.accumulo.core.metadata.RootTable;
-import org.apache.accumulo.core.util.ByteBufferUtil;
 import org.apache.accumulo.core.util.OpTimer;
-import org.apache.accumulo.core.util.TextUtil;
 import org.apache.accumulo.core.zookeeper.ZooUtil;
 import org.apache.accumulo.fate.zookeeper.ZooCache;
 import org.apache.accumulo.fate.zookeeper.ZooCacheFactory;
 import org.apache.accumulo.server.Accumulo;
-import org.apache.accumulo.server.conf.ServerConfigurationFactory;
 import org.apache.accumulo.server.fs.VolumeManager;
 import org.apache.accumulo.server.fs.VolumeManagerImpl;
 import org.apache.accumulo.server.zookeeper.ZooLock;
 import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.io.Text;
-import com.google.common.base.Joiner;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
+import com.google.common.base.Joiner;
+
 /**
  * An implementation of Instance that looks in HDFS and ZooKeeper to find the master and root tablet location.
  *
@@ -177,38 +172,6 @@
     return new ConnectorImpl(new ClientContext(this, new Credentials(principal, token), SiteConfiguration.getInstance()));
   }
 
-  @Deprecated
-  @Override
-  public Connector getConnector(String user, byte[] pass) throws AccumuloException, AccumuloSecurityException {
-    return getConnector(user, new PasswordToken(pass));
-  }
-
-  @Deprecated
-  @Override
-  public Connector getConnector(String user, ByteBuffer pass) throws AccumuloException, AccumuloSecurityException {
-    return getConnector(user, ByteBufferUtil.toBytes(pass));
-  }
-
-  @Deprecated
-  @Override
-  public Connector getConnector(String user, CharSequence pass) throws AccumuloException, AccumuloSecurityException {
-    return getConnector(user, TextUtil.getBytes(new Text(pass.toString())));
-  }
-
-  private AccumuloConfiguration conf = null;
-
-  @Deprecated
-  @Override
-  public AccumuloConfiguration getConfiguration() {
-    return conf = conf == null ? new ServerConfigurationFactory(this).getConfiguration() : conf;
-  }
-
-  @Override
-  @Deprecated
-  public void setConfiguration(AccumuloConfiguration conf) {
-    this.conf = conf;
-  }
-
   public static void main(String[] args) {
     Instance instance = HdfsZooInstance.getInstance();
     System.out.println("Instance Name: " + instance.getInstanceName());
diff --git a/server/base/src/main/java/org/apache/accumulo/server/conf/NamespaceConfiguration.java b/server/base/src/main/java/org/apache/accumulo/server/conf/NamespaceConfiguration.java
index 1ca083e..20ad8c9 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/conf/NamespaceConfiguration.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/conf/NamespaceConfiguration.java
@@ -17,6 +17,7 @@
 package org.apache.accumulo.server.conf;
 
 import java.util.Map;
+import java.util.function.Predicate;
 
 import org.apache.accumulo.core.Constants;
 import org.apache.accumulo.core.client.Instance;
@@ -33,8 +34,6 @@
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-import com.google.common.base.Predicate;
-
 public class NamespaceConfiguration extends ObservableConfiguration {
   private static final Logger log = LoggerFactory.getLogger(NamespaceConfiguration.class);
 
@@ -110,10 +109,10 @@
     }
 
     @Override
-    public boolean apply(String key) {
+    public boolean test(String key) {
       if (isIteratorOrConstraint(key))
         return false;
-      return userFilter.apply(key);
+      return userFilter.test(key);
     }
 
   }
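The hunks above (and the matching ones in `TableConfiguration`, `ZooCachePropertyAccessor`, and `ZooConfiguration` below) swap Guava's `com.google.common.base.Predicate` for `java.util.function.Predicate`; the only call-site change is the rename `apply` → `test`. A minimal standalone sketch of the renamed method (class name and sample property keys are illustrative, not from Accumulo):

```java
import java.util.function.Predicate;

public class PredicateDemo {
    // java.util.function.Predicate uses test(), where Guava's Predicate used apply()
    static final Predicate<String> IS_TABLE_PROP = key -> key.startsWith("table.");

    // composition comes for free with the JDK interface
    static final Predicate<String> IS_CUSTOM_TABLE_PROP =
        IS_TABLE_PROP.and(key -> key.contains(".custom."));

    public static void main(String[] args) {
        System.out.println(IS_TABLE_PROP.test("table.custom.preferredVolumes")); // true
        System.out.println(IS_CUSTOM_TABLE_PROP.test("table.split.threshold"));  // false
    }
}
```

One practical benefit of the JDK type: default methods like `and`, `or`, and `negate` make filter composition direct instead of requiring Guava's `Predicates` helper class.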
diff --git a/server/base/src/main/java/org/apache/accumulo/server/conf/TableConfiguration.java b/server/base/src/main/java/org/apache/accumulo/server/conf/TableConfiguration.java
index 6c53c5b..f040701 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/conf/TableConfiguration.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/conf/TableConfiguration.java
@@ -17,6 +17,7 @@
 package org.apache.accumulo.server.conf;
 
 import java.util.Map;
+import java.util.function.Predicate;
 
 import org.apache.accumulo.core.Constants;
 import org.apache.accumulo.core.client.Instance;
@@ -30,8 +31,6 @@
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-import com.google.common.base.Predicate;
-
 public class TableConfiguration extends ObservableConfiguration {
   private static final Logger log = LoggerFactory.getLogger(TableConfiguration.class);
 
diff --git a/server/base/src/main/java/org/apache/accumulo/server/conf/ZooCachePropertyAccessor.java b/server/base/src/main/java/org/apache/accumulo/server/conf/ZooCachePropertyAccessor.java
index 79f9a59..68b1847 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/conf/ZooCachePropertyAccessor.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/conf/ZooCachePropertyAccessor.java
@@ -20,6 +20,7 @@
 
 import java.util.List;
 import java.util.Map;
+import java.util.function.Predicate;
 
 import org.apache.accumulo.core.conf.AccumuloConfiguration;
 import org.apache.accumulo.core.conf.Property;
@@ -27,8 +28,6 @@
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-import com.google.common.base.Predicate;
-
 /**
  * A helper object for accessing properties in a {@link ZooCache}.
  */
@@ -143,7 +142,7 @@
     List<String> children = propCache.getChildren(path);
     if (children != null) {
       for (String child : children) {
-        if (child != null && filter.apply(child)) {
+        if (child != null && filter.test(child)) {
           String value = get(path + "/" + child);
           if (value != null) {
             props.put(child, value);
diff --git a/server/base/src/main/java/org/apache/accumulo/server/conf/ZooConfiguration.java b/server/base/src/main/java/org/apache/accumulo/server/conf/ZooConfiguration.java
index f178ff1..51f713c 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/conf/ZooConfiguration.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/conf/ZooConfiguration.java
@@ -22,6 +22,7 @@
 import java.util.HashMap;
 import java.util.List;
 import java.util.Map;
+import java.util.function.Predicate;
 
 import org.apache.accumulo.core.Constants;
 import org.apache.accumulo.core.conf.AccumuloConfiguration;
@@ -31,8 +32,6 @@
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-import com.google.common.base.Predicate;
-
 public class ZooConfiguration extends AccumuloConfiguration {
   private static final Logger log = LoggerFactory.getLogger(ZooConfiguration.class);
 
@@ -112,7 +111,7 @@
     List<String> children = propCache.getChildren(ZooUtil.getRoot(instanceId) + Constants.ZCONFIG);
     if (children != null) {
       for (String child : children) {
-        if (child != null && filter.apply(child)) {
+        if (child != null && filter.test(child)) {
           String value = getRaw(child);
           if (value != null)
             props.put(child, value);
diff --git a/server/base/src/main/java/org/apache/accumulo/server/fs/PreferredVolumeChooser.java b/server/base/src/main/java/org/apache/accumulo/server/fs/PreferredVolumeChooser.java
index ec7c360..4aca493 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/fs/PreferredVolumeChooser.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/fs/PreferredVolumeChooser.java
@@ -25,6 +25,7 @@
 import java.util.HashSet;
 import java.util.Map;
 import java.util.Set;
+import java.util.function.Predicate;
 
 import org.apache.accumulo.core.conf.Property;
 import org.apache.accumulo.core.volume.Volume;
@@ -36,8 +37,6 @@
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-import com.google.common.base.Predicate;
-
 /**
  * A {@link RandomVolumeChooser} that limits its choices from a given set of options to the subset of those options preferred for a particular table. Defaults
  * to selecting from all of the options presented. Can be customized via the table property {@value #PREFERRED_VOLUMES_CUSTOM_KEY}, which should contain a comma
@@ -51,12 +50,7 @@
    */
   public static final String PREFERRED_VOLUMES_CUSTOM_KEY = "table.custom.preferredVolumes";
   // TODO ACCUMULO-3417 replace this with the ability to retrieve by String key.
-  private static final Predicate<String> PREFERRED_VOLUMES_FILTER = new Predicate<String>() {
-    @Override
-    public boolean apply(String key) {
-      return PREFERRED_VOLUMES_CUSTOM_KEY.equals(key);
-    }
-  };
+  private static final Predicate<String> PREFERRED_VOLUMES_FILTER = key -> PREFERRED_VOLUMES_CUSTOM_KEY.equals(key);
 
   @SuppressWarnings("unchecked")
   private final Map<String,Set<String>> parsedPreferredVolumes = Collections.synchronizedMap(new LRUMap(1000));
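The `PreferredVolumeChooser` hunk above also collapses the anonymous `Predicate` class to a lambda, which is possible because `java.util.function.Predicate` is a functional interface. A method reference is an equally valid form (the constant value is copied from the hunk; the class name is illustrative):

```java
import java.util.function.Predicate;

public class VolumeFilterDemo {
    static final String PREFERRED_VOLUMES_CUSTOM_KEY = "table.custom.preferredVolumes";

    // lambda form, as written in the hunk above
    static final Predicate<String> LAMBDA =
        key -> PREFERRED_VOLUMES_CUSTOM_KEY.equals(key);

    // equivalent method-reference form
    static final Predicate<String> METHOD_REF = PREFERRED_VOLUMES_CUSTOM_KEY::equals;

    public static void main(String[] args) {
        System.out.println(LAMBDA.test(PREFERRED_VOLUMES_CUSTOM_KEY));     // true
        System.out.println(METHOD_REF.test("table.split.threshold"));      // false
    }
}
```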
diff --git a/server/base/src/main/java/org/apache/accumulo/server/fs/VolumeChooserEnvironment.java b/server/base/src/main/java/org/apache/accumulo/server/fs/VolumeChooserEnvironment.java
index b6d27cb..a944791 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/fs/VolumeChooserEnvironment.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/fs/VolumeChooserEnvironment.java
@@ -16,7 +16,7 @@
  */
 package org.apache.accumulo.server.fs;
 
-import com.google.common.base.Optional;
+import java.util.Optional;
 
 public class VolumeChooserEnvironment {
 
diff --git a/server/base/src/main/java/org/apache/accumulo/server/fs/VolumeManager.java b/server/base/src/main/java/org/apache/accumulo/server/fs/VolumeManager.java
index e761e4f..09436a5 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/fs/VolumeManager.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/fs/VolumeManager.java
@@ -18,6 +18,7 @@
 
 import java.io.IOException;
 import java.util.Collection;
+import java.util.Optional;
 
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.volume.Volume;
@@ -28,8 +29,6 @@
 import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.Path;
 
-import com.google.common.base.Optional;
-
 /**
  * A wrapper around multiple hadoop FileSystem objects, which are assumed to be different volumes. This also concentrates a bunch of meta-operations like
  * waiting for SAFE_MODE, and closing WALs. N.B. implementations must be thread safe.
diff --git a/server/base/src/main/java/org/apache/accumulo/server/fs/VolumeManagerImpl.java b/server/base/src/main/java/org/apache/accumulo/server/fs/VolumeManagerImpl.java
index d9df424..60c2ece 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/fs/VolumeManagerImpl.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/fs/VolumeManagerImpl.java
@@ -27,6 +27,7 @@
 import java.util.HashSet;
 import java.util.Map;
 import java.util.Map.Entry;
+import java.util.Optional;
 
 import org.apache.accumulo.core.conf.AccumuloConfiguration;
 import org.apache.accumulo.core.conf.DefaultConfiguration;
@@ -57,7 +58,6 @@
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-import com.google.common.base.Optional;
 import com.google.common.collect.HashMultimap;
 import com.google.common.collect.Multimap;
 
diff --git a/server/base/src/main/java/org/apache/accumulo/server/fs/VolumeUtil.java b/server/base/src/main/java/org/apache/accumulo/server/fs/VolumeUtil.java
index 192ae77..9414909 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/fs/VolumeUtil.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/fs/VolumeUtil.java
@@ -23,6 +23,7 @@
 import java.util.HashSet;
 import java.util.List;
 import java.util.Map.Entry;
+import java.util.Optional;
 import java.util.SortedMap;
 import java.util.TreeMap;
 
@@ -49,12 +50,9 @@
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-import com.google.common.base.Optional;
-
 /**
  * Utility methods for managing absolute URIs contained in Accumulo metadata.
  */
-
 public class VolumeUtil {
 
   private static final Logger log = LoggerFactory.getLogger(VolumeUtil.class);
diff --git a/server/base/src/main/java/org/apache/accumulo/server/init/Initialize.java b/server/base/src/main/java/org/apache/accumulo/server/init/Initialize.java
index 0ccf51f..0758091 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/init/Initialize.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/init/Initialize.java
@@ -30,6 +30,7 @@
 import java.util.HashSet;
 import java.util.Locale;
 import java.util.Map.Entry;
+import java.util.Optional;
 import java.util.Set;
 import java.util.TreeMap;
 import java.util.UUID;
@@ -113,7 +114,6 @@
 import com.beust.jcommander.Parameter;
 import com.google.auto.service.AutoService;
 import com.google.common.base.Joiner;
-import com.google.common.base.Optional;
 
 import jline.console.ConsoleReader;
 
@@ -320,8 +320,8 @@
     UUID uuid = UUID.randomUUID();
     // the actual disk locations of the root table and tablets
     String[] configuredVolumes = VolumeConfiguration.getVolumeUris(SiteConfiguration.getInstance());
-    final String rootTabletDir = new Path(fs.choose(Optional.<String> absent(), configuredVolumes) + Path.SEPARATOR + ServerConstants.TABLE_DIR
-        + Path.SEPARATOR + RootTable.ID + RootTable.ROOT_TABLET_LOCATION).toString();
+    final String rootTabletDir = new Path(fs.choose(Optional.<String> empty(), configuredVolumes) + Path.SEPARATOR + ServerConstants.TABLE_DIR + Path.SEPARATOR
+        + RootTable.ID + RootTable.ROOT_TABLET_LOCATION).toString();
 
     try {
       initZooKeeper(opts, uuid.toString(), instanceNamePath, rootTabletDir);
@@ -419,11 +419,11 @@
     // initialize initial system tables config in zookeeper
     initSystemTablesConfig();
 
-    String tableMetadataTabletDir = fs.choose(Optional.<String> absent(), ServerConstants.getBaseUris()) + Constants.HDFS_TABLES_DIR + Path.SEPARATOR
+    String tableMetadataTabletDir = fs.choose(Optional.<String> empty(), ServerConstants.getBaseUris()) + Constants.HDFS_TABLES_DIR + Path.SEPARATOR
         + MetadataTable.ID + TABLE_TABLETS_TABLET_DIR;
-    String replicationTableDefaultTabletDir = fs.choose(Optional.<String> absent(), ServerConstants.getBaseUris()) + Constants.HDFS_TABLES_DIR + Path.SEPARATOR
+    String replicationTableDefaultTabletDir = fs.choose(Optional.<String> empty(), ServerConstants.getBaseUris()) + Constants.HDFS_TABLES_DIR + Path.SEPARATOR
         + ReplicationTable.ID + Constants.DEFAULT_TABLET_LOCATION;
-    String defaultMetadataTabletDir = fs.choose(Optional.<String> absent(), ServerConstants.getBaseUris()) + Constants.HDFS_TABLES_DIR + Path.SEPARATOR
+    String defaultMetadataTabletDir = fs.choose(Optional.<String> empty(), ServerConstants.getBaseUris()) + Constants.HDFS_TABLES_DIR + Path.SEPARATOR
         + MetadataTable.ID + Constants.DEFAULT_TABLET_LOCATION;
 
     // create table and default tablets directories
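The `Initialize` hunks above replace Guava's `Optional.<String> absent()` with the JDK's `Optional.<String> empty()`. The `fs.choose(...)` signature is Accumulo's; the `Optional` rename itself can be sketched in isolation (method and values below are illustrative):

```java
import java.util.Optional;

public class OptionalDemo {
    // Guava: Optional.absent(), Optional.of(v), optional.or(fallback)
    // JDK:   Optional.empty(),  Optional.of(v), optional.orElse(fallback)
    static String describe(Optional<String> tableId) {
        return tableId.map(id -> "table " + id).orElse("no table context");
    }

    public static void main(String[] args) {
        System.out.println(describe(Optional.empty()));  // no table context
        System.out.println(describe(Optional.of("!0"))); // table !0
    }
}
```

Note the rename pairs: `absent()` → `empty()` and, at read sites, Guava's `or(...)` → the JDK's `orElse(...)`.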
diff --git a/server/base/src/main/java/org/apache/accumulo/server/master/balancer/GroupBalancer.java b/server/base/src/main/java/org/apache/accumulo/server/master/balancer/GroupBalancer.java
index 1838536..f412658 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/master/balancer/GroupBalancer.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/master/balancer/GroupBalancer.java
@@ -31,6 +31,7 @@
 import java.util.Objects;
 import java.util.Set;
 import java.util.SortedMap;
+import java.util.function.Function;
 
 import org.apache.accumulo.core.client.IsolatedScanner;
 import org.apache.accumulo.core.client.RowIterator;
@@ -49,7 +50,6 @@
 import org.apache.accumulo.server.master.state.TabletMigration;
 import org.apache.commons.lang.mutable.MutableInt;
 
-import com.google.common.base.Function;
 import com.google.common.collect.HashBasedTable;
 import com.google.common.collect.HashMultimap;
 import com.google.common.collect.Iterators;
@@ -778,7 +778,8 @@
 
         RowIterator rowIter = new RowIterator(scanner);
 
-        return Iterators.transform(rowIter, new LocationFunction());
+        Function<Iterator<Entry<Key,Value>>,Pair<KeyExtent,Location>> f = new LocationFunction();
+        return Iterators.transform(rowIter, x -> f.apply(x));
       } catch (Exception e) {
         throw new RuntimeException(e);
       }
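The `GroupBalancer` hunk above keeps `LocationFunction` but retypes it as a `java.util.function.Function`, then adapts it with `x -> f.apply(x)` because this Guava version's `Iterators.transform` still expects `com.google.common.base.Function`. The lambda compiles against either single-method interface, so it bridges the two types. A pure-JDK sketch of the same adaptation, with a stand-in `LegacyFunction` playing the role of Guava's type (all names here are illustrative):

```java
import java.util.Iterator;
import java.util.List;
import java.util.function.Function;

public class TransformDemo {
    // Stand-in for Guava's com.google.common.base.Function-typed API:
    // any single-method interface can absorb a JDK Function via `x -> f.apply(x)`.
    interface LegacyFunction<A, B> { B apply(A input); }

    static <A, B> Iterator<B> transform(Iterator<A> it, LegacyFunction<A, B> f) {
        return new Iterator<B>() {
            public boolean hasNext() { return it.hasNext(); }
            public B next() { return f.apply(it.next()); }
        };
    }

    static List<Integer> lengths(List<String> rows) {
        Function<String, Integer> f = String::length;  // JDK Function, as in the hunk
        // adapt the JDK type to the legacy SAM type with a lambda
        Iterator<Integer> out = transform(rows.iterator(), x -> f.apply(x));
        List<Integer> result = new java.util.ArrayList<>();
        out.forEachRemaining(result::add);
        return result;
    }

    public static void main(String[] args) {
        System.out.println(lengths(List.of("a", "bcd"))); // [1, 3]
    }
}
```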
diff --git a/server/base/src/main/java/org/apache/accumulo/server/master/balancer/RegexGroupBalancer.java b/server/base/src/main/java/org/apache/accumulo/server/master/balancer/RegexGroupBalancer.java
index 0d07a77..8197028 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/master/balancer/RegexGroupBalancer.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/master/balancer/RegexGroupBalancer.java
@@ -18,6 +18,7 @@
 package org.apache.accumulo.server.master.balancer;
 
 import java.util.Map;
+import java.util.function.Function;
 import java.util.regex.Matcher;
 import java.util.regex.Pattern;
 
@@ -26,8 +27,6 @@
 import org.apache.accumulo.core.data.impl.KeyExtent;
 import org.apache.hadoop.io.Text;
 
-import com.google.common.base.Function;
-
 /**
  * A {@link GroupBalancer} that groups tablets using a configurable regex. To use this balancer configure the following settings for your table then configure
  * this balancer for your table.
diff --git a/server/base/src/main/java/org/apache/accumulo/server/master/state/TabletStateChangeIterator.java b/server/base/src/main/java/org/apache/accumulo/server/master/state/TabletStateChangeIterator.java
index 00f86c6..c0dca50 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/master/state/TabletStateChangeIterator.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/master/state/TabletStateChangeIterator.java
@@ -16,11 +16,10 @@
  */
 package org.apache.accumulo.server.master.state;
 
-import static java.nio.charset.StandardCharsets.UTF_8;
-
 import java.io.IOException;
 import java.util.ArrayList;
 import java.util.Arrays;
+import java.util.Base64;
 import java.util.Collection;
 import java.util.Collections;
 import java.util.HashMap;
@@ -38,7 +37,6 @@
 import org.apache.accumulo.core.iterators.SortedKeyValueIterator;
 import org.apache.accumulo.core.master.thrift.MasterState;
 import org.apache.accumulo.core.util.AddressUtil;
-import org.apache.accumulo.core.util.Base64;
 import org.apache.accumulo.server.master.state.TabletLocationState.BadLocationStateException;
 import org.apache.hadoop.io.DataInputBuffer;
 import org.apache.hadoop.io.DataOutputBuffer;
@@ -92,7 +90,7 @@
     try {
       Set<KeyExtent> result = new HashSet<>();
       DataInputBuffer buffer = new DataInputBuffer();
-      byte[] data = Base64.decodeBase64(migrations.getBytes(UTF_8));
+      byte[] data = Base64.getDecoder().decode(migrations);
       buffer.reset(data, data.length);
       while (buffer.available() > 0) {
         KeyExtent extent = new KeyExtent();
@@ -138,7 +136,7 @@
     try {
       Map<String,MergeInfo> result = new HashMap<>();
       DataInputBuffer buffer = new DataInputBuffer();
-      byte[] data = Base64.decodeBase64(merges.getBytes(UTF_8));
+      byte[] data = Base64.getDecoder().decode(merges);
       buffer.reset(data, data.length);
       while (buffer.available() > 0) {
         MergeInfo mergeInfo = new MergeInfo();
@@ -240,7 +238,7 @@
     } catch (Exception ex) {
       throw new RuntimeException(ex);
     }
-    String encoded = Base64.encodeBase64String(Arrays.copyOf(buffer.getData(), buffer.getLength()));
+    String encoded = Base64.getEncoder().encodeToString(Arrays.copyOf(buffer.getData(), buffer.getLength()));
     cfg.addOption(MERGES_OPTION, encoded);
   }
 
@@ -253,7 +251,7 @@
     } catch (Exception ex) {
       throw new RuntimeException(ex);
     }
-    String encoded = Base64.encodeBase64String(Arrays.copyOf(buffer.getData(), buffer.getLength()));
+    String encoded = Base64.getEncoder().encodeToString(Arrays.copyOf(buffer.getData(), buffer.getLength()));
     cfg.addOption(MIGRATIONS_OPTION, encoded);
   }
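The `TabletStateChangeIterator` hunks above (and the Kerberos handlers below) replace the commons-codec-backed `org.apache.accumulo.core.util.Base64` with the JDK's `java.util.Base64`. The JDK decoder accepts a `String` directly, so the explicit `getBytes(UTF_8)` step drops out; note also that `getDecoder()` is strict and rejects characters outside the Base64 alphabet, where commons-codec silently skipped them. A round-trip sketch (class and method names are illustrative):

```java
import java.util.Arrays;
import java.util.Base64;

public class Base64Demo {
    // commons-codec: Base64.encodeBase64String(bytes) / Base64.decodeBase64(str.getBytes(UTF_8))
    // JDK:           Base64.getEncoder().encodeToString(bytes) / Base64.getDecoder().decode(str)
    static String encode(byte[] raw) {
        return Base64.getEncoder().encodeToString(raw);
    }

    static byte[] decode(String encoded) {
        // the JDK decoder takes the String directly; no explicit UTF-8 step needed
        return Base64.getDecoder().decode(encoded);
    }

    public static void main(String[] args) {
        byte[] payload = {1, 2, 3, (byte) 0xFF};
        String s = encode(payload);
        System.out.println(Arrays.equals(payload, decode(s))); // true
    }
}
```

For byte-array output (as in the `SystemCredentials` hunk below), `Base64.getEncoder().encode(bytes)` returns `byte[]` instead of a `String`.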
 
diff --git a/server/base/src/main/java/org/apache/accumulo/server/security/SystemCredentials.java b/server/base/src/main/java/org/apache/accumulo/server/security/SystemCredentials.java
index 9473aca..161e15e 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/security/SystemCredentials.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/security/SystemCredentials.java
@@ -24,6 +24,7 @@
 import java.security.MessageDigest;
 import java.security.NoSuchAlgorithmException;
 import java.security.SecurityPermission;
+import java.util.Base64;
 import java.util.Map.Entry;
 
 import org.apache.accumulo.core.Constants;
@@ -35,7 +36,6 @@
 import org.apache.accumulo.core.conf.Property;
 import org.apache.accumulo.core.conf.SiteConfiguration;
 import org.apache.accumulo.core.security.thrift.TCredentials;
-import org.apache.accumulo.core.util.Base64;
 import org.apache.accumulo.server.ServerConstants;
 import org.apache.hadoop.io.Writable;
 
@@ -137,7 +137,7 @@
         // this is impossible with ByteArrayOutputStream; crash hard if this happens
         throw new RuntimeException(e);
       }
-      return new SystemToken(Base64.encodeBase64(bytes.toByteArray()));
+      return new SystemToken(Base64.getEncoder().encode(bytes.toByteArray()));
     }
   }
 
diff --git a/server/base/src/main/java/org/apache/accumulo/server/security/UserImpersonation.java b/server/base/src/main/java/org/apache/accumulo/server/security/UserImpersonation.java
index 97bc858..9e8f576 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/security/UserImpersonation.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/security/UserImpersonation.java
@@ -22,14 +22,11 @@
 import java.util.HashSet;
 import java.util.Iterator;
 import java.util.Map;
-import java.util.Map.Entry;
 import java.util.Set;
 
 import org.apache.accumulo.core.conf.AccumuloConfiguration;
 import org.apache.accumulo.core.conf.Property;
 import org.apache.commons.lang.StringUtils;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
 
 /**
  * When SASL is enabled, this parses properties from the site configuration to build up a set of all users capable of impersonating another user, the users
@@ -44,9 +41,8 @@
  */
 public class UserImpersonation {
 
-  private static final Logger log = LoggerFactory.getLogger(UserImpersonation.class);
   private static final Set<String> ALWAYS_TRUE = new AlwaysTrueSet<>();
-  private static final String ALL = "*", USERS = "users", HOSTS = "hosts";
+  private static final String ALL = "*";
 
   public static class AlwaysTrueSet<T> implements Set<T> {
 
@@ -173,7 +169,6 @@
 
   private final Map<String,UsersWithHosts> proxyUsers;
 
-  @SuppressWarnings("deprecation")
   public UserImpersonation(AccumuloConfiguration conf) {
     proxyUsers = new HashMap<>();
 
@@ -182,9 +177,6 @@
     if (!Property.INSTANCE_RPC_SASL_ALLOWED_USER_IMPERSONATION.getDefaultValue().equals(userConfig)) {
       String hostConfig = conf.get(Property.INSTANCE_RPC_SASL_ALLOWED_HOST_IMPERSONATION);
       parseOnelineConfiguration(userConfig, hostConfig);
-    } else {
-      // Otherwise, assume the old-style
-      parseMultiPropertyConfiguration(conf.getAllPropertiesWithPrefix(Property.INSTANCE_RPC_SASL_PROXYUSERS));
     }
   }
 
@@ -252,64 +244,6 @@
     }
   }
 
-  /**
-   * Parses all properties that start with {@link Property#INSTANCE_RPC_SASL_PROXYUSERS}. This approach was the original configuration method, but does not work
-   * with Ambari.
-   *
-   * @param configProperties
-   *          The relevant configuration properties for impersonation.
-   */
-  @SuppressWarnings("javadoc")
-  private void parseMultiPropertyConfiguration(Map<String,String> configProperties) {
-    @SuppressWarnings("deprecation")
-    final String configKey = Property.INSTANCE_RPC_SASL_PROXYUSERS.getKey();
-    for (Entry<String,String> entry : configProperties.entrySet()) {
-      String aclKey = entry.getKey().substring(configKey.length());
-      int index = aclKey.lastIndexOf('.');
-
-      if (-1 == index) {
-        throw new RuntimeException("Expected 2 elements in key suffix: " + aclKey);
-      }
-
-      final String remoteUser = aclKey.substring(0, index).trim(), usersOrHosts = aclKey.substring(index + 1).trim();
-      UsersWithHosts usersWithHosts = proxyUsers.get(remoteUser);
-      if (null == usersWithHosts) {
-        usersWithHosts = new UsersWithHosts();
-        proxyUsers.put(remoteUser, usersWithHosts);
-      }
-
-      if (USERS.equals(usersOrHosts)) {
-        String userString = entry.getValue().trim();
-        if (ALL.equals(userString)) {
-          usersWithHosts.setAcceptAllUsers(true);
-        } else if (!usersWithHosts.acceptsAllUsers()) {
-          Set<String> users = usersWithHosts.getUsers();
-          if (null == users) {
-            users = new HashSet<>();
-            usersWithHosts.setUsers(users);
-          }
-          String[] userValues = StringUtils.split(userString, ',');
-          users.addAll(Arrays.<String> asList(userValues));
-        }
-      } else if (HOSTS.equals(usersOrHosts)) {
-        String hostsString = entry.getValue().trim();
-        if (ALL.equals(hostsString)) {
-          usersWithHosts.setAcceptAllHosts(true);
-        } else if (!usersWithHosts.acceptsAllHosts()) {
-          Set<String> hosts = usersWithHosts.getHosts();
-          if (null == hosts) {
-            hosts = new HashSet<>();
-            usersWithHosts.setHosts(hosts);
-          }
-          String[] hostValues = StringUtils.split(hostsString, ',');
-          hosts.addAll(Arrays.<String> asList(hostValues));
-        }
-      } else {
-        log.debug("Ignoring key " + aclKey);
-      }
-    }
-  }
-
   public UsersWithHosts get(String remoteUser) {
     return proxyUsers.get(remoteUser);
   }
diff --git a/server/base/src/main/java/org/apache/accumulo/server/security/handler/KerberosAuthenticator.java b/server/base/src/main/java/org/apache/accumulo/server/security/handler/KerberosAuthenticator.java
index 018c901..504f291 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/security/handler/KerberosAuthenticator.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/security/handler/KerberosAuthenticator.java
@@ -19,6 +19,7 @@
 import static java.nio.charset.StandardCharsets.UTF_8;
 
 import java.util.Arrays;
+import java.util.Base64;
 import java.util.HashSet;
 import java.util.Set;
 
@@ -32,7 +33,6 @@
 import org.apache.accumulo.core.conf.AccumuloConfiguration;
 import org.apache.accumulo.core.conf.SiteConfiguration;
 import org.apache.accumulo.core.security.thrift.TCredentials;
-import org.apache.accumulo.core.util.Base64;
 import org.apache.accumulo.fate.zookeeper.IZooReaderWriter;
 import org.apache.accumulo.fate.zookeeper.ZooUtil.NodeExistsPolicy;
 import org.apache.accumulo.fate.zookeeper.ZooUtil.NodeMissingPolicy;
@@ -106,7 +106,7 @@
         zoo.putPersistentData(zkUserPath, principalData, NodeExistsPolicy.FAIL);
 
         // Create the root user in ZK using base64 encoded name (since the name is included in the znode)
-        createUserNodeInZk(Base64.encodeBase64String(principalData));
+        createUserNodeInZk(Base64.getEncoder().encodeToString(principalData));
       }
     } catch (KeeperException | InterruptedException e) {
       log.error("Failed to initialize security", e);
@@ -144,7 +144,7 @@
     Set<String> base64Users = zkAuthenticator.listUsers();
     Set<String> readableUsers = new HashSet<>();
     for (String base64User : base64Users) {
-      readableUsers.add(new String(Base64.decodeBase64(base64User), UTF_8));
+      readableUsers.add(new String(Base64.getDecoder().decode(base64User), UTF_8));
     }
     return readableUsers;
   }
@@ -156,7 +156,7 @@
     }
 
     try {
-      createUserNodeInZk(Base64.encodeBase64String(principal.getBytes(UTF_8)));
+      createUserNodeInZk(Base64.getEncoder().encodeToString(principal.getBytes(UTF_8)));
     } catch (KeeperException e) {
       if (e.code().equals(KeeperException.Code.NODEEXISTS)) {
         throw new AccumuloSecurityException(principal, SecurityErrorCode.USER_EXISTS, e);
@@ -171,7 +171,7 @@
 
   @Override
   public synchronized void dropUser(String user) throws AccumuloSecurityException {
-    final String encodedUser = Base64.encodeBase64String(user.getBytes(UTF_8));
+    final String encodedUser = Base64.getEncoder().encodeToString(user.getBytes(UTF_8));
     try {
       zkAuthenticator.dropUser(encodedUser);
     } catch (AccumuloSecurityException e) {
@@ -186,7 +186,7 @@
 
   @Override
   public synchronized boolean userExists(String user) throws AccumuloSecurityException {
-    user = Base64.encodeBase64String(user.getBytes(UTF_8));
+    user = Base64.getEncoder().encodeToString(user.getBytes(UTF_8));
     return zkAuthenticator.userExists(user);
   }
 
diff --git a/server/base/src/main/java/org/apache/accumulo/server/security/handler/KerberosAuthorizor.java b/server/base/src/main/java/org/apache/accumulo/server/security/handler/KerberosAuthorizor.java
index bd48440..fe7e2e3 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/security/handler/KerberosAuthorizor.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/security/handler/KerberosAuthorizor.java
@@ -19,13 +19,13 @@
 import static java.nio.charset.StandardCharsets.UTF_8;
 
 import java.nio.ByteBuffer;
+import java.util.Base64;
 import java.util.List;
 
 import org.apache.accumulo.core.client.AccumuloSecurityException;
 import org.apache.accumulo.core.client.impl.thrift.ThriftSecurityException;
 import org.apache.accumulo.core.security.Authorizations;
 import org.apache.accumulo.core.security.thrift.TCredentials;
-import org.apache.accumulo.core.util.Base64;
 
 /**
 * Kerberos principals might contain identifiers that are not valid ZNodes ('/'). Base64-encodes the principals before interacting with ZooKeeper.
@@ -50,32 +50,32 @@
 
   @Override
   public void initializeSecurity(TCredentials credentials, String rootuser) throws AccumuloSecurityException, ThriftSecurityException {
-    zkAuthorizor.initializeSecurity(credentials, Base64.encodeBase64String(rootuser.getBytes(UTF_8)));
+    zkAuthorizor.initializeSecurity(credentials, Base64.getEncoder().encodeToString(rootuser.getBytes(UTF_8)));
   }
 
   @Override
   public void changeAuthorizations(String user, Authorizations authorizations) throws AccumuloSecurityException {
-    zkAuthorizor.changeAuthorizations(Base64.encodeBase64String(user.getBytes(UTF_8)), authorizations);
+    zkAuthorizor.changeAuthorizations(Base64.getEncoder().encodeToString(user.getBytes(UTF_8)), authorizations);
   }
 
   @Override
   public Authorizations getCachedUserAuthorizations(String user) throws AccumuloSecurityException {
-    return zkAuthorizor.getCachedUserAuthorizations(Base64.encodeBase64String(user.getBytes(UTF_8)));
+    return zkAuthorizor.getCachedUserAuthorizations(Base64.getEncoder().encodeToString(user.getBytes(UTF_8)));
   }
 
   @Override
   public boolean isValidAuthorizations(String user, List<ByteBuffer> list) throws AccumuloSecurityException {
-    return zkAuthorizor.isValidAuthorizations(Base64.encodeBase64String(user.getBytes(UTF_8)), list);
+    return zkAuthorizor.isValidAuthorizations(Base64.getEncoder().encodeToString(user.getBytes(UTF_8)), list);
   }
 
   @Override
   public void initUser(String user) throws AccumuloSecurityException {
-    zkAuthorizor.initUser(Base64.encodeBase64String(user.getBytes(UTF_8)));
+    zkAuthorizor.initUser(Base64.getEncoder().encodeToString(user.getBytes(UTF_8)));
   }
 
   @Override
   public void dropUser(String user) throws AccumuloSecurityException {
-    user = Base64.encodeBase64String(user.getBytes(UTF_8));
+    user = Base64.getEncoder().encodeToString(user.getBytes(UTF_8));
     zkAuthorizor.dropUser(user);
   }
 
diff --git a/server/base/src/main/java/org/apache/accumulo/server/security/handler/KerberosPermissionHandler.java b/server/base/src/main/java/org/apache/accumulo/server/security/handler/KerberosPermissionHandler.java
index 7de48a6..777b05d 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/security/handler/KerberosPermissionHandler.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/security/handler/KerberosPermissionHandler.java
@@ -18,6 +18,8 @@
 
 import static java.nio.charset.StandardCharsets.UTF_8;
 
+import java.util.Base64;
+
 import org.apache.accumulo.core.client.AccumuloSecurityException;
 import org.apache.accumulo.core.client.NamespaceNotFoundException;
 import org.apache.accumulo.core.client.TableNotFoundException;
@@ -26,7 +28,6 @@
 import org.apache.accumulo.core.security.SystemPermission;
 import org.apache.accumulo.core.security.TablePermission;
 import org.apache.accumulo.core.security.thrift.TCredentials;
-import org.apache.accumulo.core.util.Base64;
 
 /**
 * Kerberos principals might contain identifiers that are not valid ZNodes ('/'). Base64-encodes the principals before interacting with ZooKeeper.
@@ -51,71 +52,71 @@
 
   @Override
   public void initializeSecurity(TCredentials credentials, String rootuser) throws AccumuloSecurityException, ThriftSecurityException {
-    zkPermissionHandler.initializeSecurity(credentials, Base64.encodeBase64String(rootuser.getBytes(UTF_8)));
+    zkPermissionHandler.initializeSecurity(credentials, Base64.getEncoder().encodeToString(rootuser.getBytes(UTF_8)));
   }
 
   @Override
   public boolean hasSystemPermission(String user, SystemPermission permission) throws AccumuloSecurityException {
-    return zkPermissionHandler.hasSystemPermission(Base64.encodeBase64String(user.getBytes(UTF_8)), permission);
+    return zkPermissionHandler.hasSystemPermission(Base64.getEncoder().encodeToString(user.getBytes(UTF_8)), permission);
   }
 
   @Override
   public boolean hasCachedSystemPermission(String user, SystemPermission permission) throws AccumuloSecurityException {
-    return zkPermissionHandler.hasCachedSystemPermission(Base64.encodeBase64String(user.getBytes(UTF_8)), permission);
+    return zkPermissionHandler.hasCachedSystemPermission(Base64.getEncoder().encodeToString(user.getBytes(UTF_8)), permission);
   }
 
   @Override
   public boolean hasTablePermission(String user, String table, TablePermission permission) throws AccumuloSecurityException, TableNotFoundException {
-    return zkPermissionHandler.hasTablePermission(Base64.encodeBase64String(user.getBytes(UTF_8)), table, permission);
+    return zkPermissionHandler.hasTablePermission(Base64.getEncoder().encodeToString(user.getBytes(UTF_8)), table, permission);
   }
 
   @Override
   public boolean hasCachedTablePermission(String user, String table, TablePermission permission) throws AccumuloSecurityException, TableNotFoundException {
-    return zkPermissionHandler.hasCachedTablePermission(Base64.encodeBase64String(user.getBytes(UTF_8)), table, permission);
+    return zkPermissionHandler.hasCachedTablePermission(Base64.getEncoder().encodeToString(user.getBytes(UTF_8)), table, permission);
   }
 
   @Override
   public boolean hasNamespacePermission(String user, String namespace, NamespacePermission permission) throws AccumuloSecurityException,
       NamespaceNotFoundException {
-    return zkPermissionHandler.hasNamespacePermission(Base64.encodeBase64String(user.getBytes(UTF_8)), namespace, permission);
+    return zkPermissionHandler.hasNamespacePermission(Base64.getEncoder().encodeToString(user.getBytes(UTF_8)), namespace, permission);
   }
 
   @Override
   public boolean hasCachedNamespacePermission(String user, String namespace, NamespacePermission permission) throws AccumuloSecurityException,
       NamespaceNotFoundException {
-    return zkPermissionHandler.hasCachedNamespacePermission(Base64.encodeBase64String(user.getBytes(UTF_8)), namespace, permission);
+    return zkPermissionHandler.hasCachedNamespacePermission(Base64.getEncoder().encodeToString(user.getBytes(UTF_8)), namespace, permission);
   }
 
   @Override
   public void grantSystemPermission(String user, SystemPermission permission) throws AccumuloSecurityException {
-    zkPermissionHandler.grantSystemPermission(Base64.encodeBase64String(user.getBytes(UTF_8)), permission);
+    zkPermissionHandler.grantSystemPermission(Base64.getEncoder().encodeToString(user.getBytes(UTF_8)), permission);
   }
 
   @Override
   public void revokeSystemPermission(String user, SystemPermission permission) throws AccumuloSecurityException {
-    zkPermissionHandler.revokeSystemPermission(Base64.encodeBase64String(user.getBytes(UTF_8)), permission);
+    zkPermissionHandler.revokeSystemPermission(Base64.getEncoder().encodeToString(user.getBytes(UTF_8)), permission);
   }
 
   @Override
   public void grantTablePermission(String user, String table, TablePermission permission) throws AccumuloSecurityException, TableNotFoundException {
-    zkPermissionHandler.grantTablePermission(Base64.encodeBase64String(user.getBytes(UTF_8)), table, permission);
+    zkPermissionHandler.grantTablePermission(Base64.getEncoder().encodeToString(user.getBytes(UTF_8)), table, permission);
   }
 
   @Override
   public void revokeTablePermission(String user, String table, TablePermission permission) throws AccumuloSecurityException, TableNotFoundException {
-    zkPermissionHandler.revokeTablePermission(Base64.encodeBase64String(user.getBytes(UTF_8)), table, permission);
+    zkPermissionHandler.revokeTablePermission(Base64.getEncoder().encodeToString(user.getBytes(UTF_8)), table, permission);
   }
 
   @Override
   public void grantNamespacePermission(String user, String namespace, NamespacePermission permission) throws AccumuloSecurityException,
       NamespaceNotFoundException {
-    zkPermissionHandler.grantNamespacePermission(Base64.encodeBase64String(user.getBytes(UTF_8)), namespace, permission);
+    zkPermissionHandler.grantNamespacePermission(Base64.getEncoder().encodeToString(user.getBytes(UTF_8)), namespace, permission);
   }
 
   @Override
   public void revokeNamespacePermission(String user, String namespace, NamespacePermission permission) throws AccumuloSecurityException,
       NamespaceNotFoundException {
-    zkPermissionHandler.revokeNamespacePermission(Base64.encodeBase64String(user.getBytes(UTF_8)), namespace, permission);
+    zkPermissionHandler.revokeNamespacePermission(Base64.getEncoder().encodeToString(user.getBytes(UTF_8)), namespace, permission);
   }
 
   @Override
@@ -130,7 +131,7 @@
 
   @Override
   public void initUser(String user) throws AccumuloSecurityException {
-    zkPermissionHandler.initUser(Base64.encodeBase64String(user.getBytes(UTF_8)));
+    zkPermissionHandler.initUser(Base64.getEncoder().encodeToString(user.getBytes(UTF_8)));
   }
 
   @Override
@@ -140,7 +141,7 @@
 
   @Override
   public void cleanUser(String user) throws AccumuloSecurityException {
-    zkPermissionHandler.cleanUser(Base64.encodeBase64String(user.getBytes(UTF_8)));
+    zkPermissionHandler.cleanUser(Base64.getEncoder().encodeToString(user.getBytes(UTF_8)));
   }
 
 }
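The changes above replace every commons-codec `Base64.encodeBase64String(bytes)` call with the JDK 8 equivalent `Base64.getEncoder().encodeToString(bytes)`. Both produce standard RFC 4648 Base64, so the encoded user names stored in ZooKeeper are unchanged. A minimal stand-alone sketch of the new call shape (the `encodeUser` helper is hypothetical, mirroring the inline pattern used throughout this file):

```java
import static java.nio.charset.StandardCharsets.UTF_8;

import java.util.Base64;

public class Base64Migration {
  // commons-codec: Base64.encodeBase64String(user.getBytes(UTF_8))
  // JDK 8:         Base64.getEncoder().encodeToString(user.getBytes(UTF_8))
  static String encodeUser(String user) {
    return Base64.getEncoder().encodeToString(user.getBytes(UTF_8));
  }

  public static void main(String[] args) {
    // "root" encodes to "cm9vdA==" under standard Base64
    System.out.println(encodeUser("root"));
  }
}
```

Because the output is identical, existing ZooKeeper nodes keyed by the old encoding remain readable after the upgrade.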
diff --git a/server/base/src/main/java/org/apache/accumulo/server/util/DumpZookeeper.java b/server/base/src/main/java/org/apache/accumulo/server/util/DumpZookeeper.java
index e900202..5f6f704 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/util/DumpZookeeper.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/util/DumpZookeeper.java
@@ -20,9 +20,9 @@
 
 import java.io.PrintStream;
 import java.io.UnsupportedEncodingException;
+import java.util.Base64;
 
 import org.apache.accumulo.core.cli.Help;
-import org.apache.accumulo.core.util.Base64;
 import org.apache.accumulo.fate.zookeeper.IZooReaderWriter;
 import org.apache.accumulo.server.zookeeper.ZooReaderWriter;
 import org.apache.log4j.Level;
@@ -109,7 +109,7 @@
     for (int i = 0; i < data.length; i++) {
       // does this look like simple ascii?
       if (data[i] < ' ' || data[i] > '~')
-        return new Encoded("base64", Base64.encodeBase64String(data));
+        return new Encoded("base64", Base64.getEncoder().encodeToString(data));
     }
     return new Encoded(UTF_8.name(), new String(data, UTF_8));
   }
diff --git a/server/base/src/main/java/org/apache/accumulo/server/util/FileUtil.java b/server/base/src/main/java/org/apache/accumulo/server/util/FileUtil.java
index a686bae..9205683 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/util/FileUtil.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/util/FileUtil.java
@@ -24,6 +24,7 @@
 import java.util.List;
 import java.util.Map;
 import java.util.Map.Entry;
+import java.util.Optional;
 import java.util.Random;
 import java.util.Set;
 import java.util.SortedMap;
@@ -57,8 +58,6 @@
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-import com.google.common.base.Optional;
-
 public class FileUtil {
 
   public static class FileInfo {
@@ -82,7 +81,7 @@
   private static final Logger log = LoggerFactory.getLogger(FileUtil.class);
 
   private static Path createTmpDir(AccumuloConfiguration acuConf, VolumeManager fs) throws IOException {
-    String accumuloDir = fs.choose(Optional.<String> absent(), ServerConstants.getBaseUris());
+    String accumuloDir = fs.choose(Optional.<String> empty(), ServerConstants.getBaseUris());
 
     Path result = null;
     while (result == null) {
@@ -230,7 +229,7 @@
         return .5;
       }
 
-      List<SortedKeyValueIterator<Key,Value>> iters = new ArrayList<SortedKeyValueIterator<Key,Value>>(readers);
+      List<SortedKeyValueIterator<Key,Value>> iters = new ArrayList<>(readers);
       MultiIterator mmfi = new MultiIterator(iters, true);
 
       // skip the prevendrow
@@ -311,7 +310,7 @@
         throw new IOException("Failed to find mid point, no entries between " + prevEndRow + " and " + endRow + " for " + mapFiles);
       }
 
-      List<SortedKeyValueIterator<Key,Value>> iters = new ArrayList<SortedKeyValueIterator<Key,Value>>(readers);
+      List<SortedKeyValueIterator<Key,Value>> iters = new ArrayList<>(readers);
       MultiIterator mmfi = new MultiIterator(iters, true);
 
       // skip the prevendrow
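The FileUtil hunks make two independent changes: Guava's `Optional.<String> absent()` becomes `java.util.Optional.<String> empty()`, and explicit generic arguments on `new ArrayList<...>` collapse to the diamond operator. The mapping between the two Optional APIs is mechanical (`absent()` → `empty()`, `fromNullable()` → `ofNullable()`); a sketch with a hypothetical `choose` helper standing in for `VolumeManager.choose`, which here is assumed to take an `Optional<String>` hint:

```java
import java.util.Optional;

public class OptionalMigration {
  // Stand-in for VolumeManager.choose(Optional<String>, String[]):
  // returns the hinted URI if present, otherwise the first base URI.
  static String choose(Optional<String> hint, String... baseUris) {
    return hint.orElse(baseUris[0]);
  }

  public static void main(String[] args) {
    // was: fs.choose(Optional.<String> absent(), ...) with Guava
    System.out.println(choose(Optional.empty(), "hdfs://a", "hdfs://b"));
  }
}
```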
diff --git a/server/base/src/main/java/org/apache/accumulo/server/util/MetadataTableUtil.java b/server/base/src/main/java/org/apache/accumulo/server/util/MetadataTableUtil.java
index b38083f..872febc 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/util/MetadataTableUtil.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/util/MetadataTableUtil.java
@@ -31,6 +31,7 @@
 import java.util.List;
 import java.util.Map;
 import java.util.Map.Entry;
+import java.util.Optional;
 import java.util.Set;
 import java.util.SortedMap;
 import java.util.TreeMap;
@@ -96,7 +97,6 @@
 import org.slf4j.LoggerFactory;
 
 import com.google.common.annotations.VisibleForTesting;
-import com.google.common.base.Optional;
 
 /**
  * provides a reference to the metadata table for updates by tablet servers
@@ -543,7 +543,7 @@
       }
     }
 
-    return new Pair<List<LogEntry>,SortedMap<FileRef,DataFileValue>>(result, sizes);
+    return new Pair<>(result, sizes);
   }
 
   public static List<LogEntry> getLogEntries(ClientContext context, KeyExtent extent) throws IOException, KeeperException, InterruptedException {
diff --git a/server/base/src/main/java/org/apache/accumulo/server/util/RandomizeVolumes.java b/server/base/src/main/java/org/apache/accumulo/server/util/RandomizeVolumes.java
index 907dadd..a92087c 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/util/RandomizeVolumes.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/util/RandomizeVolumes.java
@@ -21,6 +21,7 @@
 
 import java.io.IOException;
 import java.util.Map.Entry;
+import java.util.Optional;
 import java.util.concurrent.TimeUnit;
 
 import org.apache.accumulo.core.cli.ClientOnRequiredTable;
@@ -48,8 +49,6 @@
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-import com.google.common.base.Optional;
-
 public class RandomizeVolumes {
   private static final Logger log = LoggerFactory.getLogger(RandomizeVolumes.class);
 
diff --git a/server/base/src/main/java/org/apache/accumulo/server/util/RestoreZookeeper.java b/server/base/src/main/java/org/apache/accumulo/server/util/RestoreZookeeper.java
index 8da1ce9..5599100 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/util/RestoreZookeeper.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/util/RestoreZookeeper.java
@@ -20,13 +20,13 @@
 
 import java.io.FileInputStream;
 import java.io.InputStream;
+import java.util.Base64;
 import java.util.Stack;
 
 import javax.xml.parsers.SAXParser;
 import javax.xml.parsers.SAXParserFactory;
 
 import org.apache.accumulo.core.cli.Help;
-import org.apache.accumulo.core.util.Base64;
 import org.apache.accumulo.fate.zookeeper.IZooReaderWriter;
 import org.apache.accumulo.fate.zookeeper.ZooUtil.NodeExistsPolicy;
 import org.apache.accumulo.server.zookeeper.ZooReaderWriter;
@@ -85,7 +85,7 @@
     private void create(String path, String value, String encoding) {
       byte[] data = value.getBytes(UTF_8);
       if ("base64".equals(encoding))
-        data = Base64.decodeBase64(data);
+        data = Base64.getDecoder().decode(data);
       try {
         try {
           zk.putPersistentData(path, data, overwrite ? NodeExistsPolicy.OVERWRITE : NodeExistsPolicy.FAIL);
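The decode side of the migration in `RestoreZookeeper.create()` swaps commons-codec `Base64.decodeBase64(data)` for `Base64.getDecoder().decode(data)`. One behavioral caveat worth noting: the JDK's basic decoder is strict and throws `IllegalArgumentException` on characters outside the Base64 alphabet, whereas commons-codec silently skipped them. A minimal round-trip sketch:

```java
import static java.nio.charset.StandardCharsets.UTF_8;

import java.util.Base64;

public class Base64DecodeMigration {
  // commons-codec: Base64.decodeBase64(data)
  // JDK 8:         Base64.getDecoder().decode(data)  (strict on bad input)
  static byte[] decode(byte[] data) {
    return Base64.getDecoder().decode(data);
  }

  public static void main(String[] args) {
    byte[] decoded = decode("cm9vdA==".getBytes(UTF_8));
    System.out.println(new String(decoded, UTF_8));
  }
}
```

Data written by the old encoder decodes unchanged; only malformed input is treated differently.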
diff --git a/server/base/src/test/java/org/apache/accumulo/server/conf/NamespaceConfigurationTest.java b/server/base/src/test/java/org/apache/accumulo/server/conf/NamespaceConfigurationTest.java
index 071e9c0..c0e4bc6 100644
--- a/server/base/src/test/java/org/apache/accumulo/server/conf/NamespaceConfigurationTest.java
+++ b/server/base/src/test/java/org/apache/accumulo/server/conf/NamespaceConfigurationTest.java
@@ -32,6 +32,7 @@
 import java.util.List;
 import java.util.Map;
 import java.util.UUID;
+import java.util.function.Predicate;
 
 import org.apache.accumulo.core.Constants;
 import org.apache.accumulo.core.client.Instance;
@@ -45,9 +46,6 @@
 import org.junit.Before;
 import org.junit.Test;
 
-import com.google.common.base.Predicate;
-import com.google.common.base.Predicates;
-
 public class NamespaceConfigurationTest {
   private static final String NSID = "namespace";
   private static final String ZOOKEEPERS = "localhost";
@@ -117,7 +115,7 @@
 
   @Test
   public void testGetProperties() {
-    Predicate<String> all = Predicates.alwaysTrue();
+    Predicate<String> all = x -> true;
     Map<String,String> props = new java.util.HashMap<>();
     parent.getProperties(props, all);
     replay(parent);
diff --git a/server/base/src/test/java/org/apache/accumulo/server/conf/TableConfigurationTest.java b/server/base/src/test/java/org/apache/accumulo/server/conf/TableConfigurationTest.java
index 34d6905..53a56c6 100644
--- a/server/base/src/test/java/org/apache/accumulo/server/conf/TableConfigurationTest.java
+++ b/server/base/src/test/java/org/apache/accumulo/server/conf/TableConfigurationTest.java
@@ -30,6 +30,7 @@
 import java.util.List;
 import java.util.Map;
 import java.util.UUID;
+import java.util.function.Predicate;
 
 import org.apache.accumulo.core.Constants;
 import org.apache.accumulo.core.client.Instance;
@@ -41,9 +42,6 @@
 import org.junit.Before;
 import org.junit.Test;
 
-import com.google.common.base.Predicate;
-import com.google.common.base.Predicates;
-
 public class TableConfigurationTest {
   private static final String TID = "table";
   private static final String ZOOKEEPERS = "localhost";
@@ -101,7 +99,7 @@
 
   @Test
   public void testGetProperties() {
-    Predicate<String> all = Predicates.alwaysTrue();
+    Predicate<String> all = x -> true;
     Map<String,String> props = new java.util.HashMap<>();
     parent.getProperties(props, all);
     replay(parent);
diff --git a/server/base/src/test/java/org/apache/accumulo/server/conf/ZooCachePropertyAccessorTest.java b/server/base/src/test/java/org/apache/accumulo/server/conf/ZooCachePropertyAccessorTest.java
index 9bd7b90..7a7292d 100644
--- a/server/base/src/test/java/org/apache/accumulo/server/conf/ZooCachePropertyAccessorTest.java
+++ b/server/base/src/test/java/org/apache/accumulo/server/conf/ZooCachePropertyAccessorTest.java
@@ -27,6 +27,7 @@
 
 import java.util.List;
 import java.util.Map;
+import java.util.function.Predicate;
 
 import org.apache.accumulo.core.conf.AccumuloConfiguration;
 import org.apache.accumulo.core.conf.Property;
@@ -34,8 +35,6 @@
 import org.junit.Before;
 import org.junit.Test;
 
-import com.google.common.base.Predicate;
-
 public class ZooCachePropertyAccessorTest {
   private static final String PATH = "/root/path/to/props";
   private static final Property PROP = Property.INSTANCE_SECRET;
@@ -120,8 +119,8 @@
     expect(zc.get(PATH + "/" + child1)).andReturn(VALUE_BYTES);
     expect(zc.get(PATH + "/" + child2)).andReturn(null);
     replay(zc);
-    expect(filter.apply(child1)).andReturn(true);
-    expect(filter.apply(child2)).andReturn(true);
+    expect(filter.test(child1)).andReturn(true);
+    expect(filter.test(child2)).andReturn(true);
     replay(filter);
 
     a.getProperties(props, PATH, filter, parent, null);
@@ -158,7 +157,7 @@
     children.add(child1);
     expect(zc.getChildren(PATH)).andReturn(children);
     replay(zc);
-    expect(filter.apply(child1)).andReturn(false);
+    expect(filter.test(child1)).andReturn(false);
     replay(filter);
 
     a.getProperties(props, PATH, filter, parent, null);
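The configuration-test hunks above all follow the same recipe: `com.google.common.base.Predicate` becomes `java.util.function.Predicate`, call sites change from `filter.apply(x)` to `filter.test(x)`, and Guava's `Predicates.alwaysTrue()` becomes the lambda `x -> true`. A self-contained sketch of the new shape (the `countMatching` helper is illustrative, not part of Accumulo):

```java
import java.util.Arrays;
import java.util.function.Predicate;

public class PredicateMigration {
  // Counts keys accepted by the filter, using the JDK Predicate contract.
  static int countMatching(Iterable<String> keys, Predicate<String> filter) {
    int n = 0;
    for (String k : keys)
      if (filter.test(k)) // was filter.apply(k) with Guava
        n++;
    return n;
  }

  public static void main(String[] args) {
    Predicate<String> all = x -> true; // was Predicates.alwaysTrue()
    System.out.println(countMatching(Arrays.asList("a", "b", "c"), all));
  }
}
```

The same `apply` → `test` rename drives the EasyMock expectation changes in `ZooCachePropertyAccessorTest` above.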
diff --git a/server/base/src/test/java/org/apache/accumulo/server/fs/VolumeManagerImplTest.java b/server/base/src/test/java/org/apache/accumulo/server/fs/VolumeManagerImplTest.java
index 3bf207a..8a8d8bf 100644
--- a/server/base/src/test/java/org/apache/accumulo/server/fs/VolumeManagerImplTest.java
+++ b/server/base/src/test/java/org/apache/accumulo/server/fs/VolumeManagerImplTest.java
@@ -18,6 +18,7 @@
 
 import java.util.Arrays;
 import java.util.List;
+import java.util.Optional;
 
 import org.apache.accumulo.core.conf.ConfigurationCopy;
 import org.apache.accumulo.core.conf.Property;
@@ -28,8 +29,6 @@
 import org.junit.Before;
 import org.junit.Test;
 
-import com.google.common.base.Optional;
-
 /**
  *
  */
diff --git a/server/base/src/test/java/org/apache/accumulo/server/init/InitializeTest.java b/server/base/src/test/java/org/apache/accumulo/server/init/InitializeTest.java
index cb34fb9..1f915c0 100644
--- a/server/base/src/test/java/org/apache/accumulo/server/init/InitializeTest.java
+++ b/server/base/src/test/java/org/apache/accumulo/server/init/InitializeTest.java
@@ -41,6 +41,11 @@
  * This test is not thread-safe.
  */
 public class InitializeTest {
+  @SuppressWarnings("deprecation")
+  private static Property INSTANCE_DFS_DIR = Property.INSTANCE_DFS_DIR;
+  @SuppressWarnings("deprecation")
+  private static Property INSTANCE_DFS_URI = Property.INSTANCE_DFS_URI;
+
   private Configuration conf;
   private VolumeManager fs;
   private SiteConfiguration sconf;
@@ -77,10 +82,9 @@
     assertTrue(Initialize.isInitialized(fs));
   }
 
-  @SuppressWarnings("deprecation")
   @Test
   public void testCheckInit_NoZK() throws Exception {
-    expect(sconf.get(Property.INSTANCE_DFS_URI)).andReturn("hdfs://foo");
+    expect(sconf.get(INSTANCE_DFS_URI)).andReturn("hdfs://foo");
     expectLastCall().anyTimes();
     expect(sconf.get(Property.INSTANCE_ZK_HOST)).andReturn("zk1");
     replay(sconf);
@@ -90,12 +94,11 @@
     assertFalse(Initialize.checkInit(conf, fs, sconf));
   }
 
-  @SuppressWarnings("deprecation")
   @Test
   public void testCheckInit_AlreadyInit() throws Exception {
-    expect(sconf.get(Property.INSTANCE_DFS_URI)).andReturn("hdfs://foo");
+    expect(sconf.get(INSTANCE_DFS_URI)).andReturn("hdfs://foo");
     expectLastCall().anyTimes();
-    expect(sconf.get(Property.INSTANCE_DFS_DIR)).andReturn("/bar");
+    expect(sconf.get(INSTANCE_DFS_DIR)).andReturn("/bar");
     expect(sconf.get(Property.INSTANCE_VOLUMES)).andReturn("");
     expect(sconf.get(Property.INSTANCE_ZK_HOST)).andReturn("zk1");
     expect(sconf.get(Property.INSTANCE_SECRET)).andReturn(Property.INSTANCE_SECRET.getDefaultValue());
@@ -109,13 +112,12 @@
   }
 
   // Cannot test, need to mock static FileSystem.getDefaultUri()
-  @SuppressWarnings("deprecation")
   @Ignore
   @Test
   public void testCheckInit_AlreadyInit_DefaultUri() throws Exception {
-    expect(sconf.get(Property.INSTANCE_DFS_URI)).andReturn("");
+    expect(sconf.get(INSTANCE_DFS_URI)).andReturn("");
     expectLastCall().anyTimes();
-    expect(sconf.get(Property.INSTANCE_DFS_DIR)).andReturn("/bar");
+    expect(sconf.get(INSTANCE_DFS_DIR)).andReturn("/bar");
     expect(sconf.get(Property.INSTANCE_ZK_HOST)).andReturn("zk1");
     expect(sconf.get(Property.INSTANCE_SECRET)).andReturn(Property.INSTANCE_SECRET.getDefaultValue());
     replay(sconf);
@@ -128,10 +130,9 @@
     assertFalse(Initialize.checkInit(conf, fs, sconf));
   }
 
-  @SuppressWarnings("deprecation")
   @Test(expected = IOException.class)
   public void testCheckInit_FSException() throws Exception {
-    expect(sconf.get(Property.INSTANCE_DFS_URI)).andReturn("hdfs://foo");
+    expect(sconf.get(INSTANCE_DFS_URI)).andReturn("hdfs://foo");
     expectLastCall().anyTimes();
     expect(sconf.get(Property.INSTANCE_ZK_HOST)).andReturn("zk1");
     expect(sconf.get(Property.INSTANCE_SECRET)).andReturn(Property.INSTANCE_SECRET.getDefaultValue());
@@ -144,10 +145,9 @@
     Initialize.checkInit(conf, fs, sconf);
   }
 
-  @SuppressWarnings("deprecation")
   @Test
   public void testCheckInit_OK() throws Exception {
-    expect(sconf.get(Property.INSTANCE_DFS_URI)).andReturn("hdfs://foo");
+    expect(sconf.get(INSTANCE_DFS_URI)).andReturn("hdfs://foo");
     expectLastCall().anyTimes();
     expect(sconf.get(Property.INSTANCE_ZK_HOST)).andReturn("zk1");
     expect(sconf.get(Property.INSTANCE_SECRET)).andReturn(Property.INSTANCE_SECRET.getDefaultValue());
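The `InitializeTest` hunks hoist `@SuppressWarnings("deprecation")` off of individual test methods and onto two private fields that alias the deprecated `Property` constants. This narrows the suppression to the single deprecated reference instead of blanketing whole test bodies, so new deprecation warnings inside those tests still surface. A toy sketch of the pattern (the `legacyValue` method is a hypothetical stand-in for a deprecated constant):

```java
public class NarrowSuppression {
  // Suppression scoped to the one field that touches the deprecated API,
  // rather than annotating every method that uses LEGACY.
  @SuppressWarnings("deprecation")
  private static final String LEGACY = legacyValue();

  @Deprecated
  static String legacyValue() {
    return "legacy";
  }

  public static void main(String[] args) {
    System.out.println(LEGACY);
  }
}
```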
diff --git a/server/base/src/test/java/org/apache/accumulo/server/master/balancer/BaseHostRegexTableLoadBalancerTest.java b/server/base/src/test/java/org/apache/accumulo/server/master/balancer/BaseHostRegexTableLoadBalancerTest.java
index 7738c3a..b887f29 100644
--- a/server/base/src/test/java/org/apache/accumulo/server/master/balancer/BaseHostRegexTableLoadBalancerTest.java
+++ b/server/base/src/test/java/org/apache/accumulo/server/master/balancer/BaseHostRegexTableLoadBalancerTest.java
@@ -17,7 +17,6 @@
 package org.apache.accumulo.server.master.balancer;
 
 import java.net.UnknownHostException;
-import java.nio.ByteBuffer;
 import java.util.ArrayList;
 import java.util.HashMap;
 import java.util.List;
@@ -25,6 +24,7 @@
 import java.util.Map.Entry;
 import java.util.SortedMap;
 import java.util.TreeMap;
+import java.util.function.Predicate;
 
 import org.apache.accumulo.core.client.AccumuloException;
 import org.apache.accumulo.core.client.AccumuloSecurityException;
@@ -45,8 +45,6 @@
 import org.apache.hadoop.io.Text;
 import org.easymock.EasyMock;
 
-import com.google.common.base.Predicate;
-
 public abstract class BaseHostRegexTableLoadBalancerTest extends HostRegexTableLoadBalancer {
 
   protected static class TestInstance implements Instance {
@@ -81,34 +79,6 @@
       return 30;
     }
 
-    @Deprecated
-    @Override
-    public Connector getConnector(String user, byte[] pass) throws AccumuloException, AccumuloSecurityException {
-      throw new UnsupportedOperationException();
-    }
-
-    @Deprecated
-    @Override
-    public Connector getConnector(String user, ByteBuffer pass) throws AccumuloException, AccumuloSecurityException {
-      throw new UnsupportedOperationException();
-    }
-
-    @Deprecated
-    @Override
-    public Connector getConnector(String user, CharSequence pass) throws AccumuloException, AccumuloSecurityException {
-      throw new UnsupportedOperationException();
-    }
-
-    @Deprecated
-    @Override
-    public AccumuloConfiguration getConfiguration() {
-      throw new UnsupportedOperationException();
-    }
-
-    @Deprecated
-    @Override
-    public void setConfiguration(AccumuloConfiguration conf) {}
-
     @Override
     public Connector getConnector(String principal, AuthenticationToken token) throws AccumuloException, AccumuloSecurityException {
       throw new UnsupportedOperationException();
@@ -164,7 +134,7 @@
         @Override
         public void getProperties(Map<String,String> props, Predicate<String> filter) {
           for (Entry<String,String> e : DEFAULT_TABLE_PROPERTIES.entrySet()) {
-            if (filter.apply(e.getKey())) {
+            if (filter.test(e.getKey())) {
               props.put(e.getKey(), e.getValue());
             }
           }
diff --git a/server/base/src/test/java/org/apache/accumulo/server/master/balancer/GroupBalancerTest.java b/server/base/src/test/java/org/apache/accumulo/server/master/balancer/GroupBalancerTest.java
index f6c4e0d..24d8fe2 100644
--- a/server/base/src/test/java/org/apache/accumulo/server/master/balancer/GroupBalancerTest.java
+++ b/server/base/src/test/java/org/apache/accumulo/server/master/balancer/GroupBalancerTest.java
@@ -27,6 +27,7 @@
 import java.util.Set;
 import java.util.SortedMap;
 import java.util.TreeMap;
+import java.util.function.Function;
 
 import org.apache.accumulo.core.data.impl.KeyExtent;
 import org.apache.accumulo.core.master.thrift.TabletServerStatus;
@@ -38,7 +39,6 @@
 import org.junit.Assert;
 import org.junit.Test;
 
-import com.google.common.base.Function;
 import com.google.common.collect.Iterables;
 
 public class GroupBalancerTest {
@@ -80,14 +80,7 @@
 
         @Override
         protected Iterable<Pair<KeyExtent,Location>> getLocationProvider() {
-          return Iterables.transform(tabletLocs.entrySet(), new Function<Map.Entry<KeyExtent,TServerInstance>,Pair<KeyExtent,Location>>() {
-
-            @Override
-            public Pair<KeyExtent,Location> apply(final Entry<KeyExtent,TServerInstance> input) {
-              return new Pair<>(input.getKey(), new Location(input.getValue()));
-            }
-          });
-
+          return Iterables.transform(tabletLocs.entrySet(), input -> new Pair<>(input.getKey(), new Location(input.getValue())));
         }
 
         @Override
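The `GroupBalancerTest` change collapses a multi-line anonymous `com.google.common.base.Function` into a lambda. Guava's `Function` is a single-method interface, so `Iterables.transform(it, input -> ...)` compiles unchanged on Java 8. A Guava-free sketch of the same transformation shape, using `java.util.function.Function` and streams (the `transform` helper is illustrative):

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Function;
import java.util.stream.Collectors;

public class AnonymousToLambda {
  // Same shape as Iterables.transform: map each element through f.
  static List<String> transform(List<Integer> in, Function<Integer, String> f) {
    return in.stream().map(f).collect(Collectors.toList());
  }

  public static void main(String[] args) {
    // was: new Function<Integer,String>() { public String apply(Integer i) {...} }
    System.out.println(transform(Arrays.asList(1, 2), i -> "v" + i));
  }
}
```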
diff --git a/server/base/src/test/java/org/apache/accumulo/server/master/balancer/HostRegexTableLoadBalancerTest.java b/server/base/src/test/java/org/apache/accumulo/server/master/balancer/HostRegexTableLoadBalancerTest.java
index c0ccc48..77b52a9 100644
--- a/server/base/src/test/java/org/apache/accumulo/server/master/balancer/HostRegexTableLoadBalancerTest.java
+++ b/server/base/src/test/java/org/apache/accumulo/server/master/balancer/HostRegexTableLoadBalancerTest.java
@@ -25,6 +25,7 @@
 import java.util.Map.Entry;
 import java.util.Set;
 import java.util.SortedMap;
+import java.util.function.Predicate;
 import java.util.regex.Pattern;
 
 import org.apache.accumulo.core.client.impl.thrift.ThriftSecurityException;
@@ -44,8 +45,6 @@
 import org.junit.Assert;
 import org.junit.Test;
 
-import com.google.common.base.Predicate;
-
 public class HostRegexTableLoadBalancerTest extends BaseHostRegexTableLoadBalancerTest {
 
   @Test
@@ -131,7 +130,7 @@
           @Override
           public void getProperties(Map<String,String> props, Predicate<String> filter) {
             for (Entry<String,String> e : tableProperties.entrySet()) {
-              if (filter.apply(e.getKey())) {
+              if (filter.test(e.getKey())) {
                 props.put(e.getKey(), e.getValue());
               }
             }
@@ -203,7 +202,7 @@
           @Override
           public void getProperties(Map<String,String> props, Predicate<String> filter) {
             for (Entry<String,String> e : tableProperties.entrySet()) {
-              if (filter.apply(e.getKey())) {
+              if (filter.test(e.getKey())) {
                 props.put(e.getKey(), e.getValue());
               }
             }
diff --git a/server/base/src/test/java/org/apache/accumulo/server/rpc/TCredentialsUpdatingInvocationHandlerTest.java b/server/base/src/test/java/org/apache/accumulo/server/rpc/TCredentialsUpdatingInvocationHandlerTest.java
index 52eee25..1b87a9e 100644
--- a/server/base/src/test/java/org/apache/accumulo/server/rpc/TCredentialsUpdatingInvocationHandlerTest.java
+++ b/server/base/src/test/java/org/apache/accumulo/server/rpc/TCredentialsUpdatingInvocationHandlerTest.java
@@ -20,6 +20,7 @@
 import java.util.Map;
 import java.util.UUID;
 import java.util.concurrent.ConcurrentHashMap;
+import java.util.function.Predicate;
 
 import org.apache.accumulo.core.client.impl.thrift.ThriftSecurityException;
 import org.apache.accumulo.core.client.security.tokens.AuthenticationToken;
@@ -35,8 +36,6 @@
 import org.junit.Before;
 import org.junit.Test;
 
-import com.google.common.base.Predicate;
-
 public class TCredentialsUpdatingInvocationHandlerTest {
   private static final DefaultConfiguration DEFAULT_CONFIG = DefaultConfiguration.getInstance();
 
@@ -117,18 +116,6 @@
     proxy.updateArgs(new Object[] {new Object(), tcreds});
   }
 
-  @SuppressWarnings("deprecation")
-  @Test
-  public void testAllowedAnyImpersonationForAnyUser() throws Exception {
-    final String proxyServer = "proxy";
-    cc.set(Property.INSTANCE_RPC_SASL_PROXYUSERS.getKey() + proxyServer + ".users", "*");
-    cc.set(Property.INSTANCE_RPC_SASL_PROXYUSERS.getKey() + proxyServer + ".hosts", "*");
-    proxy = new TCredentialsUpdatingInvocationHandler<>(new Object(), conf);
-    TCredentials tcreds = new TCredentials("client", KerberosToken.class.getName(), ByteBuffer.allocate(0), UUID.randomUUID().toString());
-    UGIAssumingProcessor.rpcPrincipal.set(proxyServer);
-    proxy.updateArgs(new Object[] {new Object(), tcreds});
-  }
-
   @Test
   public void testAllowedAnyImpersonationForAnyUserNewConfig() throws Exception {
     final String proxyServer = "proxy";
@@ -140,20 +127,6 @@
     proxy.updateArgs(new Object[] {new Object(), tcreds});
   }
 
-  @SuppressWarnings("deprecation")
-  @Test
-  public void testAllowedImpersonationForSpecificUsers() throws Exception {
-    final String proxyServer = "proxy";
-    cc.set(Property.INSTANCE_RPC_SASL_PROXYUSERS.getKey() + proxyServer + ".users", "client1,client2");
-    cc.set(Property.INSTANCE_RPC_SASL_PROXYUSERS.getKey() + proxyServer + ".hosts", "*");
-    proxy = new TCredentialsUpdatingInvocationHandler<>(new Object(), conf);
-    TCredentials tcreds = new TCredentials("client1", KerberosToken.class.getName(), ByteBuffer.allocate(0), UUID.randomUUID().toString());
-    UGIAssumingProcessor.rpcPrincipal.set(proxyServer);
-    proxy.updateArgs(new Object[] {new Object(), tcreds});
-    tcreds = new TCredentials("client2", KerberosToken.class.getName(), ByteBuffer.allocate(0), UUID.randomUUID().toString());
-    proxy.updateArgs(new Object[] {new Object(), tcreds});
-  }
-
   @Test
   public void testAllowedImpersonationForSpecificUsersNewConfig() throws Exception {
     final String proxyServer = "proxy";
@@ -167,19 +140,6 @@
     proxy.updateArgs(new Object[] {new Object(), tcreds});
   }
 
-  @SuppressWarnings("deprecation")
-  @Test(expected = ThriftSecurityException.class)
-  public void testDisallowedImpersonationForUser() throws Exception {
-    final String proxyServer = "proxy";
-    // let "otherproxy" impersonate, but not "proxy"
-    cc.set(Property.INSTANCE_RPC_SASL_PROXYUSERS.getKey() + "otherproxy" + ".users", "*");
-    cc.set(Property.INSTANCE_RPC_SASL_PROXYUSERS.getKey() + "otherproxy" + ".hosts", "*");
-    proxy = new TCredentialsUpdatingInvocationHandler<>(new Object(), conf);
-    TCredentials tcreds = new TCredentials("client", KerberosToken.class.getName(), ByteBuffer.allocate(0), UUID.randomUUID().toString());
-    UGIAssumingProcessor.rpcPrincipal.set(proxyServer);
-    proxy.updateArgs(new Object[] {new Object(), tcreds});
-  }
-
   @Test(expected = ThriftSecurityException.class)
   public void testDisallowedImpersonationForUserNewConfig() throws Exception {
     final String proxyServer = "proxy";
@@ -192,21 +152,6 @@
     proxy.updateArgs(new Object[] {new Object(), tcreds});
   }
 
-  @SuppressWarnings("deprecation")
-  @Test(expected = ThriftSecurityException.class)
-  public void testDisallowedImpersonationForMultipleUsers() throws Exception {
-    final String proxyServer = "proxy";
-    // let "otherproxy" impersonate, but not "proxy"
-    cc.set(Property.INSTANCE_RPC_SASL_PROXYUSERS.getKey() + "otherproxy1" + ".users", "*");
-    cc.set(Property.INSTANCE_RPC_SASL_PROXYUSERS.getKey() + "otherproxy1" + ".hosts", "*");
-    cc.set(Property.INSTANCE_RPC_SASL_PROXYUSERS.getKey() + "otherproxy2" + ".users", "client1,client2");
-    cc.set(Property.INSTANCE_RPC_SASL_PROXYUSERS.getKey() + "otherproxy2" + ".hosts", "*");
-    proxy = new TCredentialsUpdatingInvocationHandler<>(new Object(), conf);
-    TCredentials tcreds = new TCredentials("client1", KerberosToken.class.getName(), ByteBuffer.allocate(0), UUID.randomUUID().toString());
-    UGIAssumingProcessor.rpcPrincipal.set(proxyServer);
-    proxy.updateArgs(new Object[] {new Object(), tcreds});
-  }
-
   @Test(expected = ThriftSecurityException.class)
   public void testDisallowedImpersonationForMultipleUsersNewConfig() throws Exception {
     final String proxyServer = "proxy";
@@ -219,19 +164,6 @@
     proxy.updateArgs(new Object[] {new Object(), tcreds});
   }
 
-  @SuppressWarnings("deprecation")
-  @Test
-  public void testAllowedImpersonationFromSpecificHost() throws Exception {
-    final String proxyServer = "proxy", client = "client", host = "host.domain.com";
-    cc.set(Property.INSTANCE_RPC_SASL_PROXYUSERS.getKey() + proxyServer + ".users", client);
-    cc.set(Property.INSTANCE_RPC_SASL_PROXYUSERS.getKey() + proxyServer + ".hosts", host);
-    proxy = new TCredentialsUpdatingInvocationHandler<>(new Object(), conf);
-    TCredentials tcreds = new TCredentials("client", KerberosToken.class.getName(), ByteBuffer.allocate(0), UUID.randomUUID().toString());
-    UGIAssumingProcessor.rpcPrincipal.set(proxyServer);
-    TServerUtils.clientAddress.set(host);
-    proxy.updateArgs(new Object[] {new Object(), tcreds});
-  }
-
   @Test
   public void testAllowedImpersonationFromSpecificHostNewConfig() throws Exception {
     final String proxyServer = "proxy", client = "client", host = "host.domain.com";
@@ -244,20 +176,6 @@
     proxy.updateArgs(new Object[] {new Object(), tcreds});
   }
 
-  @SuppressWarnings("deprecation")
-  @Test(expected = ThriftSecurityException.class)
-  public void testDisallowedImpersonationFromSpecificHost() throws Exception {
-    final String proxyServer = "proxy", client = "client", host = "host.domain.com";
-    cc.set(Property.INSTANCE_RPC_SASL_PROXYUSERS.getKey() + proxyServer + ".users", client);
-    cc.set(Property.INSTANCE_RPC_SASL_PROXYUSERS.getKey() + proxyServer + ".hosts", host);
-    proxy = new TCredentialsUpdatingInvocationHandler<>(new Object(), conf);
-    TCredentials tcreds = new TCredentials("client", KerberosToken.class.getName(), ByteBuffer.allocate(0), UUID.randomUUID().toString());
-    UGIAssumingProcessor.rpcPrincipal.set(proxyServer);
-    // The RPC came from a different host than is allowed
-    TServerUtils.clientAddress.set("otherhost.domain.com");
-    proxy.updateArgs(new Object[] {new Object(), tcreds});
-  }
-
   @Test(expected = ThriftSecurityException.class)
   public void testDisallowedImpersonationFromSpecificHostNewConfig() throws Exception {
     final String proxyServer = "proxy", client = "client", host = "host.domain.com";
diff --git a/server/base/src/test/java/org/apache/accumulo/server/security/UserImpersonationTest.java b/server/base/src/test/java/org/apache/accumulo/server/security/UserImpersonationTest.java
index 7422db4..714a1c1 100644
--- a/server/base/src/test/java/org/apache/accumulo/server/security/UserImpersonationTest.java
+++ b/server/base/src/test/java/org/apache/accumulo/server/security/UserImpersonationTest.java
@@ -26,6 +26,7 @@
 import java.util.HashMap;
 import java.util.Map;
 import java.util.Map.Entry;
+import java.util.function.Predicate;
 
 import org.apache.accumulo.core.conf.AccumuloConfiguration;
 import org.apache.accumulo.core.conf.ConfigurationCopy;
@@ -37,7 +38,6 @@
 import org.junit.Test;
 
 import com.google.common.base.Joiner;
-import com.google.common.base.Predicate;
 import com.google.common.collect.ImmutableMap;
 
 public class UserImpersonationTest {
@@ -67,19 +67,6 @@
     };
   }
 
-  void setValidHosts(String user, String hosts) {
-    setUsersOrHosts(user, ".hosts", hosts);
-  }
-
-  void setValidUsers(String user, String users) {
-    setUsersOrHosts(user, ".users", users);
-  }
-
-  @SuppressWarnings("deprecation")
-  void setUsersOrHosts(String user, String suffix, String value) {
-    cc.set(Property.INSTANCE_RPC_SASL_PROXYUSERS.getKey() + user + suffix, value);
-  }
-
   void setValidHostsNewConfig(String user, String... hosts) {
     cc.set(Property.INSTANCE_RPC_SASL_ALLOWED_HOST_IMPERSONATION.getKey(), Joiner.on(';').join(hosts));
   }
@@ -96,23 +83,6 @@
   }
 
   @Test
-  public void testAnyUserAndHosts() {
-    String server = "server";
-    setValidHosts(server, "*");
-    setValidUsers(server, "*");
-    UserImpersonation impersonation = new UserImpersonation(conf);
-
-    UsersWithHosts uwh = impersonation.get(server);
-    assertNotNull(uwh);
-
-    assertTrue(uwh.acceptsAllHosts());
-    assertTrue(uwh.acceptsAllUsers());
-
-    assertEquals(AlwaysTrueSet.class, uwh.getHosts().getClass());
-    assertEquals(AlwaysTrueSet.class, uwh.getUsers().getClass());
-  }
-
-  @Test
   public void testAnyUserAndHostsNewConfig() {
     String server = "server";
     setValidHostsNewConfig(server, "*");
@@ -130,22 +100,6 @@
   }
 
   @Test
-  public void testNoHostByDefault() {
-    String server = "server";
-    setValidUsers(server, "*");
-    UserImpersonation impersonation = new UserImpersonation(conf);
-
-    UsersWithHosts uwh = impersonation.get(server);
-    assertNotNull(uwh);
-
-    assertFalse(uwh.acceptsAllHosts());
-    assertTrue(uwh.acceptsAllUsers());
-
-    assertNotEquals(AlwaysTrueSet.class, uwh.getHosts().getClass());
-    assertEquals(AlwaysTrueSet.class, uwh.getUsers().getClass());
-  }
-
-  @Test
   public void testNoHostByDefaultNewConfig() {
     String server = "server";
     setValidUsersNewConfig(ImmutableMap.of(server, "*"));
@@ -162,22 +116,6 @@
   }
 
   @Test
-  public void testNoUsersByDefault() {
-    String server = "server";
-    setValidHosts(server, "*");
-    UserImpersonation impersonation = new UserImpersonation(conf);
-
-    UsersWithHosts uwh = impersonation.get(server);
-    assertNotNull(uwh);
-
-    assertTrue(uwh.acceptsAllHosts());
-    assertFalse(uwh.acceptsAllUsers());
-
-    assertEquals(AlwaysTrueSet.class, uwh.getHosts().getClass());
-    assertNotEquals(AlwaysTrueSet.class, uwh.getUsers().getClass());
-  }
-
-  @Test
   public void testNoUsersByDefaultNewConfig() {
     String server = "server";
     setValidHostsNewConfig(server, "*");
@@ -188,29 +126,6 @@
   }
 
   @Test
-  public void testSingleUserAndHost() {
-    String server = "server", host = "single_host.domain.com", client = "single_client";
-    setValidHosts(server, host);
-    setValidUsers(server, client);
-    UserImpersonation impersonation = new UserImpersonation(conf);
-
-    UsersWithHosts uwh = impersonation.get(server);
-    assertNotNull(uwh);
-
-    assertFalse(uwh.acceptsAllHosts());
-    assertFalse(uwh.acceptsAllUsers());
-
-    assertNotEquals(AlwaysTrueSet.class, uwh.getHosts().getClass());
-    assertNotEquals(AlwaysTrueSet.class, uwh.getUsers().getClass());
-
-    assertTrue(uwh.getUsers().contains(client));
-    assertTrue(uwh.getHosts().contains(host));
-
-    assertFalse(uwh.getUsers().contains("some_other_user"));
-    assertFalse(uwh.getHosts().contains("other_host.domain.com"));
-  }
-
-  @Test
   public void testSingleUserAndHostNewConfig() {
     String server = "server", host = "single_host.domain.com", client = "single_client";
     setValidHostsNewConfig(server, host);
@@ -234,28 +149,6 @@
   }
 
   @Test
-  public void testMultipleExplicitUsers() {
-    String server = "server", client1 = "client1", client2 = "client2", client3 = "client3";
-    setValidHosts(server, "*");
-    setValidUsers(server, Joiner.on(',').join(client1, client2, client3));
-    UserImpersonation impersonation = new UserImpersonation(conf);
-
-    UsersWithHosts uwh = impersonation.get(server);
-    assertNotNull(uwh);
-
-    assertTrue(uwh.acceptsAllHosts());
-    assertFalse(uwh.acceptsAllUsers());
-
-    assertEquals(AlwaysTrueSet.class, uwh.getHosts().getClass());
-    assertNotEquals(AlwaysTrueSet.class, uwh.getUsers().getClass());
-
-    assertTrue(uwh.getUsers().contains(client1));
-    assertTrue(uwh.getUsers().contains(client2));
-    assertTrue(uwh.getUsers().contains(client3));
-    assertFalse(uwh.getUsers().contains("other_client"));
-  }
-
-  @Test
   public void testMultipleExplicitUsersNewConfig() {
     String server = "server", client1 = "client1", client2 = "client2", client3 = "client3";
     setValidHostsNewConfig(server, "*");
@@ -278,28 +171,6 @@
   }
 
   @Test
-  public void testMultipleExplicitHosts() {
-    String server = "server", host1 = "host1", host2 = "host2", host3 = "host3";
-    setValidHosts(server, Joiner.on(',').join(host1, host2, host3));
-    setValidUsers(server, "*");
-    UserImpersonation impersonation = new UserImpersonation(conf);
-
-    UsersWithHosts uwh = impersonation.get(server);
-    assertNotNull(uwh);
-
-    assertFalse(uwh.acceptsAllHosts());
-    assertTrue(uwh.acceptsAllUsers());
-
-    assertNotEquals(AlwaysTrueSet.class, uwh.getHosts().getClass());
-    assertEquals(AlwaysTrueSet.class, uwh.getUsers().getClass());
-
-    assertTrue(uwh.getHosts().contains(host1));
-    assertTrue(uwh.getHosts().contains(host2));
-    assertTrue(uwh.getHosts().contains(host3));
-    assertFalse(uwh.getHosts().contains("other_host"));
-  }
-
-  @Test
   public void testMultipleExplicitHostsNewConfig() {
     String server = "server", host1 = "host1", host2 = "host2", host3 = "host3";
     setValidHostsNewConfig(server, Joiner.on(',').join(host1, host2, host3));
@@ -322,33 +193,6 @@
   }
 
   @Test
-  public void testMultipleExplicitUsersHosts() {
-    String server = "server", host1 = "host1", host2 = "host2", host3 = "host3", client1 = "client1", client2 = "client2", client3 = "client3";
-    setValidHosts(server, Joiner.on(',').join(host1, host2, host3));
-    setValidUsers(server, Joiner.on(',').join(client1, client2, client3));
-    UserImpersonation impersonation = new UserImpersonation(conf);
-
-    UsersWithHosts uwh = impersonation.get(server);
-    assertNotNull(uwh);
-
-    assertFalse(uwh.acceptsAllHosts());
-    assertFalse(uwh.acceptsAllUsers());
-
-    assertNotEquals(AlwaysTrueSet.class, uwh.getHosts().getClass());
-    assertNotEquals(AlwaysTrueSet.class, uwh.getUsers().getClass());
-
-    assertTrue(uwh.getUsers().contains(client1));
-    assertTrue(uwh.getUsers().contains(client2));
-    assertTrue(uwh.getUsers().contains(client3));
-    assertFalse(uwh.getUsers().contains("other_client"));
-
-    assertTrue(uwh.getHosts().contains(host1));
-    assertTrue(uwh.getHosts().contains(host2));
-    assertTrue(uwh.getHosts().contains(host3));
-    assertFalse(uwh.getHosts().contains("other_host"));
-  }
-
-  @Test
   public void testMultipleExplicitUsersHostsNewConfig() {
     String server = "server", host1 = "host1", host2 = "host2", host3 = "host3", client1 = "client1", client2 = "client2", client3 = "client3";
     setValidHostsNewConfig(server, Joiner.on(',').join(host1, host2, host3));
@@ -376,59 +220,6 @@
   }
 
   @Test
-  public void testMultipleAllowedImpersonators() {
-    String server1 = "server1", server2 = "server2", host1 = "host1", host2 = "host2", host3 = "host3", client1 = "client1", client2 = "client2", client3 = "client3";
-    // server1 can impersonate client1 and client2 from host1 or host2
-    setValidHosts(server1, Joiner.on(',').join(host1, host2));
-    setValidUsers(server1, Joiner.on(',').join(client1, client2));
-    // server2 can impersonate only client3 from host3
-    setValidHosts(server2, host3);
-    setValidUsers(server2, client3);
-    UserImpersonation impersonation = new UserImpersonation(conf);
-
-    UsersWithHosts uwh = impersonation.get(server1);
-    assertNotNull(uwh);
-
-    assertFalse(uwh.acceptsAllHosts());
-    assertFalse(uwh.acceptsAllUsers());
-
-    assertNotEquals(AlwaysTrueSet.class, uwh.getHosts().getClass());
-    assertNotEquals(AlwaysTrueSet.class, uwh.getUsers().getClass());
-
-    assertTrue(uwh.getUsers().contains(client1));
-    assertTrue(uwh.getUsers().contains(client2));
-    assertFalse(uwh.getUsers().contains(client3));
-    assertFalse(uwh.getUsers().contains("other_client"));
-
-    assertTrue(uwh.getHosts().contains(host1));
-    assertTrue(uwh.getHosts().contains(host2));
-    assertFalse(uwh.getHosts().contains(host3));
-    assertFalse(uwh.getHosts().contains("other_host"));
-
-    uwh = impersonation.get(server2);
-    assertNotNull(uwh);
-
-    assertFalse(uwh.acceptsAllHosts());
-    assertFalse(uwh.acceptsAllUsers());
-
-    assertNotEquals(AlwaysTrueSet.class, uwh.getHosts().getClass());
-    assertNotEquals(AlwaysTrueSet.class, uwh.getUsers().getClass());
-
-    assertFalse(uwh.getUsers().contains(client1));
-    assertFalse(uwh.getUsers().contains(client2));
-    assertTrue(uwh.getUsers().contains(client3));
-    assertFalse(uwh.getUsers().contains("other_client"));
-
-    assertFalse(uwh.getHosts().contains(host1));
-    assertFalse(uwh.getHosts().contains(host2));
-    assertTrue(uwh.getHosts().contains(host3));
-    assertFalse(uwh.getHosts().contains("other_host"));
-
-    // client3 is not allowed to impersonate anyone
-    assertNull(impersonation.get(client3));
-  }
-
-  @Test
   public void testMultipleAllowedImpersonatorsNewConfig() {
     String server1 = "server1", server2 = "server2", host1 = "host1", host2 = "host2", host3 = "host3", client1 = "client1", client2 = "client2", client3 = "client3";
     // server1 can impersonate client1 and client2 from host1 or host2
@@ -479,24 +270,6 @@
     assertNull(impersonation.get(client3));
   }
 
-  @SuppressWarnings("deprecation")
-  @Test
-  public void testSingleUser() throws Exception {
-    final String server = "server/hostname@EXAMPLE.COM", client = "client@EXAMPLE.COM";
-    cc.set(Property.INSTANCE_RPC_SASL_PROXYUSERS.getKey() + server + ".users", client);
-    cc.set(Property.INSTANCE_RPC_SASL_PROXYUSERS.getKey() + server + ".hosts", "*");
-    UserImpersonation impersonation = new UserImpersonation(conf);
-
-    UsersWithHosts uwh = impersonation.get(server);
-
-    assertNotNull(uwh);
-
-    assertTrue(uwh.acceptsAllHosts());
-    assertFalse(uwh.acceptsAllUsers());
-
-    assertTrue(uwh.getUsers().contains(client));
-  }
-
   @Test
   public void testSingleUserNewConfig() throws Exception {
     final String server = "server/hostname@EXAMPLE.COM", client = "client@EXAMPLE.COM";
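The import swap in the test above (`com.google.common.base.Predicate` to `java.util.function.Predicate`) is part of the same Guava-to-JDK migration seen throughout this patch; the JDK interface names its abstract method `test` rather than Guava's `apply`. A minimal sketch of the JDK type in use — the class name and regex here are illustrative, not from the patch:

```java
import java.util.function.Predicate;

public class PredicateMigration {
    // JDK Predicate: single abstract method test(T); Guava's equivalent was apply(T)
    public static final Predicate<String> SIMPLE_NAME = s -> s != null && s.matches("\\w+");

    public static void main(String[] args) {
        assert SIMPLE_NAME.test("client1");
        assert !SIMPLE_NAME.test("host.domain.com"); // dots are not word characters
    }
}
```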
diff --git a/server/base/src/test/java/org/apache/accumulo/server/util/FileUtilTest.java b/server/base/src/test/java/org/apache/accumulo/server/util/FileUtilTest.java
index a826acf..4292eed 100644
--- a/server/base/src/test/java/org/apache/accumulo/server/util/FileUtilTest.java
+++ b/server/base/src/test/java/org/apache/accumulo/server/util/FileUtilTest.java
@@ -28,6 +28,7 @@
 import java.util.HashMap;
 import java.util.Iterator;
 import java.util.Map;
+import java.util.function.Predicate;
 
 import org.apache.accumulo.core.conf.AccumuloConfiguration;
 import org.apache.accumulo.core.conf.DefaultConfiguration;
@@ -44,12 +45,12 @@
 import org.junit.rules.TemporaryFolder;
 import org.junit.rules.TestName;
 
-import com.google.common.base.Predicate;
-
 /**
  *
  */
 public class FileUtilTest {
+  @SuppressWarnings("deprecation")
+  private static Property INSTANCE_DFS_DIR = Property.INSTANCE_DFS_DIR;
 
   @Rule
   public TemporaryFolder tmpDir = new TemporaryFolder(new File(System.getProperty("user.dir") + "/target"));
@@ -83,7 +84,6 @@
     Assert.assertEquals("/bar", iter.next());
   }
 
-  @SuppressWarnings("deprecation")
   @Test
   public void testCleanupIndexOpWithDfsDir() throws IOException {
     // And a "unique" tmp directory for each volume
@@ -92,7 +92,7 @@
     Path tmpPath1 = new Path(tmp1.toURI());
 
     HashMap<Property,String> testProps = new HashMap<>();
-    testProps.put(Property.INSTANCE_DFS_DIR, accumuloDir.getAbsolutePath());
+    testProps.put(INSTANCE_DFS_DIR, accumuloDir.getAbsolutePath());
 
     AccumuloConfiguration testConf = new FileUtilTestConfiguration(testProps);
     VolumeManager fs = VolumeManagerImpl.getLocal(accumuloDir.getAbsolutePath());
diff --git a/server/base/src/test/java/org/apache/accumulo/server/util/TServerUtilsTest.java b/server/base/src/test/java/org/apache/accumulo/server/util/TServerUtilsTest.java
index 37d127a..fde9e6d 100644
--- a/server/base/src/test/java/org/apache/accumulo/server/util/TServerUtilsTest.java
+++ b/server/base/src/test/java/org/apache/accumulo/server/util/TServerUtilsTest.java
@@ -29,7 +29,6 @@
 import java.net.InetAddress;
 import java.net.ServerSocket;
 import java.net.UnknownHostException;
-import java.nio.ByteBuffer;
 import java.util.List;
 import java.util.concurrent.ExecutorService;
 
@@ -88,34 +87,6 @@
       return 30;
     }
 
-    @Deprecated
-    @Override
-    public Connector getConnector(String user, byte[] pass) throws AccumuloException, AccumuloSecurityException {
-      throw new UnsupportedOperationException();
-    }
-
-    @Deprecated
-    @Override
-    public Connector getConnector(String user, ByteBuffer pass) throws AccumuloException, AccumuloSecurityException {
-      throw new UnsupportedOperationException();
-    }
-
-    @Deprecated
-    @Override
-    public Connector getConnector(String user, CharSequence pass) throws AccumuloException, AccumuloSecurityException {
-      throw new UnsupportedOperationException();
-    }
-
-    @Deprecated
-    @Override
-    public AccumuloConfiguration getConfiguration() {
-      throw new UnsupportedOperationException();
-    }
-
-    @Deprecated
-    @Override
-    public void setConfiguration(AccumuloConfiguration conf) {}
-
     @Override
     public Connector getConnector(String principal, AuthenticationToken token) throws AccumuloException, AccumuloSecurityException {
       throw new UnsupportedOperationException();
diff --git a/server/gc/pom.xml b/server/gc/pom.xml
index 4aa31f7..50cdfd0 100644
--- a/server/gc/pom.xml
+++ b/server/gc/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.accumulo</groupId>
     <artifactId>accumulo-project</artifactId>
-    <version>1.8.0-SNAPSHOT</version>
+    <version>2.0.0-SNAPSHOT</version>
     <relativePath>../../pom.xml</relativePath>
   </parent>
   <artifactId>accumulo-gc</artifactId>
diff --git a/server/gc/src/main/java/org/apache/accumulo/gc/SimpleGarbageCollector.java b/server/gc/src/main/java/org/apache/accumulo/gc/SimpleGarbageCollector.java
index 1593c75..3f2c381 100644
--- a/server/gc/src/main/java/org/apache/accumulo/gc/SimpleGarbageCollector.java
+++ b/server/gc/src/main/java/org/apache/accumulo/gc/SimpleGarbageCollector.java
@@ -23,7 +23,6 @@
 import java.net.UnknownHostException;
 import java.util.Iterator;
 import java.util.List;
-import java.util.Map;
 import java.util.Map.Entry;
 import java.util.Set;
 import java.util.SortedMap;
@@ -109,7 +108,6 @@
 import org.slf4j.LoggerFactory;
 
 import com.beust.jcommander.Parameter;
-import com.google.common.base.Function;
 import com.google.common.collect.Iterators;
 import com.google.common.collect.Maps;
 import com.google.common.net.HostAndPort;
@@ -276,12 +274,7 @@
 
       scanner.setRange(MetadataSchema.BlipSection.getRange());
 
-      return Iterators.transform(scanner.iterator(), new Function<Entry<Key,Value>,String>() {
-        @Override
-        public String apply(Entry<Key,Value> entry) {
-          return entry.getKey().getRow().toString().substring(MetadataSchema.BlipSection.getRowPrefix().length());
-        }
-      });
+      return Iterators.transform(scanner.iterator(), entry -> entry.getKey().getRow().toString().substring(MetadataSchema.BlipSection.getRowPrefix().length()));
     }
 
     @Override
@@ -292,12 +285,7 @@
       TabletsSection.ServerColumnFamily.DIRECTORY_COLUMN.fetch(scanner);
       TabletIterator tabletIterator = new TabletIterator(scanner, MetadataSchema.TabletsSection.getRange(), false, true);
 
-      return Iterators.concat(Iterators.transform(tabletIterator, new Function<Map<Key,Value>,Iterator<Entry<Key,Value>>>() {
-        @Override
-        public Iterator<Entry<Key,Value>> apply(Map<Key,Value> input) {
-          return input.entrySet().iterator();
-        }
-      }));
+      return Iterators.concat(Iterators.transform(tabletIterator, input -> input.entrySet().iterator()));
     }
 
     @Override
@@ -484,21 +472,16 @@
       try {
         Scanner s = ReplicationTable.getScanner(conn);
         StatusSection.limit(s);
-        return Iterators.transform(s.iterator(), new Function<Entry<Key,Value>,Entry<String,Status>>() {
-
-          @Override
-          public Entry<String,Status> apply(Entry<Key,Value> input) {
-            String file = input.getKey().getRow().toString();
-            Status stat;
-            try {
-              stat = Status.parseFrom(input.getValue().get());
-            } catch (InvalidProtocolBufferException e) {
-              log.warn("Could not deserialize protobuf for: " + input.getKey());
-              stat = null;
-            }
-            return Maps.immutableEntry(file, stat);
+        return Iterators.transform(s.iterator(), input -> {
+          String file = input.getKey().getRow().toString();
+          Status stat;
+          try {
+            stat = Status.parseFrom(input.getValue().get());
+          } catch (InvalidProtocolBufferException e) {
+            log.warn("Could not deserialize protobuf for: " + input.getKey());
+            stat = null;
           }
-
+          return Maps.immutableEntry(file, stat);
         });
       } catch (ReplicationTableOfflineException e) {
         // No elements that we need to preclude
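The SimpleGarbageCollector hunks above replace anonymous `com.google.common.base.Function` instances with lambdas. Guava's `Function` is a single-abstract-method interface, so `Iterators.transform` accepts a lambda directly with no other change. A dependency-free sketch of the before/after shapes — the prefix value is an illustrative stand-in for `MetadataSchema.BlipSection.getRowPrefix()`, and `java.util.function.Function` stands in for Guava's:

```java
import java.util.function.Function;

public class LambdaTransform {
    // Illustrative stand-in for MetadataSchema.BlipSection.getRowPrefix()
    static final String PREFIX = "~blip/";

    // Before: anonymous inner class, as removed by the patch
    public static final Function<String, String> BEFORE = new Function<String, String>() {
        @Override
        public String apply(String row) {
            return row.substring(PREFIX.length());
        }
    };

    // After: equivalent lambda, as added by the patch
    public static final Function<String, String> AFTER = row -> row.substring(PREFIX.length());

    public static void main(String[] args) {
        assert BEFORE.apply("~blip/hdfs://nn/file").equals("hdfs://nn/file");
        assert AFTER.apply("~blip/hdfs://nn/file").equals("hdfs://nn/file");
    }
}
```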
diff --git a/server/master/pom.xml b/server/master/pom.xml
index 6af8344..51775a8 100644
--- a/server/master/pom.xml
+++ b/server/master/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.accumulo</groupId>
     <artifactId>accumulo-project</artifactId>
-    <version>1.8.0-SNAPSHOT</version>
+    <version>2.0.0-SNAPSHOT</version>
     <relativePath>../../pom.xml</relativePath>
   </parent>
   <artifactId>accumulo-master</artifactId>
diff --git a/server/master/src/main/java/org/apache/accumulo/master/FateServiceHandler.java b/server/master/src/main/java/org/apache/accumulo/master/FateServiceHandler.java
index efea076..bb1ef92 100644
--- a/server/master/src/main/java/org/apache/accumulo/master/FateServiceHandler.java
+++ b/server/master/src/main/java/org/apache/accumulo/master/FateServiceHandler.java
@@ -158,7 +158,7 @@
         String newTableName = validateTableNameArgument(arguments.get(1), tableOp, new Validator<String>() {
 
           @Override
-          public boolean apply(String argument) {
+          public boolean test(String argument) {
             // verify they are in the same namespace
             String oldNamespace = Tables.qualify(oldTableName).getFirst();
             return oldNamespace.equals(Tables.qualify(argument).getFirst());
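The `apply`-to-`test` rename above (and the matching ones in TableValidators later in this patch) is consistent with `Validator` now implementing `java.util.function.Predicate`, whose abstract method is `test`. A hypothetical minimal sketch of that shape — the `Validator` class body and the whitespace check are illustrative, not Accumulo's actual implementation:

```java
import java.util.function.Predicate;

public class ValidatorSketch {
    // Hypothetical minimal Validator built on the JDK Predicate interface
    public abstract static class Validator<T> implements Predicate<T> {
        public T validate(T argument) {
            if (!test(argument))
                throw new IllegalArgumentException("Invalid argument " + argument);
            return argument;
        }
    }

    // Illustrative check in the spirit of VALID_NAME
    public static final Validator<String> NO_WHITESPACE = new Validator<String>() {
        @Override
        public boolean test(String name) {
            return name != null && !name.contains(" ");
        }
    };

    public static void main(String[] args) {
        assert NO_WHITESPACE.test("ns.table");
        assert !NO_WHITESPACE.test("bad name");
        assert NO_WHITESPACE.validate("ns.table").equals("ns.table");
    }
}
```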
diff --git a/server/master/src/main/java/org/apache/accumulo/master/Master.java b/server/master/src/main/java/org/apache/accumulo/master/Master.java
index 9633a9d..8233673 100644
--- a/server/master/src/main/java/org/apache/accumulo/master/Master.java
+++ b/server/master/src/main/java/org/apache/accumulo/master/Master.java
@@ -29,6 +29,7 @@
 import java.util.List;
 import java.util.Map;
 import java.util.Map.Entry;
+import java.util.Optional;
 import java.util.Set;
 import java.util.SortedMap;
 import java.util.TreeMap;
@@ -161,7 +162,6 @@
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-import com.google.common.base.Optional;
 import com.google.common.collect.Iterables;
 
 /**
@@ -343,8 +343,7 @@
           zoo.putPersistentData(zooRoot + Constants.ZTABLES + "/" + id + Constants.ZTABLE_COMPACT_CANCEL_ID, zero, NodeExistsPolicy.SKIP);
         }
 
-        @SuppressWarnings("deprecation")
-        String zpath = zooRoot + Constants.ZCONFIG + "/" + Property.TSERV_WAL_SYNC_METHOD.getKey();
+        String zpath = zooRoot + Constants.ZCONFIG + "/tserver.wal.sync.method";
         // is the entire instance set to use flushing vs sync?
         boolean flushDefault = false;
         try {
@@ -358,8 +357,7 @@
         for (String id : zoo.getChildren(zooRoot + Constants.ZTABLES)) {
           log.debug("Converting table " + id + " WALog setting to Durability");
           try {
-            @SuppressWarnings("deprecation")
-            String path = zooRoot + Constants.ZTABLES + "/" + id + Constants.ZTABLE_CONF + "/" + Property.TABLE_WALOG_ENABLED.getKey();
+            String path = zooRoot + Constants.ZTABLES + "/" + id + Constants.ZTABLE_CONF + "/table.walog.enabled";
             byte[] data = zoo.getData(path, null);
             boolean useWAL = Boolean.parseBoolean(new String(data, UTF_8));
             zoo.recursiveDelete(path, NodeMissingPolicy.FAIL);
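The `Optional` import swap in Master.java above (and the matching ones in TabletGroupWatcher and the tableOps classes that follow) moves from Guava's `Optional` to `java.util.Optional`. A sketch of the renames such a migration involves, with illustrative values; the Guava equivalents are noted in comments, and none of these lines are from the patch:

```java
import java.util.Optional;

public class OptionalMigration {
    // Hypothetical helper: wrap a possibly-null tablet directory name
    public static Optional<String> tabletDir(String dir) {
        return Optional.ofNullable(dir); // Guava: Optional.fromNullable(dir)
    }

    public static void main(String[] args) {
        Optional<String> present = tabletDir("t-0001"); // Guava: Optional.of(...)
        Optional<String> absent = tabletDir(null);      // Guava: Optional.absent()

        assert present.isPresent();
        assert present.get().equals("t-0001");
        assert !absent.isPresent();
        assert absent.orElse("default").equals("default"); // Guava: absent.or("default")
    }
}
```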
diff --git a/server/master/src/main/java/org/apache/accumulo/master/TabletGroupWatcher.java b/server/master/src/main/java/org/apache/accumulo/master/TabletGroupWatcher.java
index 76fda21..9d8a1d1 100644
--- a/server/master/src/main/java/org/apache/accumulo/master/TabletGroupWatcher.java
+++ b/server/master/src/main/java/org/apache/accumulo/master/TabletGroupWatcher.java
@@ -27,6 +27,7 @@
 import java.util.List;
 import java.util.Map;
 import java.util.Map.Entry;
+import java.util.Optional;
 import java.util.Set;
 import java.util.SortedMap;
 import java.util.SortedSet;
@@ -93,7 +94,6 @@
 import org.apache.hadoop.io.Text;
 import org.apache.thrift.TException;
 
-import com.google.common.base.Optional;
 import com.google.common.collect.ImmutableSortedSet;
 import com.google.common.collect.Iterators;
 
@@ -181,8 +181,8 @@
         SortedMap<TServerInstance,TabletServerStatus> destinations = new TreeMap<>(currentTServers);
         destinations.keySet().removeAll(this.master.serversToShutdown);
 
-        List<Assignment> assignments = new ArrayList<Assignment>();
-        List<Assignment> assigned = new ArrayList<Assignment>();
+        List<Assignment> assignments = new ArrayList<>();
+        List<Assignment> assigned = new ArrayList<>();
         List<TabletLocationState> assignedToDeadServers = new ArrayList<>();
         List<TabletLocationState> suspendedToGoneServers = new ArrayList<>();
         Map<KeyExtent,TServerInstance> unassigned = new HashMap<>();
diff --git a/server/master/src/main/java/org/apache/accumulo/master/tableOps/ChooseDir.java b/server/master/src/main/java/org/apache/accumulo/master/tableOps/ChooseDir.java
index 50bf19d..ef3d0df 100644
--- a/server/master/src/main/java/org/apache/accumulo/master/tableOps/ChooseDir.java
+++ b/server/master/src/main/java/org/apache/accumulo/master/tableOps/ChooseDir.java
@@ -16,14 +16,14 @@
  */
 package org.apache.accumulo.master.tableOps;
 
+import java.util.Optional;
+
 import org.apache.accumulo.core.Constants;
 import org.apache.accumulo.fate.Repo;
 import org.apache.accumulo.master.Master;
 import org.apache.accumulo.server.ServerConstants;
 import org.apache.hadoop.fs.Path;
 
-import com.google.common.base.Optional;
-
 class ChooseDir extends MasterRepo {
   private static final long serialVersionUID = 1L;
 
diff --git a/server/master/src/main/java/org/apache/accumulo/master/tableOps/PopulateMetadataTable.java b/server/master/src/main/java/org/apache/accumulo/master/tableOps/PopulateMetadataTable.java
index 81e28b6..0c251b1 100644
--- a/server/master/src/main/java/org/apache/accumulo/master/tableOps/PopulateMetadataTable.java
+++ b/server/master/src/main/java/org/apache/accumulo/master/tableOps/PopulateMetadataTable.java
@@ -25,6 +25,7 @@
 import java.io.InputStreamReader;
 import java.util.HashMap;
 import java.util.Map;
+import java.util.Optional;
 import java.util.zip.ZipEntry;
 import java.util.zip.ZipInputStream;
 
@@ -52,8 +53,6 @@
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-import com.google.common.base.Optional;
-
 class PopulateMetadataTable extends MasterRepo {
   private static final Logger log = LoggerFactory.getLogger(PopulateMetadataTable.class);
 
diff --git a/server/master/src/main/java/org/apache/accumulo/master/tableOps/Utils.java b/server/master/src/main/java/org/apache/accumulo/master/tableOps/Utils.java
index 2baf7ac..33dbb18 100644
--- a/server/master/src/main/java/org/apache/accumulo/master/tableOps/Utils.java
+++ b/server/master/src/main/java/org/apache/accumulo/master/tableOps/Utils.java
@@ -19,6 +19,7 @@
 import static java.nio.charset.StandardCharsets.UTF_8;
 
 import java.math.BigInteger;
+import java.util.Base64;
 import java.util.concurrent.locks.Lock;
 import java.util.concurrent.locks.ReentrantLock;
 
@@ -29,7 +30,6 @@
 import org.apache.accumulo.core.client.impl.Tables;
 import org.apache.accumulo.core.client.impl.thrift.TableOperation;
 import org.apache.accumulo.core.client.impl.thrift.TableOperationExceptionType;
-import org.apache.accumulo.core.util.Base64;
 import org.apache.accumulo.core.zookeeper.ZooUtil;
 import org.apache.accumulo.fate.zookeeper.DistributedReadWriteLock;
 import org.apache.accumulo.fate.zookeeper.IZooReaderWriter;
@@ -120,7 +120,7 @@
   public static long reserveHdfsDirectory(String directory, long tid) throws KeeperException, InterruptedException {
     Instance instance = HdfsZooInstance.getInstance();
 
-    String resvPath = ZooUtil.getRoot(instance) + Constants.ZHDFS_RESERVATIONS + "/" + Base64.encodeBase64String(directory.getBytes(UTF_8));
+    String resvPath = ZooUtil.getRoot(instance) + Constants.ZHDFS_RESERVATIONS + "/" + Base64.getEncoder().encodeToString(directory.getBytes(UTF_8));
 
     IZooReaderWriter zk = ZooReaderWriter.getInstance();
 
@@ -132,7 +132,7 @@
 
   public static void unreserveHdfsDirectory(String directory, long tid) throws KeeperException, InterruptedException {
     Instance instance = HdfsZooInstance.getInstance();
-    String resvPath = ZooUtil.getRoot(instance) + Constants.ZHDFS_RESERVATIONS + "/" + Base64.encodeBase64String(directory.getBytes(UTF_8));
+    String resvPath = ZooUtil.getRoot(instance) + Constants.ZHDFS_RESERVATIONS + "/" + Base64.getEncoder().encodeToString(directory.getBytes(UTF_8));
     ZooReservation.release(ZooReaderWriter.getInstance(), resvPath, String.format("%016x", tid));
   }
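The Utils.java hunks above swap `org.apache.accumulo.core.util.Base64` for the JDK's `java.util.Base64`, whose encoder is obtained via `Base64.getEncoder()`. A sketch of the reservation-node encoding with an illustrative directory (the `ZHDFS_RESERVATIONS` ZooKeeper path prefix is omitted):

```java
import static java.nio.charset.StandardCharsets.UTF_8;

import java.util.Base64;

public class ReservationNode {
    // Same encoding call as the patch, minus the ZooKeeper path prefix
    public static String encode(String directory) {
        return Base64.getEncoder().encodeToString(directory.getBytes(UTF_8));
    }

    public static void main(String[] args) {
        assert encode("/accumulo/tables/1").equals("L2FjY3VtdWxvL3RhYmxlcy8x");
        // The node name must round-trip back to the original directory
        String dir = "/tmp/upgrade";
        assert new String(Base64.getDecoder().decode(encode(dir)), UTF_8).equals(dir);
    }
}
```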
 
diff --git a/server/master/src/main/java/org/apache/accumulo/master/util/FateAdmin.java b/server/master/src/main/java/org/apache/accumulo/master/util/FateAdmin.java
deleted file mode 100644
index a1dd303..0000000
--- a/server/master/src/main/java/org/apache/accumulo/master/util/FateAdmin.java
+++ /dev/null
@@ -1,104 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.master.util;
-
-import java.util.ArrayList;
-import java.util.LinkedHashMap;
-import java.util.List;
-import java.util.Map.Entry;
-
-import org.apache.accumulo.core.Constants;
-import org.apache.accumulo.core.cli.Help;
-import org.apache.accumulo.core.client.Instance;
-import org.apache.accumulo.core.zookeeper.ZooUtil;
-import org.apache.accumulo.fate.AdminUtil;
-import org.apache.accumulo.fate.ReadOnlyStore;
-import org.apache.accumulo.fate.ZooStore;
-import org.apache.accumulo.fate.zookeeper.IZooReaderWriter;
-import org.apache.accumulo.master.Master;
-import org.apache.accumulo.server.client.HdfsZooInstance;
-import org.apache.accumulo.server.zookeeper.ZooReaderWriter;
-
-import com.beust.jcommander.JCommander;
-import com.beust.jcommander.Parameter;
-import com.beust.jcommander.Parameters;
-
-/**
- * A utility to administer FATE operations
- */
-public class FateAdmin {
-
-  static class TxOpts {
-    @Parameter(description = "<txid>...", required = true)
-    List<String> txids = new ArrayList<>();
-  }
-
-  @Parameters(commandDescription = "Stop an existing FATE by transaction id")
-  static class FailOpts extends TxOpts {}
-
-  @Parameters(commandDescription = "Delete an existing FATE by transaction id")
-  static class DeleteOpts extends TxOpts {}
-
-  @Parameters(commandDescription = "List the existing FATE transactions")
-  static class PrintOpts {}
-
-  public static void main(String[] args) throws Exception {
-    Help opts = new Help();
-    JCommander jc = new JCommander(opts);
-    jc.setProgramName(FateAdmin.class.getName());
-    LinkedHashMap<String,TxOpts> txOpts = new LinkedHashMap<>(2);
-    txOpts.put("fail", new FailOpts());
-    txOpts.put("delete", new DeleteOpts());
-    for (Entry<String,TxOpts> entry : txOpts.entrySet()) {
-      jc.addCommand(entry.getKey(), entry.getValue());
-    }
-    jc.addCommand("print", new PrintOpts());
-    jc.parse(args);
-    if (opts.help || jc.getParsedCommand() == null) {
-      jc.usage();
-      System.exit(1);
-    }
-
-    System.err
-        .printf("This tool has been deprecated%nFATE administration now available within 'accumulo shell'%n$ fate fail <txid>... | delete <txid>... | print [<txid>...]%n%n");
-
-    AdminUtil<Master> admin = new AdminUtil<>();
-
-    Instance instance = HdfsZooInstance.getInstance();
-    String path = ZooUtil.getRoot(instance) + Constants.ZFATE;
-    String masterPath = ZooUtil.getRoot(instance) + Constants.ZMASTER_LOCK;
-    IZooReaderWriter zk = ZooReaderWriter.getInstance();
-    ZooStore<Master> zs = new ZooStore<>(path, zk);
-
-    if (jc.getParsedCommand().equals("fail")) {
-      for (String txid : txOpts.get(jc.getParsedCommand()).txids) {
-        if (!admin.prepFail(zs, zk, masterPath, txid)) {
-          System.exit(1);
-        }
-      }
-    } else if (jc.getParsedCommand().equals("delete")) {
-      for (String txid : txOpts.get(jc.getParsedCommand()).txids) {
-        if (!admin.prepDelete(zs, zk, masterPath, txid)) {
-          System.exit(1);
-        }
-        admin.deleteLocks(zs, zk, ZooUtil.getRoot(instance) + Constants.ZTABLE_LOCKS, txid);
-      }
-    } else if (jc.getParsedCommand().equals("print")) {
-      admin.print(new ReadOnlyStore<>(zs), zk, ZooUtil.getRoot(instance) + Constants.ZTABLE_LOCKS);
-    }
-  }
-}
diff --git a/server/master/src/main/java/org/apache/accumulo/master/util/TableValidators.java b/server/master/src/main/java/org/apache/accumulo/master/util/TableValidators.java
index 2a26fb0..9a22c31 100644
--- a/server/master/src/main/java/org/apache/accumulo/master/util/TableValidators.java
+++ b/server/master/src/main/java/org/apache/accumulo/master/util/TableValidators.java
@@ -35,7 +35,7 @@
 
   public static final Validator<String> VALID_NAME = new Validator<String>() {
     @Override
-    public boolean apply(String tableName) {
+    public boolean test(String tableName) {
       return tableName != null && tableName.matches(VALID_NAME_REGEX);
     }
 
@@ -49,7 +49,7 @@
 
   public static final Validator<String> VALID_ID = new Validator<String>() {
     @Override
-    public boolean apply(String tableId) {
+    public boolean test(String tableId) {
       return tableId != null
           && (RootTable.ID.equals(tableId) || MetadataTable.ID.equals(tableId) || ReplicationTable.ID.equals(tableId) || tableId.matches(VALID_ID_REGEX));
     }
@@ -67,7 +67,7 @@
     private List<String> metadataTables = Arrays.asList(RootTable.NAME, MetadataTable.NAME);
 
     @Override
-    public boolean apply(String tableName) {
+    public boolean test(String tableName) {
       return !metadataTables.contains(tableName);
     }
 
@@ -80,7 +80,7 @@
   public static final Validator<String> NOT_SYSTEM = new Validator<String>() {
 
     @Override
-    public boolean apply(String tableName) {
+    public boolean test(String tableName) {
       return !Namespaces.ACCUMULO_NAMESPACE.equals(qualify(tableName).getFirst());
     }
 
@@ -93,7 +93,7 @@
   public static final Validator<String> NOT_ROOT = new Validator<String>() {
 
     @Override
-    public boolean apply(String tableName) {
+    public boolean test(String tableName) {
       return !RootTable.NAME.equals(tableName);
     }
 
@@ -106,7 +106,7 @@
   public static final Validator<String> NOT_ROOT_ID = new Validator<String>() {
 
     @Override
-    public boolean apply(String tableId) {
+    public boolean test(String tableId) {
       return !RootTable.ID.equals(tableId);
     }
 
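The `apply` → `test` renames above track `java.util.function.Predicate`, whose abstract method is `test`. A minimal JDK-only sketch of the new shape (the regex is a simplified assumption for illustration, not Accumulo's actual `VALID_NAME_REGEX`, and `ValidatorSketch` is a hypothetical stand-in for the `Validator` class):

```java
import java.util.function.Predicate;

public class ValidatorSketch {
    // After this change, validators implement test() as in
    // java.util.function.Predicate, rather than Guava-style apply().
    static final Predicate<String> VALID_NAME =
        tableName -> tableName != null && tableName.matches("(\\w+\\.)?\\w+");

    public static void main(String[] args) {
        System.out.println(VALID_NAME.test("ns.table"));  // valid qualified name
        System.out.println(VALID_NAME.test("bad name"));  // space fails the match
    }
}
```

Because `Predicate` is a functional interface, call sites can also pass lambdas directly instead of anonymous subclasses.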
diff --git a/server/master/src/test/java/org/apache/accumulo/master/tableOps/ImportTableTest.java b/server/master/src/test/java/org/apache/accumulo/master/tableOps/ImportTableTest.java
index 080e0af..fc82f07 100644
--- a/server/master/src/test/java/org/apache/accumulo/master/tableOps/ImportTableTest.java
+++ b/server/master/src/test/java/org/apache/accumulo/master/tableOps/ImportTableTest.java
@@ -16,14 +16,14 @@
  */
 package org.apache.accumulo.master.tableOps;
 
+import java.util.Optional;
+
 import org.apache.accumulo.master.Master;
 import org.apache.accumulo.server.fs.VolumeManager;
 import org.easymock.EasyMock;
 import org.junit.Assert;
 import org.junit.Test;
 
-import com.google.common.base.Optional;
-
 /**
  *
  */
diff --git a/server/monitor/pom.xml b/server/monitor/pom.xml
index 6c56e59..f42a697 100644
--- a/server/monitor/pom.xml
+++ b/server/monitor/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.accumulo</groupId>
     <artifactId>accumulo-project</artifactId>
-    <version>1.8.0-SNAPSHOT</version>
+    <version>2.0.0-SNAPSHOT</version>
     <relativePath>../../pom.xml</relativePath>
   </parent>
   <artifactId>accumulo-monitor</artifactId>
@@ -41,10 +41,6 @@
       <artifactId>guava</artifactId>
     </dependency>
     <dependency>
-      <groupId>commons-codec</groupId>
-      <artifactId>commons-codec</artifactId>
-    </dependency>
-    <dependency>
       <groupId>javax.servlet</groupId>
       <artifactId>javax.servlet-api</artifactId>
     </dependency>
diff --git a/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/ShellServlet.java b/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/ShellServlet.java
index 31bea15..0aa9d72 100644
--- a/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/ShellServlet.java
+++ b/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/ShellServlet.java
@@ -81,7 +81,6 @@
       // user attribute is null, check to see if username and password are passed as parameters
       user = req.getParameter("user");
       String pass = req.getParameter("pass");
-      String mock = req.getParameter("mock");
       if (user == null || pass == null) {
         // username or password are null, re-authenticate
         sb.append(authenticationForm(req.getRequestURI(), CSRF_TOKEN));
@@ -89,7 +88,7 @@
       }
       try {
         // get a new shell for this user
-        ShellExecutionThread shellThread = new ShellExecutionThread(user, pass, mock);
+        ShellExecutionThread shellThread = new ShellExecutionThread(user, pass);
         service().submit(shellThread);
         userShells().put(session.getId(), shellThread);
       } catch (IOException e) {
@@ -224,8 +223,7 @@
 
   private String authenticationForm(String requestURI, String csrfToken) {
     return "<div id='login'><form method=POST action='" + requestURI + "'>"
-        + "<table><tr><td>Mock:&nbsp</td><td><input type='checkbox' name='mock' value='mock'></td></tr>"
-        + "<tr><td>Username:&nbsp;</td><td><input type='text' name='user'></td></tr>"
+        + "<table><tr><td>Username:&nbsp;</td><td><input type='text' name='user'></td></tr>"
         + "<tr><td>Password:&nbsp;</td><td><input type='password' name='pass'></td><td>" + "<input type='hidden' name='" + CSRF_KEY + "' value='" + csrfToken
         + "'/><input type='submit' value='Enter'></td></tr></table></form></div>";
   }
@@ -255,7 +253,7 @@
     private boolean done;
     private boolean readWait;
 
-    private ShellExecutionThread(String username, String password, String mock) throws IOException {
+    private ShellExecutionThread(String username, String password) throws IOException {
       this.done = false;
       this.cmd = null;
       this.cmdIndex = 0;
@@ -264,10 +262,7 @@
       ConsoleReader reader = new ConsoleReader(this, output);
       this.shell = new Shell(reader);
       shell.setLogErrorsToConsole();
-      if (mock != null) {
-        if (shell.config("--fake", "-u", username, "-p", password))
-          throw new IOException("mock shell config error");
-      } else if (shell.config("-u", username, "-p", password)) {
+      if (shell.config("-u", username, "-p", password)) {
         throw new IOException("shell config error");
       }
     }
diff --git a/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/TServersServlet.java b/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/TServersServlet.java
index 858192b..d2b695d 100644
--- a/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/TServersServlet.java
+++ b/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/TServersServlet.java
@@ -20,6 +20,7 @@
 import java.security.MessageDigest;
 import java.text.DateFormat;
 import java.util.ArrayList;
+import java.util.Base64;
 import java.util.List;
 import java.util.Map.Entry;
 
@@ -37,7 +38,6 @@
 import org.apache.accumulo.core.tabletserver.thrift.TabletClientService;
 import org.apache.accumulo.core.tabletserver.thrift.TabletStats;
 import org.apache.accumulo.core.trace.Tracer;
-import org.apache.accumulo.core.util.Base64;
 import org.apache.accumulo.core.util.Duration;
 import org.apache.accumulo.monitor.Monitor;
 import org.apache.accumulo.monitor.util.Table;
@@ -171,7 +171,7 @@
       if (extent.getEndRow() != null && extent.getEndRow().getLength() > 0) {
         digester.update(extent.getEndRow().getBytes(), 0, extent.getEndRow().getLength());
       }
-      String obscuredExtent = Base64.encodeBase64String(digester.digest());
+      String obscuredExtent = Base64.getEncoder().encodeToString(digester.digest());
       String displayExtent = String.format("<code>[%s]</code>", obscuredExtent);
 
       TableRow row = perTabletResults.prepareRow();
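The commons-codec `Base64.encodeBase64String` call is replaced by the JDK's `java.util.Base64`, which is why the `commons-codec` dependency can be dropped from the monitor pom below. A self-contained sketch of the hash-then-encode pattern used for the obscured extent (MD5 is assumed here for illustration; the digest algorithm is whatever the surrounding servlet configures):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;

public class ObscureExtent {
    // Hash the extent text, then Base64-encode the digest with the JDK
    // encoder instead of commons-codec's encodeBase64String.
    static String obscure(String extent) throws Exception {
        MessageDigest digester = MessageDigest.getInstance("MD5");
        digester.update(extent.getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(digester.digest());
    }

    public static void main(String[] args) throws Exception {
        System.out.println(obscure("2;endRow"));
    }
}
```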
diff --git a/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/trace/NullScanner.java b/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/trace/NullScanner.java
index b91d454..d96d23f 100644
--- a/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/trace/NullScanner.java
+++ b/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/trace/NullScanner.java
@@ -53,16 +53,6 @@
   @Override
   public void clearScanIterators() {}
 
-  @Deprecated
-  @Override
-  public void setTimeOut(int timeOut) {}
-
-  @Deprecated
-  @Override
-  public int getTimeOut() {
-    return 0;
-  }
-
   @Override
   public void setRange(Range range) {}
 
diff --git a/server/monitor/src/main/java/org/apache/accumulo/monitor/util/TableColumn.java b/server/monitor/src/main/java/org/apache/accumulo/monitor/util/TableColumn.java
index 1e00927..86ba393 100644
--- a/server/monitor/src/main/java/org/apache/accumulo/monitor/util/TableColumn.java
+++ b/server/monitor/src/main/java/org/apache/accumulo/monitor/util/TableColumn.java
@@ -26,7 +26,7 @@
 
   public TableColumn(String title, CellType<T> type, String legend) {
     this.title = title;
-    this.type = type != null ? type : new StringType<T>();
+    this.type = type != null ? type : new StringType<>();
     this.legend = legend;
   }
 
diff --git a/server/native/pom.xml b/server/native/pom.xml
index d0e5ee8..8a1ff4c 100644
--- a/server/native/pom.xml
+++ b/server/native/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.accumulo</groupId>
     <artifactId>accumulo-project</artifactId>
-    <version>1.8.0-SNAPSHOT</version>
+    <version>2.0.0-SNAPSHOT</version>
     <relativePath>../../pom.xml</relativePath>
   </parent>
   <artifactId>accumulo-native</artifactId>
diff --git a/server/tracer/pom.xml b/server/tracer/pom.xml
index cd69d9d..a01916d 100644
--- a/server/tracer/pom.xml
+++ b/server/tracer/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.accumulo</groupId>
     <artifactId>accumulo-project</artifactId>
-    <version>1.8.0-SNAPSHOT</version>
+    <version>2.0.0-SNAPSHOT</version>
     <relativePath>../../pom.xml</relativePath>
   </parent>
   <artifactId>accumulo-tracer</artifactId>
diff --git a/server/tracer/src/test/java/org/apache/accumulo/tracer/TracerTest.java b/server/tracer/src/test/java/org/apache/accumulo/tracer/TracerTest.java
index 6dcf7a4..19891a4 100644
--- a/server/tracer/src/test/java/org/apache/accumulo/tracer/TracerTest.java
+++ b/server/tracer/src/test/java/org/apache/accumulo/tracer/TracerTest.java
@@ -99,7 +99,6 @@
     public void close() throws IOException {}
   }
 
-  @SuppressWarnings("deprecation")
   @Test
   public void testTrace() throws Exception {
     TestReceiver tracer = new TestReceiver();
@@ -115,7 +114,7 @@
     assertFalse(Trace.isTracing());
 
     Span start = Trace.on("testing");
-    assertEquals(Trace.currentTrace().getSpan(), start.getScope().getSpan());
+    assertEquals(org.apache.htrace.Trace.currentSpan(), start.getScope().getSpan());
     assertTrue(Trace.isTracing());
 
     Span span = Trace.start("shortest trace ever");
diff --git a/server/tserver/pom.xml b/server/tserver/pom.xml
index 062ee1a..0a8959c 100644
--- a/server/tserver/pom.xml
+++ b/server/tserver/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.accumulo</groupId>
     <artifactId>accumulo-project</artifactId>
-    <version>1.8.0-SNAPSHOT</version>
+    <version>2.0.0-SNAPSHOT</version>
     <relativePath>../../pom.xml</relativePath>
   </parent>
   <artifactId>accumulo-tserver</artifactId>
diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/InMemoryMap.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/InMemoryMap.java
index c1ae9e6..9d5e0d0 100644
--- a/server/tserver/src/main/java/org/apache/accumulo/tserver/InMemoryMap.java
+++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/InMemoryMap.java
@@ -78,7 +78,6 @@
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-import com.google.common.base.Predicate;
 import com.google.common.collect.Iterables;
 
 public class InMemoryMap {
@@ -859,12 +858,7 @@
   }
 
   private AccumuloConfiguration createSampleConfig(AccumuloConfiguration siteConf) {
-    ConfigurationCopy confCopy = new ConfigurationCopy(Iterables.filter(siteConf, new Predicate<Entry<String,String>>() {
-      @Override
-      public boolean apply(Entry<String,String> input) {
-        return !input.getKey().startsWith(Property.TABLE_SAMPLER.getKey());
-      }
-    }));
+    ConfigurationCopy confCopy = new ConfigurationCopy(Iterables.filter(siteConf, input -> !input.getKey().startsWith(Property.TABLE_SAMPLER.getKey())));
 
     for (Entry<String,String> entry : samplerRef.get().getFirst().toTablePropertiesMap().entrySet()) {
       confCopy.set(entry.getKey(), entry.getValue());
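The anonymous Guava `Predicate` above collapses to a lambda because recent Guava versions mark `Predicate` as a functional interface. The same key-prefix filtering can be expressed with only the JDK's Stream API; `withoutSamplerProps` is a hypothetical helper, not an Accumulo method:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

public class SampleConfigFilter {
    // JDK-only equivalent of Iterables.filter(siteConf, input -> ...):
    // drop every property whose key starts with the sampler prefix.
    static Map<String,String> withoutSamplerProps(Map<String,String> siteConf,
                                                  String samplerPrefix) {
        return siteConf.entrySet().stream()
            .filter(e -> !e.getKey().startsWith(samplerPrefix))
            .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue,
                (a, b) -> a, LinkedHashMap::new));
    }
}
```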
diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/TabletServerResourceManager.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/TabletServerResourceManager.java
index 97606ea..089bd12 100644
--- a/server/tserver/src/main/java/org/apache/accumulo/tserver/TabletServerResourceManager.java
+++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/TabletServerResourceManager.java
@@ -410,7 +410,7 @@
           synchronized (tabletReports) {
             tabletReportsCopy = new HashMap<>(tabletReports);
           }
-          ArrayList<TabletState> tabletStates = new ArrayList<TabletState>(tabletReportsCopy.values());
+          ArrayList<TabletState> tabletStates = new ArrayList<>(tabletReportsCopy.values());
           mma = memoryManager.getMemoryManagementActions(tabletStates);
 
         } catch (Throwable t) {
diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/TservConstraintEnv.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/TservConstraintEnv.java
index fc371c9..5a50e8f 100644
--- a/server/tserver/src/main/java/org/apache/accumulo/tserver/TservConstraintEnv.java
+++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/TservConstraintEnv.java
@@ -24,7 +24,6 @@
 import org.apache.accumulo.core.data.ByteSequence;
 import org.apache.accumulo.core.data.impl.KeyExtent;
 import org.apache.accumulo.core.security.AuthorizationContainer;
-import org.apache.accumulo.core.security.Authorizations;
 import org.apache.accumulo.core.security.thrift.TCredentials;
 import org.apache.accumulo.server.security.SecurityOperation;
 
@@ -32,7 +31,6 @@
 
   private final TCredentials credentials;
   private final SecurityOperation security;
-  private Authorizations auths;
   private KeyExtent ke;
 
   TservConstraintEnv(SecurityOperation secOp, TCredentials credentials) {
@@ -55,18 +53,6 @@
   }
 
   @Override
-  @Deprecated
-  public Authorizations getAuthorizations() {
-    if (auths == null)
-      try {
-        this.auths = security.getUserAuthorizations(credentials);
-      } catch (ThriftSecurityException e) {
-        throw new RuntimeException(e);
-      }
-    return auths;
-  }
-
-  @Override
   public AuthorizationContainer getAuthorizationsContainer() {
     return new AuthorizationContainer() {
       @Override
diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/log/DfsLogger.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/log/DfsLogger.java
index 5280d41..083909e 100644
--- a/server/tserver/src/main/java/org/apache/accumulo/tserver/log/DfsLogger.java
+++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/log/DfsLogger.java
@@ -36,6 +36,7 @@
 import java.util.HashMap;
 import java.util.List;
 import java.util.Map;
+import java.util.Optional;
 import java.util.UUID;
 import java.util.concurrent.CountDownLatch;
 import java.util.concurrent.LinkedBlockingQueue;
@@ -68,7 +69,6 @@
 import org.slf4j.LoggerFactory;
 
 import com.google.common.base.Joiner;
-import com.google.common.base.Optional;
 
 /**
  * Wrap a connection to a logger.
@@ -438,7 +438,7 @@
     log.debug("DfsLogger.open() begin");
     VolumeManager fs = conf.getFileSystem();
 
-    logPath = fs.choose(Optional.<String> absent(), ServerConstants.getBaseUris()) + Path.SEPARATOR + ServerConstants.WAL_DIR + Path.SEPARATOR + logger
+    logPath = fs.choose(Optional.<String> empty(), ServerConstants.getBaseUris()) + Path.SEPARATOR + ServerConstants.WAL_DIR + Path.SEPARATOR + logger
         + Path.SEPARATOR + filename;
 
     metaReference = toString();
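The `Optional.<String>absent()` → `Optional.<String>empty()` change is part of the repo-wide move from `com.google.common.base.Optional` to `java.util.Optional`. A small sketch of the corresponding API renames, with `choose` as a hypothetical helper standing in for the volume-chooser call:

```java
import java.util.Optional;

public class OptionalMigration {
    // Guava: Optional.<String>absent()  ->  JDK: Optional.<String>empty()
    // Guava: opt.or(fallback)           ->  JDK: opt.orElse(fallback)
    static String choose(Optional<String> preferredVolume) {
        return preferredVolume.orElse("default-volume");
    }

    public static void main(String[] args) {
        System.out.println(choose(Optional.empty()));
        System.out.println(choose(Optional.of("hdfs://nn1/accumulo")));
    }
}
```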
diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/log/LocalWALRecovery.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/log/LocalWALRecovery.java
index 2667b53..a3d8782 100644
--- a/server/tserver/src/main/java/org/apache/accumulo/tserver/log/LocalWALRecovery.java
+++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/log/LocalWALRecovery.java
@@ -48,7 +48,6 @@
 /**
  * This class will attempt to rewrite any local WALs to HDFS.
  */
-@SuppressWarnings("deprecation")
 public class LocalWALRecovery implements Runnable {
   private static final Logger log = LoggerFactory.getLogger(LocalWALRecovery.class);
 
@@ -154,8 +153,8 @@
           Path localWal = new Path(file.toURI());
           FileSystem localFs = FileSystem.getLocal(fs.getConf());
 
-          Reader reader = new SequenceFile.Reader(localFs, localWal, localFs.getConf());
-          // Reader reader = new SequenceFile.Reader(localFs.getConf(), SequenceFile.Reader.file(localWal));
+          Reader reader = new SequenceFile.Reader(localFs.getConf(), SequenceFile.Reader.file(localWal.makeQualified(localWal.toUri(),
+              localFs.getWorkingDirectory())));
           Path tmp = new Path(options.destination + "/" + name + ".copy");
           FSDataOutputStream writer = fs.create(tmp);
           while (reader.next(key, value)) {
diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/ScanDataSource.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/ScanDataSource.java
index e48d91e..dd8c020 100644
--- a/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/ScanDataSource.java
+++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/ScanDataSource.java
@@ -49,11 +49,14 @@
 import org.apache.accumulo.tserver.InMemoryMap.MemoryIterator;
 import org.apache.accumulo.tserver.TabletIteratorEnvironment;
 import org.apache.accumulo.tserver.TabletServer;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 import com.google.common.collect.Iterables;
 
 class ScanDataSource implements DataSource {
 
+  private static final Logger log = LoggerFactory.getLogger(ScanDataSource.class);
   // data source state
   private final Tablet tablet;
   private ScanFileManager fileManager;
@@ -76,6 +79,8 @@
     this.options = new ScanOptions(-1, authorizations, defaultLabels, columnSet, ssiList, ssio, interruptFlag, false, samplerConfig, batchTimeOut, context);
     this.interruptFlag = interruptFlag;
     this.loadIters = true;
+    log.debug("new scan data source, tablet: {}, options: {}, interruptFlag: {}, loadIterators: {}", this.tablet, this.options, this.interruptFlag,
+        this.loadIters);
   }
 
   ScanDataSource(Tablet tablet, ScanOptions options) {
@@ -84,6 +89,8 @@
     this.options = options;
     this.interruptFlag = options.getInterruptFlag();
     this.loadIters = true;
+    log.debug("new scan data source, tablet: {}, options: {}, interruptFlag: {}, loadIterators: {}", this.tablet, this.options, this.interruptFlag,
+        this.loadIters);
   }
 
   ScanDataSource(Tablet tablet, Authorizations authorizations, byte[] defaultLabels, AtomicBoolean iFlag) {
@@ -92,6 +99,8 @@
     this.options = new ScanOptions(-1, authorizations, defaultLabels, EMPTY_COLS, null, null, iFlag, false, null, -1, null);
     this.interruptFlag = iFlag;
     this.loadIters = false;
+    log.debug("new scan data source, tablet: {}, options: {}, interruptFlag: {}, loadIterators: {}", this.tablet, this.options, this.interruptFlag,
+        this.loadIters);
   }
 
   @Override
@@ -187,9 +196,11 @@
     if (!loadIters) {
       return visFilter;
     } else if (null == options.getClassLoaderContext()) {
+      log.trace("Loading iterators for scan");
       return iterEnv.getTopLevelIterator(IteratorUtil.loadIterators(IteratorScope.scan, visFilter, tablet.getExtent(), tablet.getTableConfiguration(),
           options.getSsiList(), options.getSsio(), iterEnv));
     } else {
+      log.trace("Loading iterators for scan with scan context: {}", options.getClassLoaderContext());
       return iterEnv.getTopLevelIterator(IteratorUtil.loadIterators(IteratorScope.scan, visFilter, tablet.getExtent(), tablet.getTableConfiguration(),
           options.getSsiList(), options.getSsio(), iterEnv, true, options.getClassLoaderContext()));
     }
diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/ScanOptions.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/ScanOptions.java
index dceac08..a898653 100644
--- a/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/ScanOptions.java
+++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/ScanOptions.java
@@ -109,4 +109,20 @@
   public void setClassLoaderContext(String context) {
     this.classLoaderContext = context;
   }
+
+  @Override
+  public String toString() {
+    StringBuilder buf = new StringBuilder();
+    buf.append("[");
+    buf.append("auths=").append(this.authorizations);
+    buf.append(", batchTimeOut=").append(this.batchTimeOut);
+    buf.append(", context=").append(this.classLoaderContext);
+    buf.append(", columns=").append(this.columnSet);
+    buf.append(", interruptFlag=").append(this.interruptFlag);
+    buf.append(", isolated=").append(this.isolated);
+    buf.append(", num=").append(this.num);
+    buf.append(", samplerConfig=").append(this.samplerConfig);
+    buf.append("]");
+    return buf.toString();
+  }
 }
diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/Tablet.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/Tablet.java
index 6637521..f0c0695 100644
--- a/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/Tablet.java
+++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/Tablet.java
@@ -35,6 +35,7 @@
 import java.util.List;
 import java.util.Map;
 import java.util.Map.Entry;
+import java.util.Optional;
 import java.util.PriorityQueue;
 import java.util.Set;
 import java.util.SortedMap;
@@ -151,7 +152,6 @@
 import org.apache.zookeeper.KeeperException.NoNodeException;
 
 import com.google.common.annotations.VisibleForTesting;
-import com.google.common.base.Optional;
 import com.google.common.cache.Cache;
 import com.google.common.cache.CacheBuilder;
 
diff --git a/server/tserver/src/test/java/org/apache/accumulo/tserver/LargestFirstMemoryManagerTest.java b/server/tserver/src/test/java/org/apache/accumulo/tserver/LargestFirstMemoryManagerTest.java
index 82ec8ec..9f7ba5c 100644
--- a/server/tserver/src/test/java/org/apache/accumulo/tserver/LargestFirstMemoryManagerTest.java
+++ b/server/tserver/src/test/java/org/apache/accumulo/tserver/LargestFirstMemoryManagerTest.java
@@ -20,6 +20,7 @@
 
 import java.util.Arrays;
 import java.util.List;
+import java.util.function.Function;
 
 import org.apache.accumulo.core.client.Instance;
 import org.apache.accumulo.core.conf.AccumuloConfiguration;
@@ -37,8 +38,6 @@
 import org.junit.Before;
 import org.junit.Test;
 
-import com.google.common.base.Function;
-
 public class LargestFirstMemoryManagerTest {
 
   private static final long ZERO = System.currentTimeMillis();
@@ -176,12 +175,7 @@
   @Test
   public void testDeletedTable() throws Exception {
     final String deletedTableId = "1";
-    Function<String,Boolean> existenceCheck = new Function<String,Boolean>() {
-      @Override
-      public Boolean apply(String tableId) {
-        return !deletedTableId.equals(tableId);
-      }
-    };
+    Function<String,Boolean> existenceCheck = tableId -> !deletedTableId.equals(tableId);
     LargestFirstMemoryManagerWithExistenceCheck mgr = new LargestFirstMemoryManagerWithExistenceCheck(existenceCheck);
     ServerConfiguration config = new ServerConfiguration() {
       ServerConfigurationFactory delegate = new ServerConfigurationFactory(inst);
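Note that switching from `com.google.common.base.Function` to `java.util.function.Function` keeps the method name `apply`, so only the import and the lambda syntax change here. A minimal JDK-only sketch of the same existence check:

```java
import java.util.function.Function;

public class ExistenceCheckSketch {
    // A lambda replaces the anonymous Guava Function subclass; the
    // functional method is still apply() in java.util.function.Function.
    static final String DELETED_TABLE_ID = "1";
    static final Function<String,Boolean> EXISTENCE_CHECK =
        tableId -> !DELETED_TABLE_ID.equals(tableId);

    public static void main(String[] args) {
        System.out.println(EXISTENCE_CHECK.apply("1"));  // deleted table
        System.out.println(EXISTENCE_CHECK.apply("2"));  // still exists
    }
}
```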
diff --git a/server/tserver/src/test/java/org/apache/accumulo/tserver/TabletServerSyncCheckTest.java b/server/tserver/src/test/java/org/apache/accumulo/tserver/TabletServerSyncCheckTest.java
index 65282bb..dd0be57 100644
--- a/server/tserver/src/test/java/org/apache/accumulo/tserver/TabletServerSyncCheckTest.java
+++ b/server/tserver/src/test/java/org/apache/accumulo/tserver/TabletServerSyncCheckTest.java
@@ -19,6 +19,7 @@
 import java.io.IOException;
 import java.util.Collections;
 import java.util.Map;
+import java.util.Optional;
 
 import org.apache.accumulo.core.conf.ConfigurationCopy;
 import org.apache.accumulo.core.data.Key;
@@ -35,7 +36,6 @@
 import org.apache.hadoop.hdfs.DistributedFileSystem;
 import org.junit.Test;
 
-import com.google.common.base.Optional;
 import com.google.common.collect.ImmutableMap;
 
 public class TabletServerSyncCheckTest {
diff --git a/server/tserver/src/test/java/org/apache/accumulo/tserver/log/SortedLogRecoveryTest.java b/server/tserver/src/test/java/org/apache/accumulo/tserver/log/SortedLogRecoveryTest.java
index b65d5ce..d9a4e0e 100644
--- a/server/tserver/src/test/java/org/apache/accumulo/tserver/log/SortedLogRecoveryTest.java
+++ b/server/tserver/src/test/java/org/apache/accumulo/tserver/log/SortedLogRecoveryTest.java
@@ -138,8 +138,7 @@
       for (Entry<String,KeyValue[]> entry : logs.entrySet()) {
         String path = workdir + "/" + entry.getKey();
         FileSystem ns = fs.getVolumeByPath(new Path(path)).getFileSystem();
-        @SuppressWarnings("deprecation")
-        Writer map = new MapFile.Writer(ns.getConf(), ns, path + "/log1", LogFileKey.class, LogFileValue.class);
+        Writer map = new MapFile.Writer(ns.getConf(), new Path(path + "/log1"), Writer.keyClass(LogFileKey.class), Writer.valueClass(LogFileValue.class));
         for (KeyValue lfe : entry.getValue()) {
           map.append(lfe.key, lfe.value);
         }
diff --git a/server/tserver/src/test/java/org/apache/accumulo/tserver/tablet/RootFilesTest.java b/server/tserver/src/test/java/org/apache/accumulo/tserver/tablet/RootFilesTest.java
index d9c6862..7a1a785 100644
--- a/server/tserver/src/test/java/org/apache/accumulo/tserver/tablet/RootFilesTest.java
+++ b/server/tserver/src/test/java/org/apache/accumulo/tserver/tablet/RootFilesTest.java
@@ -41,6 +41,10 @@
  *
  */
 public class RootFilesTest {
+  @SuppressWarnings("deprecation")
+  private static final Property INSTANCE_DFS_DIR = Property.INSTANCE_DFS_DIR;
+  @SuppressWarnings("deprecation")
+  private static final Property INSTANCE_DFS_URI = Property.INSTANCE_DFS_URI;
 
   @Rule
   public TemporaryFolder tempFolder = new TemporaryFolder(new File(System.getProperty("user.dir") + "/target"));
@@ -116,13 +120,12 @@
     }
   }
 
-  @SuppressWarnings("deprecation")
   @Test
   public void testFileReplacement() throws IOException {
 
     ConfigurationCopy conf = new ConfigurationCopy();
-    conf.set(Property.INSTANCE_DFS_URI, "file:///");
-    conf.set(Property.INSTANCE_DFS_DIR, "/");
+    conf.set(INSTANCE_DFS_URI, "file:///");
+    conf.set(INSTANCE_DFS_DIR, "/");
 
     VolumeManager vm = VolumeManagerImpl.get(conf);
 
diff --git a/shell/pom.xml b/shell/pom.xml
index 2dee4c2..77c2553 100644
--- a/shell/pom.xml
+++ b/shell/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.accumulo</groupId>
     <artifactId>accumulo-project</artifactId>
-    <version>1.8.0-SNAPSHOT</version>
+    <version>2.0.0-SNAPSHOT</version>
   </parent>
   <artifactId>accumulo-shell</artifactId>
   <name>Apache Accumulo Shell</name>
@@ -48,10 +48,6 @@
       <artifactId>commons-cli</artifactId>
     </dependency>
     <dependency>
-      <groupId>commons-codec</groupId>
-      <artifactId>commons-codec</artifactId>
-    </dependency>
-    <dependency>
       <groupId>commons-collections</groupId>
       <artifactId>commons-collections</artifactId>
     </dependency>
diff --git a/shell/src/main/java/org/apache/accumulo/shell/Shell.java b/shell/src/main/java/org/apache/accumulo/shell/Shell.java
index 7678ead..1819f43 100644
--- a/shell/src/main/java/org/apache/accumulo/shell/Shell.java
+++ b/shell/src/main/java/org/apache/accumulo/shell/Shell.java
@@ -66,7 +66,6 @@
 import org.apache.accumulo.core.tabletserver.thrift.ConstraintViolationException;
 import org.apache.accumulo.core.trace.DistributedTrace;
 import org.apache.accumulo.core.util.BadArgumentException;
-import org.apache.accumulo.core.util.DeprecationUtil;
 import org.apache.accumulo.core.util.format.DefaultFormatter;
 import org.apache.accumulo.core.util.format.Formatter;
 import org.apache.accumulo.core.util.format.FormatterConfig;
@@ -96,7 +95,6 @@
 import org.apache.accumulo.shell.commands.DeleteManyCommand;
 import org.apache.accumulo.shell.commands.DeleteNamespaceCommand;
 import org.apache.accumulo.shell.commands.DeleteRowsCommand;
-import org.apache.accumulo.shell.commands.DeleteScanIterCommand;
 import org.apache.accumulo.shell.commands.DeleteShellIterCommand;
 import org.apache.accumulo.shell.commands.DeleteTableCommand;
 import org.apache.accumulo.shell.commands.DeleteUserCommand;
@@ -149,7 +147,6 @@
 import org.apache.accumulo.shell.commands.SetAuthsCommand;
 import org.apache.accumulo.shell.commands.SetGroupsCommand;
 import org.apache.accumulo.shell.commands.SetIterCommand;
-import org.apache.accumulo.shell.commands.SetScanIterCommand;
 import org.apache.accumulo.shell.commands.SetShellIterCommand;
 import org.apache.accumulo.shell.commands.SleepCommand;
 import org.apache.accumulo.shell.commands.SystemPermissionsCommand;
@@ -371,9 +368,7 @@
         }
       }
 
-      if (!options.isFake()) {
-        DistributedTrace.enable(InetAddress.getLocalHost().getHostName(), "shell", clientConf);
-      }
+      DistributedTrace.enable(InetAddress.getLocalHost().getHostName(), "shell", clientConf);
 
       this.setTableName("");
       connector = instance.getConnector(user, token);
@@ -406,8 +401,8 @@
     Command[] execCommands = {new ExecfileCommand(), new HistoryCommand(), new ExtensionCommand(), new ScriptCommand()};
     Command[] exitCommands = {new ByeCommand(), new ExitCommand(), new QuitCommand()};
     Command[] helpCommands = {new AboutCommand(), new HelpCommand(), new InfoCommand(), new QuestionCommand()};
-    Command[] iteratorCommands = {new DeleteIterCommand(), new DeleteScanIterCommand(), new ListIterCommand(), new SetIterCommand(), new SetScanIterCommand(),
-        new SetShellIterCommand(), new ListShellIterCommand(), new DeleteShellIterCommand()};
+    Command[] iteratorCommands = {new DeleteIterCommand(), new ListIterCommand(), new SetIterCommand(), new SetShellIterCommand(), new ListShellIterCommand(),
+        new DeleteShellIterCommand()};
     Command[] otherCommands = {new HiddenCommand()};
     Command[] permissionsCommands = {new GrantCommand(), new RevokeCommand(), new SystemPermissionsCommand(), new TablePermissionsCommand(),
         new UserPermissionsCommand(), new NamespacePermissionsCommand()};
@@ -451,28 +446,24 @@
   protected void setInstance(ShellOptionsJC options) {
     // should only be one set of instance options set
     instance = null;
-    if (options.isFake()) {
-      instance = DeprecationUtil.makeMockInstance("fake");
+    String instanceName, hosts;
+    if (options.isHdfsZooInstance()) {
+      instanceName = hosts = null;
+    } else if (options.getZooKeeperInstance().size() > 0) {
+      List<String> zkOpts = options.getZooKeeperInstance();
+      instanceName = zkOpts.get(0);
+      hosts = zkOpts.get(1);
     } else {
-      String instanceName, hosts;
-      if (options.isHdfsZooInstance()) {
-        instanceName = hosts = null;
-      } else if (options.getZooKeeperInstance().size() > 0) {
-        List<String> zkOpts = options.getZooKeeperInstance();
-        instanceName = zkOpts.get(0);
-        hosts = zkOpts.get(1);
-      } else {
-        instanceName = options.getZooKeeperInstanceName();
-        hosts = options.getZooKeeperHosts();
-      }
-      final ClientConfiguration clientConf;
-      try {
-        clientConf = options.getClientConfiguration();
-      } catch (ConfigurationException | FileNotFoundException e) {
-        throw new IllegalArgumentException("Unable to load client config from " + options.getClientConfigFile(), e);
-      }
-      instance = getZooInstance(instanceName, hosts, clientConf);
+      instanceName = options.getZooKeeperInstanceName();
+      hosts = options.getZooKeeperHosts();
     }
+    final ClientConfiguration clientConf;
+    try {
+      clientConf = options.getClientConfiguration();
+    } catch (ConfigurationException | FileNotFoundException e) {
+      throw new IllegalArgumentException("Unable to load client config from " + options.getClientConfigFile(), e);
+    }
+    instance = getZooInstance(instanceName, hosts, clientConf);
   }
 
   /**
diff --git a/shell/src/main/java/org/apache/accumulo/shell/ShellOptionsJC.java b/shell/src/main/java/org/apache/accumulo/shell/ShellOptionsJC.java
index 1e5156f..7f476c1 100644
--- a/shell/src/main/java/org/apache/accumulo/shell/ShellOptionsJC.java
+++ b/shell/src/main/java/org/apache/accumulo/shell/ShellOptionsJC.java
@@ -241,10 +241,6 @@
     return debugEnabled;
   }
 
-  public boolean isFake() {
-    return fake;
-  }
-
   public boolean isHelpEnabled() {
     return helpEnabled;
   }
diff --git a/shell/src/main/java/org/apache/accumulo/shell/ShellUtil.java b/shell/src/main/java/org/apache/accumulo/shell/ShellUtil.java
index aa74556..0cdaf18 100644
--- a/shell/src/main/java/org/apache/accumulo/shell/ShellUtil.java
+++ b/shell/src/main/java/org/apache/accumulo/shell/ShellUtil.java
@@ -21,17 +21,17 @@
 import java.io.File;
 import java.io.FileNotFoundException;
 import java.util.ArrayList;
+import java.util.Base64;
 import java.util.List;
 import java.util.Scanner;
 
-import org.apache.accumulo.core.util.Base64;
 import org.apache.hadoop.io.Text;
 
 public class ShellUtil {
 
   /**
    * Scans the given file line-by-line (ignoring empty lines) and returns a list containing those lines. If decode is set to true, every line is decoded using
-   * {@link Base64#decodeBase64(byte[])} from the UTF-8 bytes of that line before inserting in the list.
+   * {@link Base64#getDecoder()} before inserting in the list.
    *
    * @param filename
    *          Path to the file that needs to be scanned
@@ -49,7 +49,7 @@
       while (file.hasNextLine()) {
         line = file.nextLine();
         if (!line.isEmpty()) {
-          result.add(decode ? new Text(Base64.decodeBase64(line.getBytes(UTF_8))) : new Text(line));
+          result.add(decode ? new Text(Base64.getDecoder().decode(line)) : new Text(line));
         }
       }
     } finally {
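The ShellUtil hunk above swaps the commons-codec-style `org.apache.accumulo.core.util.Base64` for the JDK's `java.util.Base64`. A minimal standalone sketch of the new decode path (not Accumulo code — the class name is illustrative):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class Base64DecodeSketch {
  // mirrors the new ShellUtil behavior: decode a base64 line straight from the String
  static byte[] decodeLine(String line) {
    return Base64.getDecoder().decode(line);
  }

  public static void main(String[] args) {
    // "cm93MQ==" is the base64 encoding of "row1"
    String decoded = new String(decodeLine("cm93MQ=="), StandardCharsets.UTF_8);
    System.out.println(decoded);
  }
}
```

Note that `Base64.getDecoder().decode(String)` accepts the encoded `String` directly, which is why the explicit `line.getBytes(UTF_8)` from the old call site disappears in the diff.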
diff --git a/shell/src/main/java/org/apache/accumulo/shell/commands/DeleteScanIterCommand.java b/shell/src/main/java/org/apache/accumulo/shell/commands/DeleteScanIterCommand.java
deleted file mode 100644
index 7c8cf22..0000000
--- a/shell/src/main/java/org/apache/accumulo/shell/commands/DeleteScanIterCommand.java
+++ /dev/null
@@ -1,103 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.shell.commands;
-
-import java.util.Iterator;
-import java.util.List;
-
-import org.apache.accumulo.core.client.IteratorSetting;
-import org.apache.accumulo.shell.Shell;
-import org.apache.accumulo.shell.Shell.Command;
-import org.apache.commons.cli.CommandLine;
-import org.apache.commons.cli.Option;
-import org.apache.commons.cli.OptionGroup;
-import org.apache.commons.cli.Options;
-
-public class DeleteScanIterCommand extends Command {
-  private Option nameOpt, allOpt;
-
-  @Override
-  public int execute(final String fullCommand, final CommandLine cl, final Shell shellState) throws Exception {
-    Shell.log.warn("Deprecated, use " + new DeleteShellIterCommand().getName());
-    final String tableName = OptUtil.getTableOpt(cl, shellState);
-
-    if (cl.hasOption(allOpt.getOpt())) {
-      final List<IteratorSetting> tableScanIterators = shellState.scanIteratorOptions.remove(tableName);
-      if (tableScanIterators == null) {
-        Shell.log.info("No scan iterators set on table " + tableName);
-      } else {
-        Shell.log.info("Removed the following scan iterators from table " + tableName + ":" + tableScanIterators);
-      }
-    } else if (cl.hasOption(nameOpt.getOpt())) {
-      final String name = cl.getOptionValue(nameOpt.getOpt());
-      final List<IteratorSetting> tableScanIterators = shellState.scanIteratorOptions.get(tableName);
-      if (tableScanIterators != null) {
-        boolean found = false;
-        for (Iterator<IteratorSetting> iter = tableScanIterators.iterator(); iter.hasNext();) {
-          if (iter.next().getName().equals(name)) {
-            iter.remove();
-            found = true;
-            break;
-          }
-        }
-        if (!found) {
-          Shell.log.info("No iterator named " + name + " found for table " + tableName);
-        } else {
-          Shell.log.info("Removed scan iterator " + name + " from table " + tableName + " (" + shellState.scanIteratorOptions.get(tableName).size() + " left)");
-          if (shellState.scanIteratorOptions.get(tableName).size() == 0) {
-            shellState.scanIteratorOptions.remove(tableName);
-          }
-        }
-      } else {
-        Shell.log.info("No iterator named " + name + " found for table " + tableName);
-      }
-    }
-
-    return 0;
-  }
-
-  @Override
-  public String description() {
-    return "deletes a table-specific scan iterator so it is no longer used during this shell session";
-  }
-
-  @Override
-  public Options getOptions() {
-    final Options o = new Options();
-
-    OptionGroup nameGroup = new OptionGroup();
-
-    nameOpt = new Option("n", "name", true, "iterator to delete");
-    nameOpt.setArgName("itername");
-
-    allOpt = new Option("a", "all", false, "delete all scan iterators");
-    allOpt.setArgName("all");
-
-    nameGroup.addOption(nameOpt);
-    nameGroup.addOption(allOpt);
-    nameGroup.setRequired(true);
-    o.addOptionGroup(nameGroup);
-    o.addOption(OptUtil.tableOpt("table to delete scan iterators from"));
-
-    return o;
-  }
-
-  @Override
-  public int numArgs() {
-    return 0;
-  }
-}
diff --git a/shell/src/main/java/org/apache/accumulo/shell/commands/FateCommand.java b/shell/src/main/java/org/apache/accumulo/shell/commands/FateCommand.java
index 88eeefd..d43d4e3 100644
--- a/shell/src/main/java/org/apache/accumulo/shell/commands/FateCommand.java
+++ b/shell/src/main/java/org/apache/accumulo/shell/commands/FateCommand.java
@@ -16,10 +16,12 @@
  */
 package org.apache.accumulo.shell.commands;
 
+import static java.nio.charset.StandardCharsets.UTF_8;
+
 import java.io.IOException;
 import java.lang.reflect.Type;
-import java.nio.charset.StandardCharsets;
 import java.util.ArrayList;
+import java.util.Base64;
 import java.util.Collections;
 import java.util.EnumSet;
 import java.util.Formatter;
@@ -33,7 +35,6 @@
 import org.apache.accumulo.core.conf.DefaultConfiguration;
 import org.apache.accumulo.core.conf.Property;
 import org.apache.accumulo.core.conf.SiteConfiguration;
-import org.apache.accumulo.core.util.Base64;
 import org.apache.accumulo.core.zookeeper.ZooUtil;
 import org.apache.accumulo.fate.AdminUtil;
 import org.apache.accumulo.fate.ReadOnlyRepo;
@@ -84,8 +85,8 @@
     public String asBase64;
 
     ByteArrayContainer(byte[] ba) {
-      asUtf8 = new String(ba, StandardCharsets.UTF_8);
-      asBase64 = Base64.encodeBase64URLSafeString(ba);
+      asUtf8 = new String(ba, UTF_8);
+      asBase64 = Base64.getUrlEncoder().encodeToString(ba);
     }
   }
 
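One subtlety in the FateCommand hunk above: commons-codec's `encodeBase64URLSafeString` omitted `=` padding, while the JDK's `Base64.getUrlEncoder()` keeps it. If byte-for-byte compatibility with the old output matters, `withoutPadding()` restores the previous behavior — a sketch for comparison, not a claim about what this patch intends:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class UrlSafeSketch {
  // what the patched code produces: URL-safe alphabet, padding retained
  static String withPad(byte[] ba) {
    return Base64.getUrlEncoder().encodeToString(ba);
  }

  // matches commons-codec's encodeBase64URLSafeString, which dropped '=' padding
  static String noPad(byte[] ba) {
    return Base64.getUrlEncoder().withoutPadding().encodeToString(ba);
  }

  public static void main(String[] args) {
    byte[] ba = "ab".getBytes(StandardCharsets.UTF_8);
    System.out.println(withPad(ba)); // YWI=
    System.out.println(noPad(ba));   // YWI
  }
}
```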
diff --git a/shell/src/main/java/org/apache/accumulo/shell/commands/GetSplitsCommand.java b/shell/src/main/java/org/apache/accumulo/shell/commands/GetSplitsCommand.java
index 17b7db4..cae7af2 100644
--- a/shell/src/main/java/org/apache/accumulo/shell/commands/GetSplitsCommand.java
+++ b/shell/src/main/java/org/apache/accumulo/shell/commands/GetSplitsCommand.java
@@ -19,6 +19,7 @@
 import java.io.IOException;
 import java.security.MessageDigest;
 import java.security.NoSuchAlgorithmException;
+import java.util.Base64;
 import java.util.Iterator;
 import java.util.Map.Entry;
 
@@ -34,7 +35,6 @@
 import org.apache.accumulo.core.metadata.RootTable;
 import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.core.util.Base64;
 import org.apache.accumulo.core.util.TextUtil;
 import org.apache.accumulo.core.util.format.DefaultFormatter;
 import org.apache.accumulo.shell.Shell;
@@ -103,7 +103,7 @@
       return null;
     }
     final int length = text.getLength();
-    return encode ? Base64.encodeBase64String(TextUtil.getBytes(text)) : DefaultFormatter.appendText(new StringBuilder(), text, length).toString();
+    return encode ? Base64.getEncoder().encodeToString(TextUtil.getBytes(text)) : DefaultFormatter.appendText(new StringBuilder(), text, length).toString();
   }
 
   private static String obscuredTabletName(final KeyExtent extent) {
@@ -116,7 +116,7 @@
     if (extent.getEndRow() != null && extent.getEndRow().getLength() > 0) {
       digester.update(extent.getEndRow().getBytes(), 0, extent.getEndRow().getLength());
     }
-    return Base64.encodeBase64String(digester.digest());
+    return Base64.getEncoder().encodeToString(digester.digest());
   }
 
   @Override
diff --git a/shell/src/main/java/org/apache/accumulo/shell/commands/HiddenCommand.java b/shell/src/main/java/org/apache/accumulo/shell/commands/HiddenCommand.java
index 14d6c2c..4098f67 100644
--- a/shell/src/main/java/org/apache/accumulo/shell/commands/HiddenCommand.java
+++ b/shell/src/main/java/org/apache/accumulo/shell/commands/HiddenCommand.java
@@ -19,9 +19,9 @@
 import static java.nio.charset.StandardCharsets.UTF_8;
 
 import java.security.SecureRandom;
+import java.util.Base64;
 import java.util.Random;
 
-import org.apache.accumulo.core.util.Base64;
 import org.apache.accumulo.shell.Shell;
 import org.apache.accumulo.shell.Shell.Command;
 import org.apache.accumulo.shell.ShellCommandException;
@@ -42,9 +42,10 @@
       shellState.getReader().beep();
       shellState.getReader().println();
       shellState.getReader().println(
-          new String(Base64.decodeBase64(("ICAgICAgIC4tLS4KICAgICAgLyAvXCBcCiAgICAgKCAvLS1cICkKICAgICAuPl8gIF88LgogICAgLyB8ICd8ICcgXAog"
-              + "ICAvICB8Xy58Xy4gIFwKICAvIC98ICAgICAgfFwgXAogfCB8IHwgfFwvfCB8IHwgfAogfF98IHwgfCAgfCB8IHxffAogICAgIC8gIF9fICBcCiAgICAvICAv"
-              + "ICBcICBcCiAgIC8gIC8gICAgXCAgXF8KIHwvICAvICAgICAgXCB8IHwKIHxfXy8gICAgICAgIFx8X3wK").getBytes(UTF_8)), UTF_8));
+          new String(Base64.getDecoder().decode(
+              "ICAgICAgIC4tLS4KICAgICAgLyAvXCBcCiAgICAgKCAvLS1cICkKICAgICAuPl8gIF88LgogICAgLyB8ICd8ICcgXAog"
+                  + "ICAvICB8Xy58Xy4gIFwKICAvIC98ICAgICAgfFwgXAogfCB8IHwgfFwvfCB8IHwgfAogfF98IHwgfCAgfCB8IHxffAogICAgIC8gIF9fICBcCiAgICAvICAv"
+                  + "ICBcICBcCiAgIC8gIC8gICAgXCAgXF8KIHwvICAvICAgICAgXCB8IHwKIHxfXy8gICAgICAgIFx8X3wK"), UTF_8));
     } else {
       throw new ShellCommandException(ErrorCode.UNRECOGNIZED_COMMAND, getName());
     }
diff --git a/shell/src/main/java/org/apache/accumulo/shell/commands/HistoryCommand.java b/shell/src/main/java/org/apache/accumulo/shell/commands/HistoryCommand.java
index 785b49e..2caf69a 100644
--- a/shell/src/main/java/org/apache/accumulo/shell/commands/HistoryCommand.java
+++ b/shell/src/main/java/org/apache/accumulo/shell/commands/HistoryCommand.java
@@ -19,17 +19,16 @@
 import java.io.IOException;
 import java.util.Iterator;
 
-import jline.console.history.History.Entry;
-
 import org.apache.accumulo.shell.Shell;
 import org.apache.accumulo.shell.Shell.Command;
 import org.apache.commons.cli.CommandLine;
 import org.apache.commons.cli.Option;
 import org.apache.commons.cli.Options;
 
-import com.google.common.base.Function;
 import com.google.common.collect.Iterators;
 
+import jline.console.history.History.Entry;
+
 public class HistoryCommand extends Command {
   private Option clearHist;
   private Option disablePaginationOpt;
@@ -40,13 +39,7 @@
       shellState.getReader().getHistory().clear();
     } else {
       Iterator<Entry> source = shellState.getReader().getHistory().entries();
-      Iterator<String> historyIterator = Iterators.transform(source, new Function<Entry,String>() {
-        @Override
-        public String apply(Entry input) {
-          return String.format("%d: %s", input.index() + 1, input.value());
-        }
-      });
-
+      Iterator<String> historyIterator = Iterators.transform(source, input -> String.format("%d: %s", input.index() + 1, input.value()));
       shellState.printLines(historyIterator, !cl.hasOption(disablePaginationOpt.getOpt()));
     }
 
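The HistoryCommand hunk replaces an anonymous `com.google.common.base.Function` with a lambda passed to Guava's `Iterators.transform`. The same shape in plain Java, using a hypothetical stand-in for jline's `History.Entry` and a hand-rolled transforming iterator (both are illustrative, not the real dependencies):

```java
import java.util.Iterator;
import java.util.List;
import java.util.function.Function;

public class LambdaTransformSketch {
  // hypothetical stand-in for jline.console.history.History.Entry
  record HistEntry(int index, String value) {}

  static Iterator<String> format(Iterator<HistEntry> source) {
    // the lambda that replaces the anonymous Function in the diff
    Function<HistEntry,String> fmt = input -> String.format("%d: %s", input.index() + 1, input.value());
    // plain-Java equivalent of Guava's Iterators.transform(source, fmt)
    return new Iterator<String>() {
      @Override public boolean hasNext() { return source.hasNext(); }
      @Override public String next() { return fmt.apply(source.next()); }
    };
  }

  public static void main(String[] args) {
    Iterator<String> it = format(List.of(new HistEntry(0, "tables")).iterator());
    System.out.println(it.next()); // 1: tables
  }
}
```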
diff --git a/shell/src/main/java/org/apache/accumulo/shell/commands/NamespacesCommand.java b/shell/src/main/java/org/apache/accumulo/shell/commands/NamespacesCommand.java
index cb37505..a7ae6a7 100644
--- a/shell/src/main/java/org/apache/accumulo/shell/commands/NamespacesCommand.java
+++ b/shell/src/main/java/org/apache/accumulo/shell/commands/NamespacesCommand.java
@@ -19,7 +19,6 @@
 import java.io.IOException;
 import java.util.Iterator;
 import java.util.Map;
-import java.util.Map.Entry;
 import java.util.TreeMap;
 
 import org.apache.accumulo.core.client.AccumuloException;
@@ -31,7 +30,6 @@
 import org.apache.commons.cli.Option;
 import org.apache.commons.cli.Options;
 
-import com.google.common.base.Function;
 import com.google.common.collect.Iterators;
 
 public class NamespacesCommand extends Command {
@@ -43,18 +41,15 @@
   public int execute(final String fullCommand, final CommandLine cl, final Shell shellState) throws AccumuloException, AccumuloSecurityException, IOException {
     Map<String,String> namespaces = new TreeMap<>(shellState.getConnector().namespaceOperations().namespaceIdMap());
 
-    Iterator<String> it = Iterators.transform(namespaces.entrySet().iterator(), new Function<Entry<String,String>,String>() {
-      @Override
-      public String apply(Map.Entry<String,String> entry) {
-        String name = entry.getKey();
-        if (Namespaces.DEFAULT_NAMESPACE.equals(name))
-          name = DEFAULT_NAMESPACE_DISPLAY_NAME;
-        String id = entry.getValue();
-        if (cl.hasOption(namespaceIdOption.getOpt()))
-          return String.format(TablesCommand.NAME_AND_ID_FORMAT, name, id);
-        else
-          return name;
-      }
+    Iterator<String> it = Iterators.transform(namespaces.entrySet().iterator(), entry -> {
+      String name = entry.getKey();
+      if (Namespaces.DEFAULT_NAMESPACE.equals(name))
+        name = DEFAULT_NAMESPACE_DISPLAY_NAME;
+      String id = entry.getValue();
+      if (cl.hasOption(namespaceIdOption.getOpt()))
+        return String.format(TablesCommand.NAME_AND_ID_FORMAT, name, id);
+      else
+        return name;
     });
 
     shellState.printLines(it, !cl.hasOption(disablePaginationOpt.getOpt()));
diff --git a/shell/src/main/java/org/apache/accumulo/shell/commands/SetIterCommand.java b/shell/src/main/java/org/apache/accumulo/shell/commands/SetIterCommand.java
index fffdf21..6e89298 100644
--- a/shell/src/main/java/org/apache/accumulo/shell/commands/SetIterCommand.java
+++ b/shell/src/main/java/org/apache/accumulo/shell/commands/SetIterCommand.java
@@ -53,7 +53,7 @@
 public class SetIterCommand extends Command {
 
   private Option allScopeOpt, mincScopeOpt, majcScopeOpt, scanScopeOpt, nameOpt, priorityOpt;
-  private Option aggTypeOpt, ageoffTypeOpt, regexTypeOpt, versionTypeOpt, reqvisTypeOpt, classnameTypeOpt;
+  private Option ageoffTypeOpt, regexTypeOpt, versionTypeOpt, reqvisTypeOpt, classnameTypeOpt;
 
   @Override
   public int execute(final String fullCommand, final CommandLine cl, final Shell shellState) throws AccumuloException, AccumuloSecurityException,
@@ -66,12 +66,7 @@
 
     final Map<String,String> options = new HashMap<>();
     String classname = cl.getOptionValue(classnameTypeOpt.getOpt());
-    if (cl.hasOption(aggTypeOpt.getOpt())) {
-      Shell.log.warn("aggregators are deprecated");
-      @SuppressWarnings("deprecation")
-      String deprecatedClassName = org.apache.accumulo.core.iterators.AggregatingIterator.class.getName();
-      classname = deprecatedClassName;
-    } else if (cl.hasOption(regexTypeOpt.getOpt())) {
+    if (cl.hasOption(regexTypeOpt.getOpt())) {
       classname = RegExFilter.class.getName();
     } else if (cl.hasOption(ageoffTypeOpt.getOpt())) {
       classname = AgeOffFilter.class.getName();
@@ -119,14 +114,6 @@
 
     ScanCommand.ensureTserversCanLoadIterator(shellState, tableName, classname);
 
-    final String aggregatorClass = options.get("aggregatorClass");
-    @SuppressWarnings("deprecation")
-    String deprecatedAggregatorClassName = org.apache.accumulo.core.iterators.aggregation.Aggregator.class.getName();
-    if (aggregatorClass != null && !shellState.getConnector().tableOperations().testClassLoad(tableName, aggregatorClass, deprecatedAggregatorClassName)) {
-      throw new ShellCommandException(ErrorCode.INITIALIZATION_FAILURE, "Servers are unable to load " + aggregatorClass + " as type "
-          + deprecatedAggregatorClassName);
-    }
-
     for (Iterator<Entry<String,String>> i = options.entrySet().iterator(); i.hasNext();) {
       final Entry<String,String> entry = i.next();
       if (entry.getValue() == null || entry.getValue().isEmpty()) {
@@ -161,14 +148,6 @@
           + SortedKeyValueIterator.class.getName());
     }
 
-    final String aggregatorClass = options.get("aggregatorClass");
-    @SuppressWarnings("deprecation")
-    String deprecatedAggregatorClassName = org.apache.accumulo.core.iterators.aggregation.Aggregator.class.getName();
-    if (aggregatorClass != null && !shellState.getConnector().namespaceOperations().testClassLoad(namespace, aggregatorClass, deprecatedAggregatorClassName)) {
-      throw new ShellCommandException(ErrorCode.INITIALIZATION_FAILURE, "Servers are unable to load " + aggregatorClass + " as type "
-          + deprecatedAggregatorClassName);
-    }
-
     for (Iterator<Entry<String,String>> i = options.entrySet().iterator(); i.hasNext();) {
       final Entry<String,String> entry = i.next();
       if (entry.getValue() == null || entry.getValue().isEmpty()) {
@@ -357,14 +336,12 @@
     final OptionGroup typeGroup = new OptionGroup();
     classnameTypeOpt = new Option("class", "class-name", true, "a java class that implements SortedKeyValueIterator");
     classnameTypeOpt.setArgName("name");
-    aggTypeOpt = new Option("agg", "aggregator", false, "an aggregating type");
     regexTypeOpt = new Option("regex", "regular-expression", false, "a regex matching iterator");
     versionTypeOpt = new Option("vers", "version", false, "a versioning iterator");
     reqvisTypeOpt = new Option("reqvis", "require-visibility", false, "an iterator that omits entries with empty visibilities");
     ageoffTypeOpt = new Option("ageoff", "ageoff", false, "an aging off iterator");
 
     typeGroup.addOption(classnameTypeOpt);
-    typeGroup.addOption(aggTypeOpt);
     typeGroup.addOption(regexTypeOpt);
     typeGroup.addOption(versionTypeOpt);
     typeGroup.addOption(reqvisTypeOpt);
diff --git a/shell/src/main/java/org/apache/accumulo/shell/commands/SetScanIterCommand.java b/shell/src/main/java/org/apache/accumulo/shell/commands/SetScanIterCommand.java
deleted file mode 100644
index 2399d0e..0000000
--- a/shell/src/main/java/org/apache/accumulo/shell/commands/SetScanIterCommand.java
+++ /dev/null
@@ -1,113 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.shell.commands;
-
-import java.io.IOException;
-import java.util.ArrayList;
-import java.util.HashSet;
-import java.util.Iterator;
-import java.util.List;
-import java.util.Map;
-import java.util.Map.Entry;
-
-import org.apache.accumulo.core.client.AccumuloException;
-import org.apache.accumulo.core.client.AccumuloSecurityException;
-import org.apache.accumulo.core.client.IteratorSetting;
-import org.apache.accumulo.core.client.Scanner;
-import org.apache.accumulo.core.client.TableNotFoundException;
-import org.apache.accumulo.core.iterators.IteratorUtil.IteratorScope;
-import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.shell.Shell;
-import org.apache.accumulo.shell.ShellCommandException;
-import org.apache.commons.cli.CommandLine;
-import org.apache.commons.cli.Option;
-import org.apache.commons.cli.OptionGroup;
-import org.apache.commons.cli.Options;
-
-public class SetScanIterCommand extends SetIterCommand {
-  @Override
-  public int execute(final String fullCommand, final CommandLine cl, final Shell shellState) throws AccumuloException, AccumuloSecurityException,
-      TableNotFoundException, IOException, ShellCommandException {
-    Shell.log.warn("Deprecated, use " + new SetShellIterCommand().getName());
-    return super.execute(fullCommand, cl, shellState);
-  }
-
-  @Override
-  protected void setTableProperties(final CommandLine cl, final Shell shellState, final int priority, final Map<String,String> options, final String classname,
-      final String name) throws AccumuloException, AccumuloSecurityException, ShellCommandException, TableNotFoundException {
-
-    final String tableName = OptUtil.getTableOpt(cl, shellState);
-
-    ScanCommand.ensureTserversCanLoadIterator(shellState, tableName, classname);
-
-    for (Iterator<Entry<String,String>> i = options.entrySet().iterator(); i.hasNext();) {
-      final Entry<String,String> entry = i.next();
-      if (entry.getValue() == null || entry.getValue().isEmpty()) {
-        i.remove();
-      }
-    }
-
-    List<IteratorSetting> tableScanIterators = shellState.scanIteratorOptions.get(tableName);
-    if (tableScanIterators == null) {
-      tableScanIterators = new ArrayList<>();
-      shellState.scanIteratorOptions.put(tableName, tableScanIterators);
-    }
-    final IteratorSetting setting = new IteratorSetting(priority, name, classname);
-    setting.addOptions(options);
-
-    // initialize a scanner to ensure the new setting does not conflict with existing settings
-    final String user = shellState.getConnector().whoami();
-    final Authorizations auths = shellState.getConnector().securityOperations().getUserAuthorizations(user);
-    final Scanner scanner = shellState.getConnector().createScanner(tableName, auths);
-    for (IteratorSetting s : tableScanIterators) {
-      scanner.addScanIterator(s);
-    }
-    scanner.addScanIterator(setting);
-
-    // if no exception has been thrown, it's safe to add it to the list
-    tableScanIterators.add(setting);
-    Shell.log.debug("Scan iterators :" + shellState.scanIteratorOptions.get(tableName));
-  }
-
-  @Override
-  public String description() {
-    return "sets a table-specific scan iterator for this shell session";
-  }
-
-  @Override
-  public Options getOptions() {
-    // Remove the options that specify which type of iterator this is, since
-    // they are all scan iterators with this command.
-    final HashSet<OptionGroup> groups = new HashSet<>();
-    final Options parentOptions = super.getOptions();
-    final Options modifiedOptions = new Options();
-    for (Iterator<?> it = parentOptions.getOptions().iterator(); it.hasNext();) {
-      Option o = (Option) it.next();
-      if (!IteratorScope.majc.name().equals(o.getOpt()) && !IteratorScope.minc.name().equals(o.getOpt()) && !IteratorScope.scan.name().equals(o.getOpt())) {
-        modifiedOptions.addOption(o);
-        OptionGroup group = parentOptions.getOptionGroup(o);
-        if (group != null)
-          groups.add(group);
-      }
-    }
-    for (OptionGroup group : groups) {
-      modifiedOptions.addOptionGroup(group);
-    }
-    return modifiedOptions;
-  }
-
-}
diff --git a/shell/src/main/java/org/apache/accumulo/shell/commands/TablesCommand.java b/shell/src/main/java/org/apache/accumulo/shell/commands/TablesCommand.java
index 397b450..8bf96e7 100644
--- a/shell/src/main/java/org/apache/accumulo/shell/commands/TablesCommand.java
+++ b/shell/src/main/java/org/apache/accumulo/shell/commands/TablesCommand.java
@@ -19,7 +19,6 @@
 import java.io.IOException;
 import java.util.Iterator;
 import java.util.Map;
-import java.util.Map.Entry;
 import java.util.TreeMap;
 
 import org.apache.accumulo.core.client.AccumuloException;
@@ -33,8 +32,6 @@
 import org.apache.commons.cli.Options;
 import org.apache.commons.collections.MapUtils;
 
-import com.google.common.base.Function;
-import com.google.common.base.Predicate;
 import com.google.common.collect.Iterators;
 import com.google.common.collect.Maps;
 
@@ -54,28 +51,20 @@
     Map<String,String> tables = shellState.getConnector().tableOperations().tableIdMap();
 
     // filter only specified namespace
-    tables = Maps.filterKeys(tables, new Predicate<String>() {
-      @Override
-      public boolean apply(String tableName) {
-        return namespace == null || Tables.qualify(tableName).getFirst().equals(namespace);
-      }
-    });
+    tables = Maps.filterKeys(tables, tableName -> namespace == null || Tables.qualify(tableName).getFirst().equals(namespace));
 
     final boolean sortByTableId = cl.hasOption(sortByTableIdOption.getOpt());
-    tables = new TreeMap<>((sortByTableId ? MapUtils.invertMap(tables) : tables));
+    tables = new TreeMap<String,String>((sortByTableId ? MapUtils.invertMap(tables) : tables));
 
-    Iterator<String> it = Iterators.transform(tables.entrySet().iterator(), new Function<Entry<String,String>,String>() {
-      @Override
-      public String apply(Map.Entry<String,String> entry) {
-        String tableName = String.valueOf(sortByTableId ? entry.getValue() : entry.getKey());
-        String tableId = String.valueOf(sortByTableId ? entry.getKey() : entry.getValue());
-        if (namespace != null)
-          tableName = Tables.qualify(tableName).getSecond();
-        if (cl.hasOption(tableIdOption.getOpt()))
-          return String.format(NAME_AND_ID_FORMAT, tableName, tableId);
-        else
-          return tableName;
-      }
+    Iterator<String> it = Iterators.transform(tables.entrySet().iterator(), entry -> {
+      String tableName = String.valueOf(sortByTableId ? entry.getValue() : entry.getKey());
+      String tableId = String.valueOf(sortByTableId ? entry.getKey() : entry.getValue());
+      if (namespace != null)
+        tableName = Tables.qualify(tableName).getSecond();
+      if (cl.hasOption(tableIdOption.getOpt()))
+        return String.format(NAME_AND_ID_FORMAT, tableName, tableId);
+      else
+        return tableName;
     });
 
     shellState.printLines(it, !cl.hasOption(disablePaginationOpt.getOpt()));
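The TablesCommand hunk above gives `Maps.filterKeys` the same lambda treatment. A stream-based sketch of the namespace filter, approximating Accumulo's `Tables.qualify` with a simple `"ns."` prefix check (an assumption for illustration only):

```java
import java.util.Map;
import java.util.TreeMap;
import java.util.stream.Collectors;

public class FilterTablesSketch {
  static Map<String,String> filterByNamespace(Map<String,String> tables, String namespace) {
    return tables.entrySet().stream()
        // keep everything when no namespace was given, else match the "ns." prefix
        .filter(e -> namespace == null || e.getKey().startsWith(namespace + "."))
        .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue, (a, b) -> a, TreeMap::new));
  }

  public static void main(String[] args) {
    Map<String,String> tables = Map.of("ns1.t1", "a", "ns2.t2", "b");
    System.out.println(filterByNamespace(tables, "ns1").keySet()); // [ns1.t1]
  }
}
```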
diff --git a/shell/src/main/java/org/apache/accumulo/shell/mock/MockShell.java b/shell/src/main/java/org/apache/accumulo/shell/mock/MockShell.java
deleted file mode 100644
index ebc92f7..0000000
--- a/shell/src/main/java/org/apache/accumulo/shell/mock/MockShell.java
+++ /dev/null
@@ -1,159 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.shell.mock;
-
-import static java.nio.charset.StandardCharsets.UTF_8;
-
-import java.io.ByteArrayInputStream;
-import java.io.IOException;
-import java.io.InputStream;
-import java.io.OutputStream;
-
-import jline.console.ConsoleReader;
-
-import org.apache.accumulo.core.client.AccumuloException;
-import org.apache.accumulo.core.client.AccumuloSecurityException;
-import org.apache.accumulo.core.client.TableNotFoundException;
-import org.apache.accumulo.shell.Shell;
-import org.apache.accumulo.shell.ShellOptionsJC;
-import org.apache.commons.cli.CommandLine;
-import org.apache.commons.vfs2.FileSystemException;
-
-/**
- * An Accumulo Shell implementation that allows a developer to attach an InputStream and Writer to the Shell for testing purposes.
- *
- * @deprecated since 1.8.0; use MiniAccumuloCluster or a standard mock framework instead.
- */
-@Deprecated
-public class MockShell extends Shell {
-  private static final String NEWLINE = "\n";
-
-  protected InputStream in;
-  protected OutputStream out;
-
-  public MockShell(InputStream in, OutputStream out) throws IOException {
-    super();
-    this.in = in;
-    this.out = out;
-  }
-
-  @Override
-  public boolean config(String... args) throws IOException {
-    // If configuring the shell failed, fail quickly
-    if (!super.config(args)) {
-      return false;
-    }
-
-    // Update the ConsoleReader with the input and output "redirected"
-    try {
-      this.reader = new ConsoleReader(in, out);
-    } catch (Exception e) {
-      printException(e);
-      return false;
-    }
-
-    // Don't need this for testing purposes
-    this.reader.setHistoryEnabled(false);
-    this.reader.setPaginationEnabled(false);
-
-    // Make the parsing from the client easier;
-    this.verbose = false;
-    return true;
-  }
-
-  @Override
-  protected void setInstance(ShellOptionsJC options) {
-    // We always want a MockInstance for this test
-    instance = new org.apache.accumulo.core.client.mock.MockInstance();
-  }
-
-  @Override
-  public int start() throws IOException {
-    String input;
-    if (isVerbose())
-      printInfo();
-
-    if (execFile != null) {
-      java.util.Scanner scanner = new java.util.Scanner(execFile, UTF_8.name());
-      try {
-        while (scanner.hasNextLine() && !hasExited()) {
-          execCommand(scanner.nextLine(), true, isVerbose());
-        }
-      } finally {
-        scanner.close();
-      }
-    } else if (execCommand != null) {
-      for (String command : execCommand.split("\n")) {
-        execCommand(command, true, isVerbose());
-      }
-      return exitCode;
-    }
-
-    while (true) {
-      if (hasExited())
-        return exitCode;
-
-      reader.setPrompt(getDefaultPrompt());
-      input = reader.readLine();
-      if (input == null) {
-        reader.println();
-        return exitCode;
-      } // user canceled
-
-      execCommand(input, false, false);
-    }
-  }
-
-  /**
-   * @param in
-   *          the in to set
-   */
-  public void setConsoleInputStream(InputStream in) {
-    this.in = in;
-  }
-
-  /**
-   * @param out
-   *          the output stream to set
-   */
-  public void setConsoleWriter(OutputStream out) {
-    this.out = out;
-  }
-
-  @Override
-  public ClassLoader getClassLoader(final CommandLine cl, final Shell shellState) throws AccumuloException, TableNotFoundException, AccumuloSecurityException,
-      IOException, FileSystemException {
-    return MockShell.class.getClassLoader();
-  }
-
-  /**
-   * Convenience method to create the byte-array to hand to the console
-   *
-   * @param commands
-   *          An array of commands to run
-   * @return A byte[] input stream which can be handed to the console.
-   */
-  public static ByteArrayInputStream makeCommands(String... commands) {
-    StringBuilder sb = new StringBuilder(commands.length * 8);
-
-    for (String command : commands) {
-      sb.append(command).append(NEWLINE);
-    }
-
-    return new ByteArrayInputStream(sb.toString().getBytes(UTF_8));
-  }
-}
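The deleted `MockShell.makeCommands` helper simply joined commands with newlines into a `ByteArrayInputStream` that could be handed to the console reader. A standalone sketch of the same idea (the class name here is illustrative, not part of the Accumulo API):

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

// Illustrative stand-in for the removed MockShell.makeCommands helper.
public class CommandStream {
  // Join each command with a trailing newline so a console reader
  // consumes them as if a user had typed one command per line.
  public static ByteArrayInputStream makeCommands(String... commands) {
    StringBuilder sb = new StringBuilder(commands.length * 8);
    for (String command : commands) {
      sb.append(command).append('\n');
    }
    return new ByteArrayInputStream(sb.toString().getBytes(StandardCharsets.UTF_8));
  }
}
```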
diff --git a/shell/src/test/java/org/apache/accumulo/shell/ShellConfigTest.java b/shell/src/test/java/org/apache/accumulo/shell/ShellConfigTest.java
index 8bef14d..5e8736c 100644
--- a/shell/src/test/java/org/apache/accumulo/shell/ShellConfigTest.java
+++ b/shell/src/test/java/org/apache/accumulo/shell/ShellConfigTest.java
@@ -24,29 +24,48 @@
 import java.io.FileDescriptor;
 import java.io.FileInputStream;
 import java.io.IOException;
+import java.io.OutputStream;
 import java.io.PrintStream;
 import java.nio.file.Files;
 import java.util.HashMap;
 import java.util.Map;
 
-import jline.console.ConsoleReader;
-
 import org.apache.accumulo.core.client.ClientConfiguration;
 import org.apache.accumulo.core.client.security.tokens.PasswordToken;
 import org.apache.accumulo.core.conf.AccumuloConfiguration;
 import org.apache.accumulo.core.conf.ConfigurationCopy;
 import org.apache.accumulo.core.conf.Property;
-import org.apache.accumulo.shell.ShellTest.TestOutputStream;
 import org.apache.log4j.Level;
 import org.junit.After;
 import org.junit.Before;
+import org.junit.Ignore;
 import org.junit.Test;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 import com.beust.jcommander.ParameterException;
 
+import jline.console.ConsoleReader;
+
 public class ShellConfigTest {
+
+  public static class TestOutputStream extends OutputStream {
+    StringBuilder sb = new StringBuilder();
+
+    @Override
+    public void write(int b) throws IOException {
+      sb.append((char) (0xff & b));
+    }
+
+    public String get() {
+      return sb.toString();
+    }
+
+    public void clear() {
+      sb.setLength(0);
+    }
+  }
+
   TestOutputStream output;
   Shell shell;
   PrintStream out;
@@ -102,17 +121,20 @@
     assertTrue("Did not print usage", output.get().startsWith("Usage"));
   }
 
+  @Ignore
   @Test
   public void testTokenWithoutOptions() throws IOException {
     assertFalse(shell.config(args("--fake", "-tc", PasswordToken.class.getName())));
     assertFalse(output.get().contains(ParameterException.class.getName()));
   }
 
+  @Ignore
   @Test
   public void testTokenAndOption() throws IOException {
     assertTrue(shell.config(args("--fake", "-tc", PasswordToken.class.getName(), "-u", "foo", "-l", "password=foo")));
   }
 
+  @Ignore
   @Test
   public void testTokenAndOptionAndPassword() throws IOException {
     assertFalse(shell.config(args("--fake", "-tc", PasswordToken.class.getName(), "-l", "password=foo", "-p", "bar")));
diff --git a/shell/src/test/java/org/apache/accumulo/shell/ShellSetInstanceTest.java b/shell/src/test/java/org/apache/accumulo/shell/ShellSetInstanceTest.java
index 428481a..6dae0b4 100644
--- a/shell/src/test/java/org/apache/accumulo/shell/ShellSetInstanceTest.java
+++ b/shell/src/test/java/org/apache/accumulo/shell/ShellSetInstanceTest.java
@@ -34,8 +34,6 @@
 import java.util.List;
 import java.util.UUID;
 
-import jline.console.ConsoleReader;
-
 import org.apache.accumulo.core.client.ClientConfiguration;
 import org.apache.accumulo.core.client.ClientConfiguration.ClientProperty;
 import org.apache.accumulo.core.client.ZooKeeperInstance;
@@ -49,7 +47,6 @@
 import org.easymock.EasyMock;
 import org.junit.After;
 import org.junit.AfterClass;
-import org.junit.Assert;
 import org.junit.Assume;
 import org.junit.Before;
 import org.junit.BeforeClass;
@@ -59,10 +56,17 @@
 import org.powermock.core.classloader.annotations.PrepareForTest;
 import org.powermock.modules.junit4.PowerMockRunner;
 
+import jline.console.ConsoleReader;
+
 @RunWith(PowerMockRunner.class)
 @PowerMockIgnore("javax.security.*")
 @PrepareForTest({Shell.class, ZooUtil.class, ConfigSanityCheck.class})
 public class ShellSetInstanceTest {
+  @SuppressWarnings("deprecation")
+  private static Property INSTANCE_DFS_DIR = Property.INSTANCE_DFS_DIR;
+  @SuppressWarnings("deprecation")
+  private static Property INSTANCE_DFS_URI = Property.INSTANCE_DFS_URI;
+
   public static class TestOutputStream extends OutputStream {
     StringBuilder sb = new StringBuilder();
 
@@ -121,17 +125,6 @@
     SiteConfiguration.clearInstance();
   }
 
-  @Deprecated
-  @Test
-  public void testSetInstance_Fake() throws Exception {
-    ShellOptionsJC opts = createMock(ShellOptionsJC.class);
-    expect(opts.isFake()).andReturn(true);
-    replay(opts);
-
-    shell.setInstance(opts);
-    Assert.assertTrue(shell.getInstance() instanceof org.apache.accumulo.core.client.mock.MockInstance);
-  }
-
   @Test
   public void testSetInstance_HdfsZooInstance_Explicit() throws Exception {
     testSetInstance_HdfsZooInstance(true, false, false);
@@ -155,7 +148,6 @@
   private void testSetInstance_HdfsZooInstance(boolean explicitHdfs, boolean onlyInstance, boolean onlyHosts) throws Exception {
     ClientConfiguration clientConf = createMock(ClientConfiguration.class);
     ShellOptionsJC opts = createMock(ShellOptionsJC.class);
-    expect(opts.isFake()).andReturn(false);
     expect(opts.getClientConfiguration()).andReturn(clientConf);
     expect(opts.isHdfsZooInstance()).andReturn(explicitHdfs);
     if (!explicitHdfs) {
@@ -191,14 +183,10 @@
     }
     if (!onlyInstance) {
       expect(clientConf.containsKey(Property.INSTANCE_VOLUMES.getKey())).andReturn(false).atLeastOnce();
-      @SuppressWarnings("deprecation")
-      String INSTANCE_DFS_DIR_KEY = Property.INSTANCE_DFS_DIR.getKey();
-      @SuppressWarnings("deprecation")
-      String INSTANCE_DFS_URI_KEY = Property.INSTANCE_DFS_URI.getKey();
-      expect(clientConf.containsKey(INSTANCE_DFS_DIR_KEY)).andReturn(true).atLeastOnce();
-      expect(clientConf.containsKey(INSTANCE_DFS_URI_KEY)).andReturn(true).atLeastOnce();
-      expect(clientConf.getString(INSTANCE_DFS_URI_KEY)).andReturn("hdfs://nn1").atLeastOnce();
-      expect(clientConf.getString(INSTANCE_DFS_DIR_KEY)).andReturn("/dfs").atLeastOnce();
+      expect(clientConf.containsKey(INSTANCE_DFS_DIR.getKey())).andReturn(true).atLeastOnce();
+      expect(clientConf.containsKey(INSTANCE_DFS_URI.getKey())).andReturn(true).atLeastOnce();
+      expect(clientConf.getString(INSTANCE_DFS_URI.getKey())).andReturn("hdfs://nn1").atLeastOnce();
+      expect(clientConf.getString(INSTANCE_DFS_DIR.getKey())).andReturn("/dfs").atLeastOnce();
     }
 
     UUID randomUUID = null;
@@ -233,7 +221,6 @@
   private void testSetInstance_ZKInstance(boolean dashZ) throws Exception {
     ClientConfiguration clientConf = createMock(ClientConfiguration.class);
     ShellOptionsJC opts = createMock(ShellOptionsJC.class);
-    expect(opts.isFake()).andReturn(false);
     expect(opts.getClientConfiguration()).andReturn(clientConf);
     expect(opts.isHdfsZooInstance()).andReturn(false);
     expect(clientConf.getKeys()).andReturn(Arrays.asList(ClientProperty.INSTANCE_NAME.getKey(), ClientProperty.INSTANCE_ZK_HOST.getKey()).iterator());
diff --git a/shell/src/test/java/org/apache/accumulo/shell/ShellTest.java b/shell/src/test/java/org/apache/accumulo/shell/ShellTest.java
deleted file mode 100644
index dc902ce..0000000
--- a/shell/src/test/java/org/apache/accumulo/shell/ShellTest.java
+++ /dev/null
@@ -1,394 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.shell;
-
-import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertTrue;
-
-import java.io.File;
-import java.io.IOException;
-import java.io.InputStream;
-import java.io.OutputStream;
-import java.nio.file.Files;
-import java.text.DateFormat;
-import java.text.SimpleDateFormat;
-import java.util.Arrays;
-import java.util.Date;
-import java.util.List;
-import java.util.TimeZone;
-
-import org.apache.log4j.Level;
-import org.junit.After;
-import org.junit.Before;
-import org.junit.Test;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-import jline.console.ConsoleReader;
-
-public class ShellTest {
-  private static final Logger log = LoggerFactory.getLogger(ShellTest.class);
-
-  public static class TestOutputStream extends OutputStream {
-    StringBuilder sb = new StringBuilder();
-
-    @Override
-    public void write(int b) throws IOException {
-      sb.append((char) (0xff & b));
-    }
-
-    public String get() {
-      return sb.toString();
-    }
-
-    public void clear() {
-      sb.setLength(0);
-    }
-  }
-
-  public static class StringInputStream extends InputStream {
-    private String source = "";
-    private int offset = 0;
-
-    @Override
-    public int read() throws IOException {
-      if (offset == source.length())
-        return '\n';
-      else
-        return source.charAt(offset++);
-    }
-
-    public void set(String other) {
-      source = other;
-      offset = 0;
-    }
-  }
-
-  private StringInputStream input;
-  private TestOutputStream output;
-  private Shell shell;
-  private File config;
-
-  void execExpectList(String cmd, boolean expectGoodExit, List<String> expectedStrings) throws IOException {
-    exec(cmd);
-    if (expectGoodExit) {
-      assertGoodExit("", true);
-    } else {
-      assertBadExit("", true);
-    }
-
-    for (String expectedString : expectedStrings) {
-      assertTrue(expectedString + " was not present in " + output.get(), output.get().contains(expectedString));
-    }
-  }
-
-  void exec(String cmd) throws IOException {
-    output.clear();
-    shell.execCommand(cmd, true, true);
-  }
-
-  void exec(String cmd, boolean expectGoodExit) throws IOException {
-    exec(cmd);
-    if (expectGoodExit)
-      assertGoodExit("", true);
-    else
-      assertBadExit("", true);
-  }
-
-  void exec(String cmd, boolean expectGoodExit, String expectString) throws IOException {
-    exec(cmd, expectGoodExit, expectString, true);
-  }
-
-  void exec(String cmd, boolean expectGoodExit, String expectString, boolean stringPresent) throws IOException {
-    exec(cmd);
-    if (expectGoodExit)
-      assertGoodExit(expectString, stringPresent);
-    else
-      assertBadExit(expectString, stringPresent);
-  }
-
-  @Before
-  public void setup() throws IOException {
-    TimeZone.setDefault(TimeZone.getTimeZone("UTC"));
-    Shell.log.setLevel(Level.OFF);
-    output = new TestOutputStream();
-    input = new StringInputStream();
-    config = Files.createTempFile(null, null).toFile();
-    shell = new Shell(new ConsoleReader(input, output));
-    shell.setLogErrorsToConsole();
-    shell.config("--config-file", config.toString(), "--fake", "-u", "test", "-p", "secret");
-  }
-
-  @After
-  public void teardown() {
-    if (config.exists()) {
-      if (!config.delete()) {
-        log.error("Unable to delete {}", config);
-      }
-    }
-    shell.shutdown();
-  }
-
-  void assertGoodExit(String s, boolean stringPresent) {
-    Shell.log.debug(output.get());
-    assertEquals(shell.getExitCode(), 0);
-    if (s.length() > 0)
-      assertEquals(s + " present in " + output.get() + " was not " + stringPresent, stringPresent, output.get().contains(s));
-  }
-
-  void assertBadExit(String s, boolean stringPresent) {
-    Shell.log.debug(output.get());
-    assertTrue(shell.getExitCode() > 0);
-    if (s.length() > 0)
-      assertEquals(s + " present in " + output.get() + " was not " + stringPresent, stringPresent, output.get().contains(s));
-    shell.resetExitCode();
-  }
-
-  @Test
-  public void aboutTest() throws IOException {
-    Shell.log.debug("Starting about test -----------------------------------");
-    exec("about", true, "Shell - Apache Accumulo Interactive Shell");
-    exec("about -v", true, "Current user:");
-    exec("about arg", false, "java.lang.IllegalArgumentException: Expected 0 arguments");
-  }
-
-  @Test
-  public void addGetSplitsTest() throws IOException {
-    Shell.log.debug("Starting addGetSplits test ----------------------------");
-    exec("addsplits arg", false, "java.lang.IllegalStateException: Not in a table context");
-    exec("createtable test", true);
-    exec("addsplits 1 \\x80", true);
-    exec("getsplits", true, "1\n\\x80");
-    exec("getsplits -m 1", true, "1");
-    exec("getsplits -b64", true, "MQ==\ngA==");
-    exec("deletetable test -f", true, "Table: [test] has been deleted");
-  }
-
-  @Test
-  public void insertDeleteScanTest() throws IOException {
-    Shell.log.debug("Starting insertDeleteScan test ------------------------");
-    exec("insert r f q v", false, "java.lang.IllegalStateException: Not in a table context");
-    exec("delete r f q", false, "java.lang.IllegalStateException: Not in a table context");
-    exec("createtable test", true);
-    exec("insert r f q v", true);
-    exec("scan", true, "r f:q []    v");
-    exec("delete r f q", true);
-    exec("scan", true, "r f:q []    v", false);
-    exec("insert \\x90 \\xa0 \\xb0 \\xc0\\xd0\\xe0\\xf0", true);
-    exec("scan", true, "\\x90 \\xA0:\\xB0 []    \\xC0\\xD0");
-    exec("scan -f 2", true, "\\x90 \\xA0:\\xB0 []    \\xC0\\xD0");
-    exec("scan -f 2", true, "\\x90 \\xA0:\\xB0 []    \\xC0\\xD0\\xE0", false);
-    exec("scan -b \\x90 -e \\x90 -c \\xA0", true, "\\x90 \\xA0:\\xB0 []    \\xC0");
-    exec("scan -b \\x90 -e \\x90 -c \\xA0:\\xB0", true, "\\x90 \\xA0:\\xB0 []    \\xC0");
-    exec("scan -b \\x90 -be", true, "\\x90 \\xA0:\\xB0 []    \\xC0", false);
-    exec("scan -e \\x90 -ee", true, "\\x90 \\xA0:\\xB0 []    \\xC0", false);
-    exec("scan -b \\x90\\x00", true, "\\x90 \\xA0:\\xB0 []    \\xC0", false);
-    exec("scan -e \\x8f", true, "\\x90 \\xA0:\\xB0 []    \\xC0", false);
-    exec("delete \\x90 \\xa0 \\xb0", true);
-    exec("scan", true, "\\x90 \\xA0:\\xB0 []    \\xC0", false);
-    exec("deletetable test -f", true, "Table: [test] has been deleted");
-  }
-
-  @Test
-  public void deleteManyTest() throws IOException {
-    exec("deletemany", false, "java.lang.IllegalStateException: Not in a table context");
-    exec("createtable test", true);
-    exec("deletemany", true, "\n");
-
-    exec("insert 0 0 0 0 -ts 0");
-    exec("insert 0 0 0 0 -l 0 -ts 0");
-    exec("insert 1 1 1 1 -ts 1");
-    exec("insert 2 2 2 2 -ts 2");
-
-    // prompts for delete, and rejects by default
-    exec("deletemany", true, "[SKIPPED] 0 0:0 []");
-    exec("deletemany -r 0", true, "[SKIPPED] 0 0:0 []");
-    exec("deletemany -r 0 -f", true, "[DELETED] 0 0:0 []");
-
-    // with auths, can delete the other record
-    exec("setauths -s 0");
-    exec("deletemany -r 0 -f", true, "[DELETED] 0 0:0 [0]");
-
-    // delete will show the timestamp
-    exec("deletemany -r 1 -f -st", true, "[DELETED] 1 1:1 [] 1");
-
-    // DeleteManyCommand has its own Formatter (DeleterFormatter), so it does not honor the -fm flag
-    exec("deletemany -r 2 -f -st -fm org.apache.accumulo.core.util.format.DateStringFormatter", true, "[DELETED] 2 2:2 [] 2");
-
-    exec("setauths -c ", true);
-    exec("deletetable test -f", true, "Table: [test] has been deleted");
-  }
-
-  @Test
-  public void authsTest() throws Exception {
-    Shell.log.debug("Starting auths test --------------------------");
-    exec("setauths x,y,z", false, "Missing required option");
-    exec("setauths -s x,y,z -u notauser", false, "user does not exist");
-    exec("setauths -s y,z,x", true);
-    exec("getauths -u notauser", false, "user does not exist");
-    execExpectList("getauths", true, Arrays.asList("x", "y", "z"));
-    exec("addauths -u notauser", false, "Missing required option");
-    exec("addauths -u notauser -s foo", false, "user does not exist");
-    exec("addauths -s a", true);
-    execExpectList("getauths", true, Arrays.asList("x", "y", "z", "a"));
-    exec("setauths -c", true);
-  }
-
-  @Test
-  public void userTest() throws Exception {
-    Shell.log.debug("Starting user test --------------------------");
-    // Test cannot be done via junit because createuser only prompts for password
-    // exec("createuser root", false, "user exists");
-  }
-
-  @Test
-  public void duContextTest() throws Exception {
-    Shell.log.debug("Starting du context test --------------------------");
-    exec("createtable t", true);
-    exec("du", true, "0 [t]");
-    exec("deletetable t -f", true, "Table: [t] has been deleted");
-  }
-
-  @Test
-  public void duTest() throws IOException {
-    Shell.log.debug("Starting DU test --------------------------");
-    exec("createtable t", true);
-    exec("du t", true, "0 [t]");
-    exec("deletetable t -f", true, "Table: [t] has been deleted");
-  }
-
-  @Test
-  public void duPatternTest() throws IOException {
-    Shell.log.debug("Starting DU with pattern test --------------------------");
-    exec("createtable t", true);
-    exec("createtable tt", true);
-    exec("du -p t.*", true, "0 [t, tt]");
-    exec("deletetable t -f", true, "Table: [t] has been deleted");
-    exec("deletetable tt -f", true, "Table: [tt] has been deleted");
-  }
-
-  @Test
-  public void scanTimestampTest() throws IOException {
-    Shell.log.debug("Starting scanTimestamp test ------------------------");
-    exec("createtable test", true);
-    exec("insert r f q v -ts 0", true);
-    exec("scan -st", true, "r f:q [] 0    v");
-    exec("scan -st -f 0", true, " : [] 0   ");
-    exec("deletemany -f", true);
-    exec("deletetable test -f", true, "Table: [test] has been deleted");
-  }
-
-  @Test
-  public void scanFewTest() throws IOException {
-    Shell.log.debug("Starting scanFew test ------------------------");
-    exec("createtable test", true);
-    // historically, showing few did not pertain to ColVis or Timestamp
-    exec("insert 1 123 123456 -l '12345678' -ts 123456789 1234567890", true);
-    exec("setauths -s 12345678", true);
-    String expected = "1 123:123456 [12345678] 123456789    1234567890";
-    String expectedFew = "1 123:12345 [12345678] 123456789    12345";
-    exec("scan -st", true, expected);
-    exec("scan -st -f 5", true, expectedFew);
-    // also prove that BinaryFormatter behaves same as the default
-    exec("scan -st -fm org.apache.accumulo.core.util.format.BinaryFormatter", true, expected);
-    exec("scan -st -f 5 -fm org.apache.accumulo.core.util.format.BinaryFormatter", true, expectedFew);
-    exec("setauths -c", true);
-    exec("deletetable test -f", true, "Table: [test] has been deleted");
-  }
-
-  @Test
-  public void scanDateStringFormatterTest() throws IOException {
-    Shell.log.debug("Starting scan dateStringFormatter test --------------------------");
-    exec("createtable t", true);
-    exec("insert r f q v -ts 0", true);
-    @SuppressWarnings("deprecation")
-    DateFormat dateFormat = new SimpleDateFormat(org.apache.accumulo.core.util.format.DateStringFormatter.DATE_FORMAT);
-    String expected = String.format("r f:q [] %s    v", dateFormat.format(new Date(0)));
-    // historically, showing few did not pertain to ColVis or Timestamp
-    String expectedFew = expected;
-    String expectedNoTimestamp = String.format("r f:q []    v");
-    exec("scan -fm org.apache.accumulo.core.util.format.DateStringFormatter -st", true, expected);
-    exec("scan -fm org.apache.accumulo.core.util.format.DateStringFormatter -st -f 1000", true, expected);
-    exec("scan -fm org.apache.accumulo.core.util.format.DateStringFormatter -st -f 5", true, expectedFew);
-    exec("scan -fm org.apache.accumulo.core.util.format.DateStringFormatter", true, expectedNoTimestamp);
-    exec("deletetable t -f", true, "Table: [t] has been deleted");
-  }
-
-  @Test
-  public void grepTest() throws IOException {
-    Shell.log.debug("Starting grep test --------------------------");
-    exec("grep", false, "java.lang.IllegalStateException: Not in a table context");
-    exec("createtable t", true);
-    exec("setauths -s vis", true);
-    exec("insert r f q v -ts 0 -l vis", true);
-
-    String expected = "r f:q [vis]    v";
-    String expectedTimestamp = "r f:q [vis] 0    v";
-    exec("grep", false, "No terms specified");
-    exec("grep non_matching_string", true, "");
-    // historically, showing few did not pertain to ColVis or Timestamp
-    exec("grep r", true, expected);
-    exec("grep r -f 1", true, expected);
-    exec("grep r -st", true, expectedTimestamp);
-    exec("grep r -st -f 1", true, expectedTimestamp);
-    exec("setauths -c", true);
-    exec("deletetable t -f", true, "Table: [t] has been deleted");
-  }
-
-  @Test
-  public void commentTest() throws IOException {
-    Shell.log.debug("Starting comment test --------------------------");
-    exec("#", true, "Unknown command", false);
-    exec("# foo", true, "Unknown command", false);
-    exec("- foo", true, "Unknown command", true);
-  }
-
-  @Test
-  public void execFileTest() throws IOException {
-    Shell.log.debug("Starting exec file test --------------------------");
-    shell.config("--config-file", config.toString(), "--fake", "-u", "test", "-p", "secret", "-f", "src/test/resources/shelltest.txt");
-    assertEquals(0, shell.start());
-    assertGoodExit("Unknown command", false);
-  }
-
-  @Test
-  public void setIterTest() throws IOException {
-    Shell.log.debug("Starting setiter test --------------------------");
-    exec("createtable t", true);
-
-    String cmdJustClass = "setiter -class VersioningIterator -p 1";
-    exec(cmdJustClass, false, "java.lang.IllegalArgumentException", false);
-    exec(cmdJustClass, false, "fully qualified package name", true);
-
-    String cmdFullPackage = "setiter -class o.a.a.foo -p 1";
-    exec(cmdFullPackage, false, "java.lang.IllegalArgumentException", false);
-    exec(cmdFullPackage, false, "class not found", true);
-
-    String cmdNoOption = "setiter -class java.lang.String -p 1";
-    exec(cmdNoOption, false, "loaded successfully but does not implement SortedKeyValueIterator", true);
-
-    input.set("\n\n");
-    exec("setiter -scan -class org.apache.accumulo.core.iterators.ColumnFamilyCounter -p 30 -name foo", true);
-
-    input.set("bar\nname value\n");
-    exec("setiter -scan -class org.apache.accumulo.core.iterators.ColumnFamilyCounter -p 31", true);
-
-    // TODO can't verify this as config -t fails, functionality verified in ShellServerIT
-
-    exec("deletetable t -f", true, "Table: [t] has been deleted");
-  }
-}
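The removed ShellTest carried two small stream test doubles; `TestOutputStream` is re-added verbatim in ShellConfigTest above. Extracted here as a self-contained sketch (the enclosing class name is illustrative): the output stream accumulates bytes as chars, and the input stream replays a scripted string, then yields newlines forever so a blocked console reader always receives an empty line.

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class ShellStreams {
  public static class TestOutputStream extends OutputStream {
    private final StringBuilder sb = new StringBuilder();

    @Override
    public void write(int b) throws IOException {
      sb.append((char) (0xff & b)); // mask to the low byte before widening
    }

    public String get() { return sb.toString(); }

    public void clear() { sb.setLength(0); }
  }

  public static class StringInputStream extends InputStream {
    private String source = "";
    private int offset = 0;

    @Override
    public int read() throws IOException {
      // After the scripted input is exhausted, keep returning '\n'.
      if (offset == source.length())
        return '\n';
      return source.charAt(offset++);
    }

    public void set(String other) {
      source = other;
      offset = 0;
    }
  }
}
```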
diff --git a/shell/src/test/java/org/apache/accumulo/shell/ShellUtilTest.java b/shell/src/test/java/org/apache/accumulo/shell/ShellUtilTest.java
index 28e56ab..00b5205 100644
--- a/shell/src/test/java/org/apache/accumulo/shell/ShellUtilTest.java
+++ b/shell/src/test/java/org/apache/accumulo/shell/ShellUtilTest.java
@@ -22,9 +22,9 @@
 import java.io.File;
 import java.io.FileNotFoundException;
 import java.io.IOException;
+import java.util.Base64;
 import java.util.List;
 
-import org.apache.accumulo.core.util.Base64;
 import org.apache.commons.io.FileUtils;
 import org.apache.hadoop.io.Text;
 import org.junit.Rule;
@@ -40,7 +40,8 @@
 
   // String with 3 lines, with one empty line
   private static final String FILEDATA = "line1\n\nline2";
-  private static final String B64_FILEDATA = Base64.encodeBase64String("line1".getBytes(UTF_8)) + "\n\n" + Base64.encodeBase64String("line2".getBytes(UTF_8));
+  private static final String B64_FILEDATA = Base64.getEncoder().encodeToString("line1".getBytes(UTF_8)) + "\n\n"
+      + Base64.getEncoder().encodeToString("line2".getBytes(UTF_8));
 
   @Test
   public void testWithoutDecode() throws IOException {
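The hunk above swaps Accumulo's internal Base64 helper for the JDK's `java.util.Base64` (available since Java 8). For this usage the replacement is a drop-in; a minimal sketch mirroring the `B64_FILEDATA` constant (class and method names are illustrative):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class Base64Migration {
  // Old: org.apache.accumulo.core.util.Base64.encodeBase64String(bytes)
  // New: java.util.Base64.getEncoder().encodeToString(bytes)
  public static String encode(String s) {
    return Base64.getEncoder().encodeToString(s.getBytes(StandardCharsets.UTF_8));
  }

  public static String decode(String b64) {
    return new String(Base64.getDecoder().decode(b64), StandardCharsets.UTF_8);
  }

  public static void main(String[] args) {
    // As in ShellUtilTest: encode each non-empty line separately,
    // preserving the blank line between them.
    String b64 = encode("line1") + "\n\n" + encode("line2");
    System.out.println(b64);
  }
}
```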
diff --git a/start/pom.xml b/start/pom.xml
index bc6c4b6..98d6815 100644
--- a/start/pom.xml
+++ b/start/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.accumulo</groupId>
     <artifactId>accumulo-project</artifactId>
-    <version>1.8.0-SNAPSHOT</version>
+    <version>2.0.0-SNAPSHOT</version>
   </parent>
   <artifactId>accumulo-start</artifactId>
   <name>Apache Accumulo Start</name>
@@ -96,6 +96,7 @@
     <pluginManagement>
       <plugins>
         <plugin>
+          <groupId>org.apache.maven.plugins</groupId>
           <artifactId>maven-surefire-plugin</artifactId>
           <configuration>
             <forkCount>1</forkCount>
diff --git a/start/src/main/java/org/apache/accumulo/start/classloader/vfs/ContextManager.java b/start/src/main/java/org/apache/accumulo/start/classloader/vfs/ContextManager.java
index 7145b4a..ffb7dc1 100644
--- a/start/src/main/java/org/apache/accumulo/start/classloader/vfs/ContextManager.java
+++ b/start/src/main/java/org/apache/accumulo/start/classloader/vfs/ContextManager.java
@@ -25,9 +25,13 @@
 
 import org.apache.commons.vfs2.FileSystemException;
 import org.apache.commons.vfs2.FileSystemManager;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 public class ContextManager {
 
+  private static final Logger log = LoggerFactory.getLogger(ContextManager.class);
+
   // there is a lock per context so that one context can initialize w/o blocking another context
   private class Context {
     AccumuloReloadingVFSClassLoader loader;
@@ -202,9 +206,10 @@
       contexts.keySet().removeAll(unused.keySet());
     }
 
-    for (Context context : unused.values()) {
+    for (Entry<String,Context> e : unused.entrySet()) {
       // close outside of lock
-      context.close();
+      log.info("Closing unused context: {}", e.getKey());
+      e.getValue().close();
     }
   }
 }
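The ContextManager change above keeps the existing close-outside-the-lock pattern but iterates the entry set so the context name is available for the new log line. The pattern in miniature (all names are illustrative; a trivial `Context` stands in for the reloading VFS class loader, and `System.out` for the slf4j logger):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

public class ContextSweep {
  public static class Context {
    boolean closed = false;
    void close() { closed = true; }
  }

  private final Map<String, Context> contexts = new HashMap<>();
  private final Object lock = new Object();

  public void put(String name) {
    synchronized (lock) { contexts.put(name, new Context()); }
  }

  // Remove contexts not in the keep set while holding the lock, but defer
  // the (potentially slow) close() calls until after the lock is released
  // so one context's teardown cannot block other callers.
  public Map<String, Context> sweep(Set<String> keep) {
    Map<String, Context> unused;
    synchronized (lock) {
      unused = new HashMap<>(contexts);
      unused.keySet().removeAll(keep);
      contexts.keySet().removeAll(unused.keySet());
    }
    for (Map.Entry<String, Context> e : unused.entrySet()) {
      // close outside of lock; iterating entries keeps the name for logging
      System.out.println("Closing unused context: " + e.getKey());
      e.getValue().close();
    }
    return unused;
  }
}
```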
diff --git a/test/pom.xml b/test/pom.xml
index 92aeeaf..c88df40 100644
--- a/test/pom.xml
+++ b/test/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.accumulo</groupId>
     <artifactId>accumulo-project</artifactId>
-    <version>1.8.0-SNAPSHOT</version>
+    <version>2.0.0-SNAPSHOT</version>
   </parent>
   <artifactId>accumulo-test</artifactId>
   <name>Apache Accumulo Testing</name>
@@ -125,10 +125,6 @@
     </dependency>
     <dependency>
       <groupId>org.apache.accumulo</groupId>
-      <artifactId>accumulo-trace</artifactId>
-    </dependency>
-    <dependency>
-      <groupId>org.apache.accumulo</groupId>
       <artifactId>accumulo-tracer</artifactId>
     </dependency>
     <dependency>
diff --git a/test/src/main/java/org/apache/accumulo/test/BulkImportDirectory.java b/test/src/main/java/org/apache/accumulo/test/BulkImportDirectory.java
index 4cbba8e..1022f6a 100644
--- a/test/src/main/java/org/apache/accumulo/test/BulkImportDirectory.java
+++ b/test/src/main/java/org/apache/accumulo/test/BulkImportDirectory.java
@@ -16,8 +16,6 @@
  */
 package org.apache.accumulo.test;
 
-import static java.nio.charset.StandardCharsets.UTF_8;
-
 import java.io.IOException;
 import java.util.ArrayList;
 import java.util.List;
@@ -25,10 +23,8 @@
 import org.apache.accumulo.core.client.AccumuloException;
 import org.apache.accumulo.core.client.AccumuloSecurityException;
 import org.apache.accumulo.core.client.TableNotFoundException;
-import org.apache.accumulo.core.client.security.tokens.PasswordToken;
 import org.apache.accumulo.core.util.CachedConfiguration;
 import org.apache.accumulo.server.cli.ClientOnRequiredTable;
-import org.apache.accumulo.server.client.HdfsZooInstance;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 
@@ -47,22 +43,9 @@
   public static void main(String[] args) throws IOException, AccumuloException, AccumuloSecurityException, TableNotFoundException {
     final FileSystem fs = FileSystem.get(CachedConfiguration.getInstance());
     Opts opts = new Opts();
-    if (args.length == 5) {
-      System.err.println("Deprecated syntax for BulkImportDirectory, please use the new style (see --help)");
-      final String user = args[0];
-      final byte[] pass = args[1].getBytes(UTF_8);
-      final String tableName = args[2];
-      final String dir = args[3];
-      final String failureDir = args[4];
-      final Path failureDirPath = new Path(failureDir);
-      fs.delete(failureDirPath, true);
-      fs.mkdirs(failureDirPath);
-      HdfsZooInstance.getInstance().getConnector(user, new PasswordToken(pass)).tableOperations().importDirectory(tableName, dir, failureDir, false);
-    } else {
-      opts.parseArgs(BulkImportDirectory.class.getName(), args);
-      fs.delete(new Path(opts.failures), true);
-      fs.mkdirs(new Path(opts.failures));
-      opts.getConnector().tableOperations().importDirectory(opts.getTableName(), opts.source, opts.failures, false);
-    }
+    opts.parseArgs(BulkImportDirectory.class.getName(), args);
+    fs.delete(new Path(opts.failures), true);
+    fs.mkdirs(new Path(opts.failures));
+    opts.getConnector().tableOperations().importDirectory(opts.getTableName(), opts.source, opts.failures, false);
   }
 }
diff --git a/test/src/main/java/org/apache/accumulo/test/CreateTableWithNewTableConfigIT.java b/test/src/main/java/org/apache/accumulo/test/CreateTableWithNewTableConfigIT.java
index 7fd2dd1..eefa604 100644
--- a/test/src/main/java/org/apache/accumulo/test/CreateTableWithNewTableConfigIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/CreateTableWithNewTableConfigIT.java
@@ -102,7 +102,6 @@
     return false;
   }
 
-  @SuppressWarnings("deprecation")
   @Test
   public void tableNameOnly() throws Exception {
     log.info("Starting tableNameOnly");
@@ -113,7 +112,7 @@
     connector.tableOperations().create(tableName, new NewTableConfiguration());
 
     String tableNameOrig = "original";
-    connector.tableOperations().create(tableNameOrig, true);
+    connector.tableOperations().create(tableNameOrig);
 
     int countNew = numProperties(connector, tableName);
     int countOrig = compareProperties(connector, tableNameOrig, tableName, null);
@@ -122,50 +121,6 @@
     Assert.assertTrue("Wrong TimeType", checkTimeType(connector, tableName, TimeType.MILLIS));
   }
 
-  @SuppressWarnings("deprecation")
-  @Test
-  public void tableNameAndLimitVersion() throws Exception {
-    log.info("Starting tableNameAndLimitVersion");
-
-    // Create a table with the initial properties
-    Connector connector = getConnector();
-    String tableName = getUniqueNames(2)[0];
-    boolean limitVersion = false;
-    connector.tableOperations().create(tableName, new NewTableConfiguration().withoutDefaultIterators());
-
-    String tableNameOrig = "originalWithLimitVersion";
-    connector.tableOperations().create(tableNameOrig, limitVersion);
-
-    int countNew = numProperties(connector, tableName);
-    int countOrig = compareProperties(connector, tableNameOrig, tableName, null);
-
-    Assert.assertEquals("Extra properties using the new create method", countOrig, countNew);
-    Assert.assertTrue("Wrong TimeType", checkTimeType(connector, tableName, TimeType.MILLIS));
-  }
-
-  @SuppressWarnings("deprecation")
-  @Test
-  public void tableNameLimitVersionAndTimeType() throws Exception {
-    log.info("Starting tableNameLimitVersionAndTimeType");
-
-    // Create a table with the initial properties
-    Connector connector = getConnector();
-    String tableName = getUniqueNames(2)[0];
-    boolean limitVersion = false;
-    TimeType tt = TimeType.LOGICAL;
-    connector.tableOperations().create(tableName, new NewTableConfiguration().withoutDefaultIterators().setTimeType(tt));
-
-    String tableNameOrig = "originalWithLimitVersionAndTimeType";
-    connector.tableOperations().create(tableNameOrig, limitVersion, tt);
-
-    int countNew = numProperties(connector, tableName);
-    int countOrig = compareProperties(connector, tableNameOrig, tableName, null);
-
-    Assert.assertEquals("Extra properties using the new create method", countOrig, countNew);
-    Assert.assertTrue("Wrong TimeType", checkTimeType(connector, tableName, tt));
-  }
-
-  @SuppressWarnings("deprecation")
   @Test
   public void addCustomPropAndChangeExisting() throws Exception {
     log.info("Starting addCustomPropAndChangeExisting");
@@ -186,7 +141,7 @@
     connector.tableOperations().create(tableName, new NewTableConfiguration().setProperties(properties));
 
     String tableNameOrig = "originalWithTableName";
-    connector.tableOperations().create(tableNameOrig, true);
+    connector.tableOperations().create(tableNameOrig);
 
     int countNew = numProperties(connector, tableName);
     int countOrig = compareProperties(connector, tableNameOrig, tableName, propertyName);
diff --git a/test/src/main/java/org/apache/accumulo/test/ShellServerIT.java b/test/src/main/java/org/apache/accumulo/test/ShellServerIT.java
index 61d3d4a..765ddf6 100644
--- a/test/src/main/java/org/apache/accumulo/test/ShellServerIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/ShellServerIT.java
@@ -40,6 +40,7 @@
 import java.util.Map.Entry;
 import java.util.Random;
 import java.util.concurrent.TimeUnit;
+import java.util.stream.Stream;
 
 import org.apache.accumulo.core.Constants;
 import org.apache.accumulo.core.client.ClientConfiguration;
@@ -152,7 +153,7 @@
     public StringInputStream input;
     public Shell shell;
 
-    TestShell(String user, String rootPass, String instanceName, String zookeepers, File configFile) throws IOException {
+    TestShell(String user, String rootPass, String instanceName, String zookeepers, File configFile, String... extraArgs) throws IOException {
       ClientConfiguration clientConf;
       try {
         clientConf = new ClientConfiguration(configFile);
@@ -164,12 +165,14 @@
       input = new StringInputStream();
       shell = new Shell(new ConsoleReader(input, output));
       shell.setLogErrorsToConsole();
+      String[] shellArgs = null;
       if (clientConf.getBoolean(ClientProperty.INSTANCE_RPC_SASL_ENABLED.getKey(), false)) {
         // Pull the kerberos principal out when we're using SASL
-        shell.config("-u", user, "-z", instanceName, zookeepers, "--config-file", configFile.getAbsolutePath());
+        shellArgs = new String[] {"-u", user, "-z", instanceName, zookeepers, "--config-file", configFile.getAbsolutePath()};
       } else {
-        shell.config("-u", user, "-p", rootPass, "-z", instanceName, zookeepers, "--config-file", configFile.getAbsolutePath());
+        shellArgs = new String[] {"-u", user, "-p", rootPass, "-z", instanceName, zookeepers, "--config-file", configFile.getAbsolutePath()};
       }
+      shell.config(Stream.concat(Arrays.stream(shellArgs), Arrays.stream(extraArgs)).toArray(String[]::new));
       exec("quit", true);
       shell.start();
       shell.setExit(false);
@@ -230,6 +233,10 @@
         assertEquals(s + " present in " + output.get() + " was not " + stringPresent, stringPresent, output.get().contains(s));
     }
 
+    void assertBadExit(String s, boolean stringPresent) {
+      assertBadExit(s, stringPresent, noop);
+    }
+
     void assertBadExit(String s, boolean stringPresent, ErrorMessageCallback callback) {
       shellLog.debug(output.get());
       if (0 == shell.getExitCode()) {
@@ -241,6 +248,19 @@
         assertEquals(s + " present in " + output.get() + " was not " + stringPresent, stringPresent, output.get().contains(s));
       shell.resetExitCode();
     }
+
+    void execExpectList(String cmd, boolean expectGoodExit, List<String> expectedStrings) throws IOException {
+      exec(cmd);
+      if (expectGoodExit) {
+        assertGoodExit("", true);
+      } else {
+        assertBadExit("", true);
+      }
+
+      for (String expectedString : expectedStrings) {
+        assertTrue(expectedString + " was not present in " + output.get(), output.get().contains(expectedString));
+      }
+    }
   }
 
   private static final NoOpErrorMessageCallback noop = new NoOpErrorMessageCallback();
@@ -394,24 +414,6 @@
   }
 
   @Test
-  public void setscaniterDeletescaniter() throws Exception {
-    final String table = name.getMethodName();
-
-    // setscaniter, deletescaniter
-    ts.exec("createtable " + table);
-    ts.exec("insert a cf cq 1");
-    ts.exec("insert a cf cq 1");
-    ts.exec("insert a cf cq 1");
-    ts.input.set("true\n\n\n\nSTRING");
-    ts.exec("setscaniter -class org.apache.accumulo.core.iterators.user.SummingCombiner -p 10 -n name", true);
-    ts.exec("scan", true, "3", true);
-    ts.exec("deletescaniter -n name", true);
-    ts.exec("scan", true, "1", true);
-    ts.exec("deletetable -f " + table);
-
-  }
-
-  @Test
   public void execfile() throws Exception {
     // execfile
     File file = File.createTempFile("ShellServerIT.execfile", ".conf", new File(rootPath));
@@ -1315,7 +1317,7 @@
   public void help() throws Exception {
     ts.exec("help -np", true, "Help Commands", true);
     ts.exec("?", true, "Help Commands", true);
-    for (String c : ("bye exit quit " + "about help info ? " + "deleteiter deletescaniter listiter setiter setscaniter "
+    for (String c : ("bye exit quit " + "about help info ? " + "deleteiter listiter setiter "
         + "grant revoke systempermissions tablepermissions userpermissions " + "execfile history " + "authenticate cls clear notable sleep table user whoami "
         + "clonetable config createtable deletetable droptable du exporttable importtable offline online renametable tables "
         + "addsplits compact constraint flush getgropus getsplits merge setgroups " + "addauths createuser deleteuser dropuser getauths passwd setauths users "
@@ -1745,54 +1747,61 @@
       assertTrue(true);
     }
     ts.exec("createtable t");
+    // Assert that the TabletServer does not know anything about our class
+    String result = ts.exec("setiter -scan -n reverse -t t -p 21 -class org.apache.accumulo.test.functional.ValueReversingIterator");
+    assertTrue(result.contains("class not found"));
     make10();
     setupFakeContextPath();
-    // Add the context to the table so that setscaniter works. After setscaniter succeeds, then
-    // remove the property from the table.
-    ts.exec("config -s " + Property.VFS_CONTEXT_CLASSPATH_PROPERTY + FAKE_CONTEXT + "=" + FAKE_CONTEXT_CLASSPATH);
-    ts.exec("config -t t -s table.classpath.context=" + FAKE_CONTEXT);
-    ts.exec("setscaniter -n reverse -t t -p 21 -class org.apache.accumulo.test.functional.ValueReversingIterator");
-    String result = ts.exec("scan -np -b row1 -e row1");
+    // Add the context to the table so that setiter works.
+    result = ts.exec("config -s " + Property.VFS_CONTEXT_CLASSPATH_PROPERTY + FAKE_CONTEXT + "=" + FAKE_CONTEXT_CLASSPATH);
+    assertEquals("root@miniInstance t> config -s " + Property.VFS_CONTEXT_CLASSPATH_PROPERTY + FAKE_CONTEXT + "=" + FAKE_CONTEXT_CLASSPATH + "\n", result);
+    result = ts.exec("config -t t -s table.classpath.context=" + FAKE_CONTEXT);
+    assertEquals("root@miniInstance t> config -t t -s table.classpath.context=" + FAKE_CONTEXT + "\n", result);
+    result = ts.exec("setshelliter -pn baz -n reverse -t t -p 21 -class org.apache.accumulo.test.functional.ValueReversingIterator");
+    assertTrue(result.contains("The iterator class does not implement OptionDescriber"));
+    // The implementation of ValueReversingIterator in the FAKE context does nothing, the value is not reversed.
+    result = ts.exec("scan -pn baz -np -b row1 -e row1");
     assertEquals(2, result.split("\n").length);
-    log.error(result);
     assertTrue(result.contains("value"));
-    result = ts.exec("scan -np -b row3 -e row5");
+    result = ts.exec("scan -pn baz -np -b row3 -e row5");
     assertEquals(4, result.split("\n").length);
     assertTrue(result.contains("value"));
-    result = ts.exec("scan -np -r row3");
+    result = ts.exec("scan -pn baz -np -r row3");
     assertEquals(2, result.split("\n").length);
     assertTrue(result.contains("value"));
-    result = ts.exec("scan -np -b row:");
+    result = ts.exec("scan -pn baz -np -b row:");
     assertEquals(1, result.split("\n").length);
-    result = ts.exec("scan -np -b row");
+    result = ts.exec("scan -pn baz -np -b row");
     assertEquals(11, result.split("\n").length);
     assertTrue(result.contains("value"));
-    result = ts.exec("scan -np -e row:");
+    result = ts.exec("scan -pn baz -np -e row:");
     assertEquals(11, result.split("\n").length);
     assertTrue(result.contains("value"));
 
     setupRealContextPath();
-    ts.exec("config -s " + Property.VFS_CONTEXT_CLASSPATH_PROPERTY + REAL_CONTEXT + "=" + REAL_CONTEXT_CLASSPATH);
-    result = ts.exec("scan -np -b row1 -e row1 -cc " + REAL_CONTEXT);
-    log.error(result);
+    // Define a new classloader context, but don't set it on the table
+    result = ts.exec("config -s " + Property.VFS_CONTEXT_CLASSPATH_PROPERTY + REAL_CONTEXT + "=" + REAL_CONTEXT_CLASSPATH);
+    assertEquals("root@miniInstance t> config -s " + Property.VFS_CONTEXT_CLASSPATH_PROPERTY + REAL_CONTEXT + "=" + REAL_CONTEXT_CLASSPATH + "\n", result);
+    // Override the table classloader context with the REAL implementation of ValueReversingIterator, which does reverse the value.
+    result = ts.exec("scan -pn baz -np -b row1 -e row1 -cc " + REAL_CONTEXT);
     assertEquals(2, result.split("\n").length);
     assertTrue(result.contains("eulav"));
     assertFalse(result.contains("value"));
-    result = ts.exec("scan -np -b row3 -e row5 -cc " + REAL_CONTEXT);
+    result = ts.exec("scan -pn baz -np -b row3 -e row5 -cc " + REAL_CONTEXT);
     assertEquals(4, result.split("\n").length);
     assertTrue(result.contains("eulav"));
     assertFalse(result.contains("value"));
-    result = ts.exec("scan -np -r row3 -cc " + REAL_CONTEXT);
+    result = ts.exec("scan -pn baz -np -r row3 -cc " + REAL_CONTEXT);
     assertEquals(2, result.split("\n").length);
     assertTrue(result.contains("eulav"));
     assertFalse(result.contains("value"));
-    result = ts.exec("scan -np -b row: -cc " + REAL_CONTEXT);
+    result = ts.exec("scan -pn baz -np -b row: -cc " + REAL_CONTEXT);
     assertEquals(1, result.split("\n").length);
-    result = ts.exec("scan -np -b row -cc " + REAL_CONTEXT);
+    result = ts.exec("scan -pn baz -np -b row -cc " + REAL_CONTEXT);
     assertEquals(11, result.split("\n").length);
     assertTrue(result.contains("eulav"));
     assertFalse(result.contains("value"));
-    result = ts.exec("scan -np -e row: -cc " + REAL_CONTEXT);
+    result = ts.exec("scan -pn baz -np -e row: -cc " + REAL_CONTEXT);
     assertEquals(11, result.split("\n").length);
     assertTrue(result.contains("eulav"));
     assertFalse(result.contains("value"));
@@ -1847,6 +1856,204 @@
     }
   }
 
+  @Test
+  public void aboutTest() throws IOException {
+    ts.exec("about", true, "Shell - Apache Accumulo Interactive Shell");
+    ts.exec("about -v", true, "Current user:");
+    ts.exec("about arg", false, "java.lang.IllegalArgumentException: Expected 0 arguments");
+  }
+
+  @Test
+  public void addGetSplitsTest() throws IOException {
+    ts.exec("addsplits arg", false, "java.lang.IllegalStateException: Not in a table context");
+    ts.exec("createtable test", true);
+    ts.exec("addsplits 1 \\x80", true);
+    ts.exec("getsplits", true, "1\n\\x80");
+    ts.exec("getsplits -m 1", true, "1");
+    ts.exec("getsplits -b64", true, "MQ==\ngA==");
+    ts.exec("deletetable test -f", true, "Table: [test] has been deleted");
+  }
+
+  @Test
+  public void insertDeleteScanTest() throws IOException {
+    ts.exec("insert r f q v", false, "java.lang.IllegalStateException: Not in a table context");
+    ts.exec("delete r f q", false, "java.lang.IllegalStateException: Not in a table context");
+    ts.exec("createtable test", true);
+    ts.exec("insert r f q v", true);
+    ts.exec("scan", true, "r f:q []    v");
+    ts.exec("delete r f q", true);
+    ts.exec("scan", true, "r f:q []    v", false);
+    ts.exec("insert \\x90 \\xa0 \\xb0 \\xc0\\xd0\\xe0\\xf0", true);
+    ts.exec("scan", true, "\\x90 \\xA0:\\xB0 []    \\xC0\\xD0");
+    ts.exec("scan -f 2", true, "\\x90 \\xA0:\\xB0 []    \\xC0\\xD0");
+    ts.exec("scan -f 2", true, "\\x90 \\xA0:\\xB0 []    \\xC0\\xD0\\xE0", false);
+    ts.exec("scan -b \\x90 -e \\x90 -c \\xA0", true, "\\x90 \\xA0:\\xB0 []    \\xC0");
+    ts.exec("scan -b \\x90 -e \\x90 -c \\xA0:\\xB0", true, "\\x90 \\xA0:\\xB0 []    \\xC0");
+    ts.exec("scan -b \\x90 -be", true, "\\x90 \\xA0:\\xB0 []    \\xC0", false);
+    ts.exec("scan -e \\x90 -ee", true, "\\x90 \\xA0:\\xB0 []    \\xC0", false);
+    ts.exec("scan -b \\x90\\x00", true, "\\x90 \\xA0:\\xB0 []    \\xC0", false);
+    ts.exec("scan -e \\x8f", true, "\\x90 \\xA0:\\xB0 []    \\xC0", false);
+    ts.exec("delete \\x90 \\xa0 \\xb0", true);
+    ts.exec("scan", true, "\\x90 \\xA0:\\xB0 []    \\xC0", false);
+    ts.exec("deletetable test -f", true, "Table: [test] has been deleted");
+  }
+
+  @Test
+  public void deleteManyTest() throws IOException {
+    ts.exec("deletemany", false, "java.lang.IllegalStateException: Not in a table context");
+    ts.exec("createtable test", true);
+    ts.exec("deletemany", true, "\n");
+
+    ts.exec("insert 0 0 0 0 -ts 0");
+    ts.exec("insert 0 0 0 0 -l 0 -ts 0");
+    ts.exec("insert 1 1 1 1 -ts 1");
+    ts.exec("insert 2 2 2 2 -ts 2");
+
+    // prompts for delete, and rejects by default
+    ts.exec("deletemany", true, "[SKIPPED] 0 0:0 []");
+    ts.exec("deletemany -r 0", true, "[SKIPPED] 0 0:0 []");
+    ts.exec("deletemany -r 0 -f", true, "[DELETED] 0 0:0 []");
+
+    // with auths, can delete the other record
+    ts.exec("setauths -s 0");
+    ts.exec("deletemany -r 0 -f", true, "[DELETED] 0 0:0 [0]");
+
+    // delete will show the timestamp
+    ts.exec("deletemany -r 1 -f -st", true, "[DELETED] 1 1:1 [] 1");
+
+    // DeleteManyCommand has its own Formatter (DeleterFormatter), so it does not honor the -fm flag
+    ts.exec("deletemany -r 2 -f -st -fm org.apache.accumulo.core.util.format.DateStringFormatter", true, "[DELETED] 2 2:2 [] 2");
+
+    ts.exec("setauths -c ", true);
+    ts.exec("deletetable test -f", true, "Table: [test] has been deleted");
+  }
+
+  @Test
+  public void authsTest() throws Exception {
+    ts.exec("setauths x,y,z", false, "Missing required option");
+    ts.exec("setauths -s x,y,z -u notauser", false, "user does not exist");
+    ts.exec("setauths -s y,z,x", true);
+    ts.exec("getauths -u notauser", false, "user does not exist");
+    ts.execExpectList("getauths", true, Arrays.asList("x", "y", "z"));
+    ts.exec("addauths -u notauser", false, "Missing required option");
+    ts.exec("addauths -u notauser -s foo", false, "user does not exist");
+    ts.exec("addauths -s a", true);
+    ts.execExpectList("getauths", true, Arrays.asList("x", "y", "z", "a"));
+    ts.exec("setauths -c", true);
+  }
+
+  @Test
+  public void duContextTest() throws Exception {
+    ts.exec("createtable t", true);
+    ts.exec("du", true, "0 [t]");
+    ts.exec("deletetable t -f", true, "Table: [t] has been deleted");
+  }
+
+  @Test
+  public void duTest() throws IOException {
+    ts.exec("createtable t", true);
+    ts.exec("du t", true, "0 [t]");
+    ts.exec("deletetable t -f", true, "Table: [t] has been deleted");
+  }
+
+  @Test
+  public void duPatternTest() throws IOException {
+    ts.exec("createnamespace n", true);
+    ts.exec("createtable n.t", true);
+    ts.exec("createtable n.tt", true);
+    ts.exec("du -p n[.]t.*", true, "0 [n.t, n.tt]");
+    ts.exec("deletetable n.t -f", true, "Table: [n.t] has been deleted");
+    ts.exec("deletetable n.tt -f", true, "Table: [n.tt] has been deleted");
+    ts.exec("deletenamespace -f n", true);
+  }
+
+  @Test
+  public void scanTimestampTest() throws IOException {
+    ts.exec("createtable test", true);
+    ts.exec("insert r f q v -ts 0", true);
+    ts.exec("scan -st", true, "r f:q [] 0    v");
+    ts.exec("scan -st -f 0", true, " : [] 0   ");
+    ts.exec("deletemany -f", true);
+    ts.exec("deletetable test -f", true, "Table: [test] has been deleted");
+  }
+
+  @Test
+  public void scanFewTest() throws IOException {
+    ts.exec("createtable test", true);
+    // historically, showing few did not pertain to ColVis or Timestamp
+    ts.exec("insert 1 123 123456 -l '12345678' -ts 123456789 1234567890", true);
+    ts.exec("setauths -s 12345678", true);
+    String expected = "1 123:123456 [12345678] 123456789    1234567890";
+    String expectedFew = "1 123:12345 [12345678] 123456789    12345";
+    ts.exec("scan -st", true, expected);
+    ts.exec("scan -st -f 5", true, expectedFew);
+    // also prove that BinaryFormatter behaves same as the default
+    ts.exec("scan -st -fm org.apache.accumulo.core.util.format.BinaryFormatter", true, expected);
+    ts.exec("scan -st -f 5 -fm org.apache.accumulo.core.util.format.BinaryFormatter", true, expectedFew);
+    ts.exec("setauths -c", true);
+    ts.exec("deletetable test -f", true, "Table: [test] has been deleted");
+  }
+
+  @Test
+  public void grepTest() throws IOException {
+    ts.exec("grep", false, "java.lang.IllegalStateException: Not in a table context");
+    ts.exec("createtable t", true);
+    ts.exec("setauths -s vis", true);
+    ts.exec("insert r f q v -ts 0 -l vis", true);
+
+    String expected = "r f:q [vis]    v";
+    String expectedTimestamp = "r f:q [vis] 0    v";
+    ts.exec("grep", false, "No terms specified");
+    ts.exec("grep non_matching_string", true, "");
+    // historically, showing few did not pertain to ColVis or Timestamp
+    ts.exec("grep r", true, expected);
+    ts.exec("grep r -f 1", true, expected);
+    ts.exec("grep r -st", true, expectedTimestamp);
+    ts.exec("grep r -st -f 1", true, expectedTimestamp);
+    ts.exec("setauths -c", true);
+    ts.exec("deletetable t -f", true, "Table: [t] has been deleted");
+  }
+
+  @Test
+  public void commentTest() throws IOException {
+    ts.exec("#", true, "Unknown command", false);
+    ts.exec("# foo", true, "Unknown command", false);
+    ts.exec("- foo", true, "Unknown command", true);
+  }
+
+  @Test
+  public void execFileTest() throws IOException {
+    TestShell configTestShell = new TestShell(getPrincipal(), getRootPassword(), getCluster().getConfig().getInstanceName(), getCluster().getConfig()
+        .getZooKeepers(), getCluster().getConfig().getClientConfFile(), "-f", "src/test/resources/shelltest.txt");
+    configTestShell.assertGoodExit("Unknown command", false);
+  }
+
+  @Test
+  public void setIterTest() throws IOException {
+    ts.exec("createtable t", true);
+
+    String cmdJustClass = "setiter -class VersioningIterator -p 1";
+    ts.exec(cmdJustClass, false, "java.lang.IllegalArgumentException", false);
+    ts.exec(cmdJustClass, false, "fully qualified package name", true);
+
+    String cmdFullPackage = "setiter -class o.a.a.foo -p 1";
+    ts.exec(cmdFullPackage, false, "java.lang.IllegalArgumentException", false);
+    ts.exec(cmdFullPackage, false, "class not found", true);
+
+    String cmdNoOption = "setiter -class java.lang.String -p 1";
+    ts.exec(cmdNoOption, false, "loaded successfully but does not implement SortedKeyValueIterator", true);
+
+    ts.input.set("\n\n");
+    ts.exec("setiter -scan -class org.apache.accumulo.core.iterators.ColumnFamilyCounter -p 30 -name foo", true);
+
+    ts.input.set("bar\nname value\n");
+    ts.exec("setiter -scan -class org.apache.accumulo.core.iterators.ColumnFamilyCounter -p 31", true);
+
+    // TODO can't verify this as config -t fails, functionality verified in ShellServerIT
+
+    ts.exec("deletetable t -f", true, "Table: [t] has been deleted");
+  }
+
   private void make10() throws IOException {
     for (int i = 0; i < 10; i++) {
       ts.exec(String.format("insert row%d cf col%d value", i, i));
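The `TestShell` change above merges the fixed shell arguments with the new `extraArgs` varargs using `Stream.concat` before a single `shell.config(...)` call. A minimal standalone sketch of that idiom (the `merge` helper and its argument values are illustrative, not part of the patch):

```java
import java.util.Arrays;
import java.util.stream.Stream;

public class ArgMerge {
  // Concatenate a fixed argument array with optional trailing varargs,
  // mirroring how TestShell builds the final shell.config() arguments.
  static String[] merge(String[] base, String... extra) {
    return Stream.concat(Arrays.stream(base), Arrays.stream(extra)).toArray(String[]::new);
  }

  public static void main(String[] args) {
    String[] merged = merge(new String[] {"-u", "root"}, "-f", "script.txt");
    System.out.println(Arrays.toString(merged)); // [-u, root, -f, script.txt]
  }
}
```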
diff --git a/test/src/main/java/org/apache/accumulo/test/VolumeIT.java b/test/src/main/java/org/apache/accumulo/test/VolumeIT.java
index f9a6a326..553cb14 100644
--- a/test/src/main/java/org/apache/accumulo/test/VolumeIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/VolumeIT.java
@@ -82,6 +82,11 @@
 
 public class VolumeIT extends ConfigurableMacBase {
 
+  @SuppressWarnings("deprecation")
+  private static final Property INSTANCE_DFS_DIR = Property.INSTANCE_DFS_DIR;
+  @SuppressWarnings("deprecation")
+  private static final Property INSTANCE_DFS_URI = Property.INSTANCE_DFS_URI;
+
   private static final Text EMPTY = new Text();
   private static final Value EMPTY_VALUE = new Value(new byte[] {});
   private File volDirBase;
@@ -92,7 +97,6 @@
     return 5 * 60;
   }
 
-  @SuppressWarnings("deprecation")
   @Override
   public void configure(MiniAccumuloConfigImpl cfg, Configuration hadoopCoreSite) {
     File baseDir = cfg.getDir();
@@ -104,8 +108,8 @@
 
     // Run MAC on two locations in the local file system
     URI v1Uri = v1.toUri();
-    cfg.setProperty(Property.INSTANCE_DFS_DIR, v1Uri.getPath());
-    cfg.setProperty(Property.INSTANCE_DFS_URI, v1Uri.getScheme() + v1Uri.getHost());
+    cfg.setProperty(INSTANCE_DFS_DIR, v1Uri.getPath());
+    cfg.setProperty(INSTANCE_DFS_URI, v1Uri.getScheme() + v1Uri.getHost());
     cfg.setProperty(Property.INSTANCE_VOLUMES, v1.toString() + "," + v2.toString());
     cfg.setProperty(Property.INSTANCE_ZK_TIMEOUT, "15s");
 
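The `VolumeIT` change narrows deprecation suppression: instead of annotating the whole `configure` method, the two deprecated `Property` references are hoisted into `@SuppressWarnings("deprecation")` constants, so only those field initializers carry the suppression. A self-contained sketch of the pattern, using a made-up `Legacy.oldName()` in place of the deprecated Accumulo `Property` fields:

```java
public class SuppressScope {
  static class Legacy {
    @Deprecated
    static String oldName() { return "instance.dfs.dir"; }
  }

  // The single deprecated reference lives behind one annotated constant;
  // the rest of the class needs no @SuppressWarnings at all.
  @SuppressWarnings("deprecation")
  static final String INSTANCE_DFS_DIR = Legacy.oldName();

  public static void main(String[] args) {
    System.out.println(INSTANCE_DFS_DIR);
  }
}
```

This keeps the suppression's scope as small as possible, so a new deprecated usage elsewhere in the class still produces a compiler warning.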
diff --git a/test/src/main/java/org/apache/accumulo/test/functional/MapReduceIT.java b/test/src/main/java/org/apache/accumulo/test/functional/MapReduceIT.java
index 8c4666c..3797e5b 100644
--- a/test/src/main/java/org/apache/accumulo/test/functional/MapReduceIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/MapReduceIT.java
@@ -21,6 +21,7 @@
 import java.io.IOException;
 import java.security.MessageDigest;
 import java.security.NoSuchAlgorithmException;
+import java.util.Base64;
 import java.util.Collections;
 import java.util.Map.Entry;
 
@@ -37,7 +38,6 @@
 import org.apache.accumulo.core.data.Mutation;
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.core.util.Base64;
 import org.apache.accumulo.examples.simple.mapreduce.RowHash;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloClusterImpl;
 import org.apache.hadoop.io.Text;
@@ -83,7 +83,7 @@
     int i = 0;
     for (Entry<Key,Value> entry : s) {
       MessageDigest md = MessageDigest.getInstance("MD5");
-      byte[] check = Base64.encodeBase64(md.digest(("row" + i).getBytes()));
+      byte[] check = Base64.getEncoder().encode(md.digest(("row" + i).getBytes()));
       assertEquals(entry.getValue().toString(), new String(check));
       i++;
     }
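The `MapReduceIT` hunk above (and the `BulkInsert` one below) replaces Accumulo's internal `org.apache.accumulo.core.util.Base64` with the JDK 8 `java.util.Base64` API, which exposes encoding through an encoder object rather than static methods. A minimal sketch of the substitution:

```java
import java.util.Base64;

public class Base64Demo {
  public static void main(String[] args) {
    byte[] raw = "row0".getBytes();
    // Old: org.apache.accumulo.core.util.Base64.encodeBase64(raw)
    byte[] encoded = Base64.getEncoder().encode(raw);
    // Old: Base64.encodeBase64String(raw)
    String asString = Base64.getEncoder().encodeToString(raw);
    System.out.println(asString); // cm93MA==
    // The decoder round-trips the bytes back.
    System.out.println(new String(Base64.getDecoder().decode(encoded))); // row0
  }
}
```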
diff --git a/test/src/main/java/org/apache/accumulo/test/functional/TabletStateChangeIteratorIT.java b/test/src/main/java/org/apache/accumulo/test/functional/TabletStateChangeIteratorIT.java
index 0cc0b94..bc21123 100644
--- a/test/src/main/java/org/apache/accumulo/test/functional/TabletStateChangeIteratorIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/TabletStateChangeIteratorIT.java
@@ -62,7 +62,6 @@
 import org.junit.BeforeClass;
 import org.junit.Test;
 
-import com.google.common.base.Predicate;
 import com.google.common.collect.Sets;
 
 /**
@@ -227,12 +226,7 @@
     @Override
     public Set<String> onlineTables() {
       HashSet<String> onlineTables = new HashSet<>(getConnector().tableOperations().tableIdMap().values());
-      return Sets.filter(onlineTables, new Predicate<String>() {
-        @Override
-        public boolean apply(String tableId) {
-          return Tables.getTableState(getConnector().getInstance(), tableId) == TableState.ONLINE;
-        }
-      });
+      return Sets.filter(onlineTables, tableId -> Tables.getTableState(getConnector().getInstance(), tableId) == TableState.ONLINE);
     }
 
     @Override
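The `TabletStateChangeIteratorIT` hunk above (and the `ReplicationIT` one below) collapses an anonymous Guava `Predicate`/`Function` class into a lambda with the same body. Since Guava is not needed to show the shape of the change, here is an equivalent standalone sketch using `java.util.function.Predicate` and streams (the length-based "online" check stands in for the real `TableState.ONLINE` test):

```java
import java.util.Set;
import java.util.function.Predicate;
import java.util.stream.Collectors;

public class LambdaFilter {
  public static void main(String[] args) {
    Set<String> tableIds = Set.of("1", "2", "10");
    // Before: new Predicate<String>() { @Override public boolean apply(String id) { ... } }
    // After: a lambda carrying the same single-expression body
    Predicate<String> online = id -> id.length() == 1;
    Set<String> filtered = tableIds.stream().filter(online).collect(Collectors.toSet());
    System.out.println(filtered); // contains 1 and 2, order unspecified
  }
}
```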
diff --git a/test/src/main/java/org/apache/accumulo/test/functional/ZombieTServer.java b/test/src/main/java/org/apache/accumulo/test/functional/ZombieTServer.java
index 6c20cda..14d594a 100644
--- a/test/src/main/java/org/apache/accumulo/test/functional/ZombieTServer.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/ZombieTServer.java
@@ -103,7 +103,7 @@
 
     TransactionWatcher watcher = new TransactionWatcher();
     final ThriftClientHandler tch = new ThriftClientHandler(context, watcher);
-    Processor<Iface> processor = new Processor<Iface>(tch);
+    Processor<Iface> processor = new Processor<>(tch);
     ServerAddress serverPort = TServerUtils.startTServer(context.getConfiguration(), ThriftServerType.CUSTOM_HS_HA, processor, "ZombieTServer", "walking dead",
         2, 1, 1000, 10 * 1024 * 1024, null, null, -1, HostAndPort.fromParts("0.0.0.0", port));
 
diff --git a/test/src/main/java/org/apache/accumulo/test/master/SuspendedTabletsIT.java b/test/src/main/java/org/apache/accumulo/test/master/SuspendedTabletsIT.java
index de0cf4b..bfd43da 100644
--- a/test/src/main/java/org/apache/accumulo/test/master/SuspendedTabletsIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/master/SuspendedTabletsIT.java
@@ -88,15 +88,6 @@
     cfg.setNumTservers(TSERVERS);
   }
 
-  private boolean isAlive(Process p) {
-    try {
-      p.exitValue();
-      return false;
-    } catch (IllegalThreadStateException e) {
-      return true;
-    }
-  }
-
   @Test
   public void crashAndResumeTserver() throws Exception {
     // Run the test body. When we get to the point where we need a tserver to go away, get rid of it via crashing
@@ -146,7 +137,7 @@
           List<ProcessReference> deadProcs = new ArrayList<>();
           for (ProcessReference pr : getCluster().getProcesses().get(ServerType.TABLET_SERVER)) {
             Process p = pr.getProcess();
-            if (!isAlive(p)) {
+            if (!p.isAlive()) {
               deadProcs.add(pr);
             }
           }
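The `SuspendedTabletsIT` hunk above deletes the hand-rolled `isAlive(Process)` helper, which probed `exitValue()` and caught `IllegalThreadStateException`, in favor of `Process.isAlive()` (available since Java 8). A sketch showing the two are equivalent, assuming a POSIX `sleep` binary is available on the host:

```java
public class AliveCheck {
  // The removed helper: liveness inferred from exitValue() throwing.
  static boolean isAliveLegacy(Process p) {
    try {
      p.exitValue();
      return false;
    } catch (IllegalThreadStateException e) {
      return true;
    }
  }

  public static void main(String[] args) throws Exception {
    Process p = new ProcessBuilder("sleep", "2").start();
    // Both checks agree while the process runs...
    System.out.println(p.isAlive() == isAliveLegacy(p)); // true
    p.destroy();
    p.waitFor();
    // ...and after it terminates.
    System.out.println(p.isAlive() == isAliveLegacy(p)); // true
  }
}
```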
diff --git a/test/src/main/java/org/apache/accumulo/test/performance/thrift/NullTserver.java b/test/src/main/java/org/apache/accumulo/test/performance/thrift/NullTserver.java
index 05a0c54..f392b16 100644
--- a/test/src/main/java/org/apache/accumulo/test/performance/thrift/NullTserver.java
+++ b/test/src/main/java/org/apache/accumulo/test/performance/thrift/NullTserver.java
@@ -258,7 +258,7 @@
 
     TransactionWatcher watcher = new TransactionWatcher();
     ThriftClientHandler tch = new ThriftClientHandler(new AccumuloServerContext(new ServerConfigurationFactory(HdfsZooInstance.getInstance())), watcher);
-    Processor<Iface> processor = new Processor<Iface>(tch);
+    Processor<Iface> processor = new Processor<>(tch);
     TServerUtils.startTServer(context.getConfiguration(), ThriftServerType.CUSTOM_HS_HA, processor, "NullTServer",
         "null tserver", 2, 1, 1000, 10 * 1024 * 1024, null, null, -1, HostAndPort.fromParts("0.0.0.0", opts.port));
 
diff --git a/test/src/main/java/org/apache/accumulo/test/randomwalk/shard/BulkInsert.java b/test/src/main/java/org/apache/accumulo/test/randomwalk/shard/BulkInsert.java
index 5f696d0..96d57c4 100644
--- a/test/src/main/java/org/apache/accumulo/test/randomwalk/shard/BulkInsert.java
+++ b/test/src/main/java/org/apache/accumulo/test/randomwalk/shard/BulkInsert.java
@@ -16,11 +16,13 @@
  */
 package org.apache.accumulo.test.randomwalk.shard;
 
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
 import static java.nio.charset.StandardCharsets.UTF_8;
 
 import java.io.BufferedOutputStream;
 import java.io.IOException;
 import java.io.PrintStream;
+import java.util.Base64;
 import java.util.Collection;
 import java.util.List;
 import java.util.Properties;
@@ -34,7 +36,6 @@
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.Mutation;
 import org.apache.accumulo.core.data.Value;
-import org.apache.accumulo.core.util.Base64;
 import org.apache.accumulo.core.util.CachedConfiguration;
 import org.apache.accumulo.core.util.TextUtil;
 import org.apache.accumulo.test.randomwalk.Environment;
@@ -48,8 +49,6 @@
 import org.apache.hadoop.io.Text;
 import org.apache.hadoop.util.ToolRunner;
 
-import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
-
 public class BulkInsert extends Test {
 
   class SeqfileBatchWriter implements BatchWriter {
@@ -174,7 +173,7 @@
 
     Collection<Text> splits = conn.tableOperations().listSplits(tableName, maxSplits);
     for (Text split : splits)
-      out.println(Base64.encodeBase64String(TextUtil.getBytes(split)));
+      out.println(Base64.getEncoder().encodeToString(TextUtil.getBytes(split)));
 
     out.close();
 
diff --git a/test/src/main/java/org/apache/accumulo/test/replication/ReplicationIT.java b/test/src/main/java/org/apache/accumulo/test/replication/ReplicationIT.java
index 11f0634..1b2cc19 100644
--- a/test/src/main/java/org/apache/accumulo/test/replication/ReplicationIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/replication/ReplicationIT.java
@@ -100,7 +100,6 @@
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-import com.google.common.base.Function;
 import com.google.common.base.Joiner;
 import com.google.common.collect.HashMultimap;
 import com.google.common.collect.Iterables;
@@ -241,12 +240,6 @@
     boolean foundLocalityGroupDef2 = false;
     boolean foundFormatter = false;
     Joiner j = Joiner.on(",");
-    Function<Text,String> textToString = new Function<Text,String>() {
-      @Override
-      public String apply(Text text) {
-        return text.toString();
-      }
-    };
     for (Entry<String,String> p : tops.getProperties(ReplicationTable.NAME)) {
       String key = p.getKey();
       String val = p.getValue();
@@ -260,10 +253,10 @@
       } else if (key.startsWith(Property.TABLE_LOCALITY_GROUP_PREFIX.getKey())) {
         // look for locality group column family definitions
         if (key.equals(Property.TABLE_LOCALITY_GROUP_PREFIX.getKey() + ReplicationTable.STATUS_LG_NAME)
-            && val.equals(j.join(Iterables.transform(ReplicationTable.STATUS_LG_COLFAMS, textToString)))) {
+            && val.equals(j.join(Iterables.transform(ReplicationTable.STATUS_LG_COLFAMS, text -> text.toString())))) {
           foundLocalityGroupDef1 = true;
         } else if (key.equals(Property.TABLE_LOCALITY_GROUP_PREFIX.getKey() + ReplicationTable.WORK_LG_NAME)
-            && val.equals(j.join(Iterables.transform(ReplicationTable.WORK_LG_COLFAMS, textToString)))) {
+            && val.equals(j.join(Iterables.transform(ReplicationTable.WORK_LG_COLFAMS, text -> text.toString())))) {
           foundLocalityGroupDef2 = true;
         }
       }
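The removed six-line anonymous `Function<Text,String>` collapses to the inline lambda in the hunk (and could shrink further to a `Text::toString` method reference). A pure-JDK sketch of the same join-after-transform shape, using `StringBuilder` as a hypothetical stand-in for `Text` so the example needs no Hadoop dependency:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class TransformJoinDemo {
  public static void main(String[] args) {
    // Stand-ins for ReplicationTable.STATUS_LG_COLFAMS / WORK_LG_COLFAMS entries.
    List<StringBuilder> colfams = Arrays.asList(new StringBuilder("repl"), new StringBuilder("work"));

    // Guava form in the diff: j.join(Iterables.transform(colfams, text -> text.toString()))
    // Equivalent with only java.util.stream, using a method reference:
    String joined = colfams.stream().map(StringBuilder::toString).collect(Collectors.joining(","));

    System.out.println(joined); // repl,work
  }
}
```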
diff --git a/test/src/main/java/org/apache/accumulo/test/server/security/SystemCredentialsIT.java b/test/src/main/java/org/apache/accumulo/test/server/security/SystemCredentialsIT.java
index 9752916..150eee4 100644
--- a/test/src/main/java/org/apache/accumulo/test/server/security/SystemCredentialsIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/server/security/SystemCredentialsIT.java
@@ -18,7 +18,6 @@
 
 import static org.junit.Assert.assertEquals;
 
-import java.nio.ByteBuffer;
 import java.util.List;
 import java.util.Map.Entry;
 
@@ -32,7 +31,6 @@
 import org.apache.accumulo.core.client.security.SecurityErrorCode;
 import org.apache.accumulo.core.client.security.tokens.AuthenticationToken;
 import org.apache.accumulo.core.client.security.tokens.PasswordToken;
-import org.apache.accumulo.core.conf.AccumuloConfiguration;
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.metadata.RootTable;
@@ -65,12 +63,6 @@
     if (args[0].equals("bad")) {
       Instance inst = new Instance() {
 
-        @Deprecated
-        @Override
-        public void setConfiguration(AccumuloConfiguration conf) {
-          throw new UnsupportedOperationException();
-        }
-
         @Override
         public int getZooKeepersSessionTimeOut() {
           throw new UnsupportedOperationException();
@@ -106,30 +98,6 @@
           throw new UnsupportedOperationException();
         }
 
-        @Deprecated
-        @Override
-        public Connector getConnector(String user, CharSequence pass) throws AccumuloException, AccumuloSecurityException {
-          throw new UnsupportedOperationException();
-        }
-
-        @Deprecated
-        @Override
-        public Connector getConnector(String user, ByteBuffer pass) throws AccumuloException, AccumuloSecurityException {
-          throw new UnsupportedOperationException();
-        }
-
-        @Deprecated
-        @Override
-        public Connector getConnector(String user, byte[] pass) throws AccumuloException, AccumuloSecurityException {
-          throw new UnsupportedOperationException();
-        }
-
-        @Deprecated
-        @Override
-        public AccumuloConfiguration getConfiguration() {
-          throw new UnsupportedOperationException();
-        }
-
       };
       creds = SystemCredentials.get(inst);
     } else if (args[0].equals("good")) {
@@ -172,36 +140,6 @@
           throw new UnsupportedOperationException();
         }
 
-        @Deprecated
-        @Override
-        public Connector getConnector(String user, CharSequence pass) throws AccumuloException, AccumuloSecurityException {
-          throw new UnsupportedOperationException();
-        }
-
-        @Deprecated
-        @Override
-        public Connector getConnector(String user, ByteBuffer pass) throws AccumuloException, AccumuloSecurityException {
-          throw new UnsupportedOperationException();
-        }
-
-        @Deprecated
-        @Override
-        public Connector getConnector(String user, byte[] pass) throws AccumuloException, AccumuloSecurityException {
-          throw new UnsupportedOperationException();
-        }
-
-        @Deprecated
-        @Override
-        public AccumuloConfiguration getConfiguration() {
-          throw new UnsupportedOperationException();
-        }
-
-        @Deprecated
-        @Override
-        public void setConfiguration(AccumuloConfiguration conf) {
-          throw new UnsupportedOperationException();
-        }
-
       };
       creds = new SystemCredentials(inst, "!SYSTEM", new PasswordToken("fake"));
     } else {
diff --git a/test/src/main/java/org/apache/accumulo/test/util/CertUtils.java b/test/src/main/java/org/apache/accumulo/test/util/CertUtils.java
index 95042d2..5b57f9e 100644
--- a/test/src/main/java/org/apache/accumulo/test/util/CertUtils.java
+++ b/test/src/main/java/org/apache/accumulo/test/util/CertUtils.java
@@ -41,6 +41,7 @@
 import java.util.Map;
 import java.util.Map.Entry;
 import java.util.TreeMap;
+import java.util.function.Predicate;
 
 import org.apache.accumulo.core.cli.Help;
 import org.apache.accumulo.core.client.AccumuloSecurityException;
@@ -70,7 +71,6 @@
 
 import com.beust.jcommander.JCommander;
 import com.beust.jcommander.Parameter;
-import com.google.common.base.Predicate;
 
 public class CertUtils {
   private static final Logger log = LoggerFactory.getLogger(CertUtils.class);
@@ -154,7 +154,7 @@
           @Override
           public void getProperties(Map<String,String> props, Predicate<String> filter) {
             for (Entry<String,String> entry : this)
-              if (filter.apply(entry.getKey()))
+              if (filter.test(entry.getKey()))
                 props.put(entry.getKey(), entry.getValue());
           }
         };
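The CertUtils hunk is the one place in this diff where the Guava-to-JDK swap changes a method name: `com.google.common.base.Predicate` evaluates with `apply`, while `java.util.function.Predicate` evaluates with `test`. A hedged sketch of the filtered property copy, with an invented `"rpc."` prefix filter and made-up property keys:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Predicate;

public class PredicateFilterDemo {
  public static void main(String[] args) {
    Map<String,String> source = new HashMap<>();
    source.put("rpc.ssl.keystore.path", "keystore.jks");   // hypothetical keys
    source.put("table.split.threshold", "1G");

    Predicate<String> filter = key -> key.startsWith("rpc."); // hypothetical filter

    // Same loop shape as the rewritten getProperties override:
    Map<String,String> props = new HashMap<>();
    for (Map.Entry<String,String> entry : source.entrySet())
      if (filter.test(entry.getKey()))   // java.util.function uses test(); Guava used apply()
        props.put(entry.getKey(), entry.getValue());

    System.out.println(props);
  }
}
```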
diff --git a/test/src/main/java/org/apache/accumulo/test/util/SerializationUtil.java b/test/src/main/java/org/apache/accumulo/test/util/SerializationUtil.java
index bc3c53e..088e9b9 100644
--- a/test/src/main/java/org/apache/accumulo/test/util/SerializationUtil.java
+++ b/test/src/main/java/org/apache/accumulo/test/util/SerializationUtil.java
@@ -16,11 +16,6 @@
  */
 package org.apache.accumulo.test.util;
 
-import org.apache.commons.codec.binary.Base64;
-import org.apache.hadoop.io.Writable;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
 import java.io.ByteArrayInputStream;
 import java.io.ByteArrayOutputStream;
 import java.io.DataInputStream;
@@ -31,8 +26,13 @@
 import java.io.ObjectOutputStream;
 import java.io.OutputStream;
 import java.io.Serializable;
+import java.util.Base64;
 import java.util.Objects;
 
+import org.apache.hadoop.io.Writable;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
 /**
  * Partially based from {@link org.apache.commons.lang3.SerializationUtils}.
  *
@@ -69,21 +69,21 @@
 
   public static String serializeWritableBase64(Writable writable) {
     byte[] b = serializeWritable(writable);
-    return org.apache.accumulo.core.util.Base64.encodeBase64String(b);
+    return Base64.getEncoder().encodeToString(b);
   }
 
   public static void deserializeWritableBase64(Writable writable, String str) {
-    byte[] b = Base64.decodeBase64(str);
+    byte[] b = Base64.getDecoder().decode(str);
     deserializeWritable(writable, b);
   }
 
   public static String serializeBase64(Serializable obj) {
     byte[] b = serialize(obj);
-    return org.apache.accumulo.core.util.Base64.encodeBase64String(b);
+    return Base64.getEncoder().encodeToString(b);
   }
 
   public static Object deserializeBase64(String str) {
-    byte[] b = Base64.decodeBase64(str);
+    byte[] b = Base64.getDecoder().decode(str);
     return deserialize(b);
   }
 
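The SerializationUtil hunks replace both directions at once: `org.apache.accumulo.core.util.Base64.encodeBase64String` becomes `Base64.getEncoder().encodeToString` and `Base64.decodeBase64` becomes `Base64.getDecoder().decode`. A minimal JDK-only stand-in for the `serializeBase64`/`deserializeBase64` pair, showing the round trip the class guarantees:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.Base64;

public class SerializeBase64Demo {
  // Sketch of SerializationUtil.serializeBase64 with the JDK encoder.
  static String serializeBase64(Serializable obj) throws IOException {
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    try (ObjectOutputStream oos = new ObjectOutputStream(baos)) {
      oos.writeObject(obj);
    }
    return Base64.getEncoder().encodeToString(baos.toByteArray());
  }

  // Sketch of SerializationUtil.deserializeBase64 with the JDK decoder.
  static Object deserializeBase64(String str) throws IOException, ClassNotFoundException {
    byte[] b = Base64.getDecoder().decode(str);
    try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(b))) {
      return ois.readObject();
    }
  }

  public static void main(String[] args) throws Exception {
    String round = (String) deserializeBase64(serializeBase64("hello"));
    System.out.println(round); // hello
  }
}
```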
diff --git a/test/src/test/java/org/apache/accumulo/test/TraceRepoDeserializationTest.java b/test/src/test/java/org/apache/accumulo/test/TraceRepoDeserializationTest.java
deleted file mode 100644
index 413a6c9..0000000
--- a/test/src/test/java/org/apache/accumulo/test/TraceRepoDeserializationTest.java
+++ /dev/null
@@ -1,49 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.test;
-
-import static org.junit.Assert.fail;
-
-import java.io.ByteArrayInputStream;
-import java.io.InvalidClassException;
-import java.io.ObjectInputStream;
-
-import org.apache.accumulo.core.util.Base64;
-import org.junit.Test;
-
-public class TraceRepoDeserializationTest {
-
-  // Zookeeper data for a merge request
-  static private final String oldValue = "rO0ABXNyAC1vcmcuYXBhY2hlLmFjY3VtdWxvLm1hc3Rlci50YWJsZU9wcy5UcmFjZVJlc"
-      + "G8AAAAAAAAAAQIAAkwABHJlcG90AB9Mb3JnL2FwYWNoZS9hY2N1bXVsby9mYXRlL1Jl" + "cG87TAAFdGluZm90AChMb3JnL2FwYWNoZS9hY2N1bXVsby90cmFjZS90aHJpZnQvVEl"
-      + "uZm87eHBzcgAwb3JnLmFwYWNoZS5hY2N1bXVsby5tYXN0ZXIudGFibGVPcHMuVGFibG" + "VSYW5nZU9wAAAAAAAAAAECAAVbAAZlbmRSb3d0AAJbQkwAC25hbWVzcGFjZUlkdAAST"
-      + "GphdmEvbGFuZy9TdHJpbmc7TAACb3B0AD1Mb3JnL2FwYWNoZS9hY2N1bXVsby9zZXJ2" + "ZXIvbWFzdGVyL3N0YXRlL01lcmdlSW5mbyRPcGVyYXRpb247WwAIc3RhcnRSb3dxAH4A"
-      + "BUwAB3RhYmxlSWRxAH4ABnhyAC5vcmcuYXBhY2hlLmFjY3VtdWxvLm1hc3Rlci50YWJs" + "ZU9wcy5NYXN0ZXJSZXBvAAAAAAAAAAECAAB4cHVyAAJbQqzzF/gGCFTgAgAAeHAAAAAA"
-      + "dAAIK2RlZmF1bHR+cgA7b3JnLmFwYWNoZS5hY2N1bXVsby5zZXJ2ZXIubWFzdGVyLnN0" + "YXRlLk1lcmdlSW5mbyRPcGVyYXRpb24AAAAAAAAAABIAAHhyAA5qYXZhLmxhbmcuRW51"
-      + "bQAAAAAAAAAAEgAAeHB0AAVNRVJHRXEAfgALdAABMnNyACZvcmcuYXBhY2hlLmFjY3Vt" + "dWxvLnRyYWNlLnRocmlmdC5USW5mb79UcL31bhZ9AwADQgAQX19pc3NldF9iaXRmaWVs"
-      + "ZEoACHBhcmVudElkSgAHdHJhY2VJZHhwdwUWABYAAHg=";
-
-  @Test(expected = InvalidClassException.class)
-  public void test() throws Exception {
-    byte bytes[] = Base64.decodeBase64(oldValue);
-    ByteArrayInputStream bais = new ByteArrayInputStream(bytes);
-    ObjectInputStream ois = new ObjectInputStream(bais);
-    ois.readObject();
-    fail("did not throw exception");
-  }
-
-}
diff --git a/test/src/test/java/org/apache/accumulo/test/iterator/AgeOffFilterTest.java b/test/src/test/java/org/apache/accumulo/test/iterator/AgeOffFilterTest.java
index e78d8a9..9d7d821 100644
--- a/test/src/test/java/org/apache/accumulo/test/iterator/AgeOffFilterTest.java
+++ b/test/src/test/java/org/apache/accumulo/test/iterator/AgeOffFilterTest.java
@@ -16,8 +16,6 @@
  */
 package org.apache.accumulo.test.iterator;
 
-import static org.junit.Assert.assertNotNull;
-
 import java.util.List;
 import java.util.Map.Entry;
 import java.util.TreeMap;
@@ -34,7 +32,6 @@
 import org.apache.accumulo.iteratortest.testcases.IteratorTestCase;
 import org.junit.runners.Parameterized.Parameters;
 
-import com.google.common.base.Predicate;
 import com.google.common.collect.Iterables;
 
 /**
@@ -103,15 +100,7 @@
   private static TreeMap<Key,Value> createOutputData() {
     TreeMap<Key,Value> data = new TreeMap<>();
 
-    Iterable<Entry<Key,Value>> filtered = Iterables.filter(data.entrySet(), new Predicate<Entry<Key,Value>>() {
-
-      @Override
-      public boolean apply(Entry<Key,Value> input) {
-        assertNotNull(input);
-        return NOW - input.getKey().getTimestamp() > TTL;
-      }
-
-    });
+    Iterable<Entry<Key,Value>> filtered = Iterables.filter(data.entrySet(), input -> NOW - input.getKey().getTimestamp() > TTL);
 
     for (Entry<Key,Value> entry : filtered) {
       data.put(entry.getKey(), entry.getValue());
diff --git a/test/src/test/java/org/apache/accumulo/test/iterator/CfCqSliceFilterTest.java b/test/src/test/java/org/apache/accumulo/test/iterator/CfCqSliceFilterTest.java
index fc0f672..0e4c517 100644
--- a/test/src/test/java/org/apache/accumulo/test/iterator/CfCqSliceFilterTest.java
+++ b/test/src/test/java/org/apache/accumulo/test/iterator/CfCqSliceFilterTest.java
@@ -16,8 +16,6 @@
  */
 package org.apache.accumulo.test.iterator;
 
-import static org.junit.Assert.assertNotNull;
-
 import java.util.HashMap;
 import java.util.List;
 import java.util.Map.Entry;
@@ -35,7 +33,6 @@
 import org.apache.accumulo.iteratortest.testcases.IteratorTestCase;
 import org.junit.runners.Parameterized.Parameters;
 
-import com.google.common.base.Predicate;
 import com.google.common.collect.Iterables;
 
 /**
@@ -92,16 +89,10 @@
   private static TreeMap<Key,Value> createOutputData() {
     TreeMap<Key,Value> data = new TreeMap<>();
 
-    Iterable<Entry<Key,Value>> filtered = Iterables.filter(INPUT_DATA.entrySet(), new Predicate<Entry<Key,Value>>() {
-
-      @Override
-      public boolean apply(Entry<Key,Value> entry) {
-        assertNotNull(entry);
-        String cf = entry.getKey().getColumnFamily().toString();
-        String cq = entry.getKey().getColumnQualifier().toString();
-        return MIN_CF.compareTo(cf) <= 0 && MAX_CF.compareTo(cf) >= 0 && MIN_CQ.compareTo(cq) <= 0 && MAX_CQ.compareTo(cq) >= 0;
-      }
-
+    Iterable<Entry<Key,Value>> filtered = Iterables.filter(INPUT_DATA.entrySet(), entry -> {
+      String cf = entry.getKey().getColumnFamily().toString();
+      String cq = entry.getKey().getColumnQualifier().toString();
+      return MIN_CF.compareTo(cf) <= 0 && MAX_CF.compareTo(cf) >= 0 && MIN_CQ.compareTo(cq) <= 0 && MAX_CQ.compareTo(cq) >= 0;
     });
 
     for (Entry<Key,Value> entry : filtered) {
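The two iterator-test hunks above convert anonymous Guava `Predicate` classes passed to `Iterables.filter` into lambdas (dropping the `assertNotNull` guard, since a lambda would NPE on a null entry anyway). Note that in AgeOffFilterTest the filtered collection is the just-created empty `data` map, both before and after the diff, so the stand-in below filters a populated map instead. A stream-based sketch of the age-off condition, with invented `NOW`/`TTL` values:

```java
import java.util.List;
import java.util.Map;
import java.util.TreeMap;
import java.util.stream.Collectors;

public class AgeOffFilterDemo {
  public static void main(String[] args) {
    final long NOW = 100_000L, TTL = 1_000L; // hypothetical clock and age-off threshold

    Map<String,Long> timestamps = new TreeMap<>();
    timestamps.put("fresh", NOW - 10);     // younger than TTL, kept by the filter? no
    timestamps.put("stale", NOW - 5_000);  // older than TTL, matches the predicate

    // Same predicate shape as the lambda handed to Iterables.filter:
    List<String> agedOff = timestamps.entrySet().stream()
        .filter(e -> NOW - e.getValue() > TTL)
        .map(Map.Entry::getKey)
        .collect(Collectors.toList());

    System.out.println(agedOff); // [stale]
  }
}
```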
diff --git a/shell/src/test/resources/shelltest.txt b/test/src/test/resources/shelltest.txt
similarity index 100%
rename from shell/src/test/resources/shelltest.txt
rename to test/src/test/resources/shelltest.txt
diff --git a/trace/.gitignore b/trace/.gitignore
deleted file mode 100644
index e77a822..0000000
--- a/trace/.gitignore
+++ /dev/null
@@ -1,28 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-# Maven ignores
-/target/
-
-# IDE ignores
-/.settings/
-/.project
-/.classpath
-/.pydevproject
-/.idea
-/*.iml
-/nbproject/
-/nbactions.xml
-/nb-configuration.xml
diff --git a/trace/pom.xml b/trace/pom.xml
deleted file mode 100644
index d8c5ef4..0000000
--- a/trace/pom.xml
+++ /dev/null
@@ -1,43 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-      http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
-  <modelVersion>4.0.0</modelVersion>
-  <parent>
-    <groupId>org.apache.accumulo</groupId>
-    <artifactId>accumulo-project</artifactId>
-    <version>1.8.0-SNAPSHOT</version>
-  </parent>
-  <artifactId>accumulo-trace</artifactId>
-  <name>Apache Accumulo Trace</name>
-  <description>A distributed tracing library for Apache Accumulo.</description>
-  <dependencies>
-    <dependency>
-      <groupId>org.apache.accumulo</groupId>
-      <artifactId>accumulo-core</artifactId>
-    </dependency>
-    <dependency>
-      <groupId>org.apache.htrace</groupId>
-      <artifactId>htrace-core</artifactId>
-    </dependency>
-    <dependency>
-      <groupId>junit</groupId>
-      <artifactId>junit</artifactId>
-      <scope>test</scope>
-    </dependency>
-  </dependencies>
-</project>
diff --git a/trace/src/main/findbugs/exclude-filter.xml b/trace/src/main/findbugs/exclude-filter.xml
deleted file mode 100644
index 6ce4bed..0000000
--- a/trace/src/main/findbugs/exclude-filter.xml
+++ /dev/null
@@ -1,29 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-      http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-<FindBugsFilter>
-  <Match>
-    <!-- ignore intentional name shadowing -->
-    <Or>
-      <Package name="org.apache.accumulo.trace.instrument" />
-      <Package name="org.apache.accumulo.trace.thrift" />
-    </Or>
-    <Or>
-      <Bug code="NM" pattern="NM_SAME_SIMPLE_NAME_AS_SUPERCLASS" />
-      <Bug code="NM" pattern="NM_SAME_SIMPLE_NAME_AS_INTERFACE" />
-    </Or>
-  </Match>
-</FindBugsFilter>
diff --git a/trace/src/main/java/org/apache/accumulo/trace/instrument/CloudtraceSpan.java b/trace/src/main/java/org/apache/accumulo/trace/instrument/CloudtraceSpan.java
deleted file mode 100644
index 7110134..0000000
--- a/trace/src/main/java/org/apache/accumulo/trace/instrument/CloudtraceSpan.java
+++ /dev/null
@@ -1,72 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.trace.instrument;
-
-import java.util.Map;
-
-/**
- * @deprecated only used for ensuring backwards compatibility
- */
-@Deprecated
-public interface CloudtraceSpan {
-  static final long ROOT_SPAN_ID = 0;
-
-  /** Begin gathering timing information */
-  void start();
-
-  /** The block has completed, stop the clock */
-  void stop();
-
-  /** Get the start time, in milliseconds */
-  long getStartTimeMillis();
-
-  /** Get the stop time, in milliseconds */
-  long getStopTimeMillis();
-
-  /** Return the total amount of time elapsed since start was called, if running, or difference between stop and start */
-  long accumulatedMillis();
-
-  /** Has the span been started and not yet stopped? */
-  boolean running();
-
-  /** Return a textual description of this span */
-  String description();
-
-  /** A pseudo-unique (random) number assigned to this span instance */
-  long spanId();
-
-  /** The parent span: returns null if this is the root span */
-  Span parent();
-
-  /** A pseudo-unique (random) number assigned to the trace associated with this span */
-  long traceId();
-
-  /** Create a child span of this span with the given description */
-  Span child(String description);
-
-  @Override
-  String toString();
-
-  /** Return the pseudo-unique (random) number of the parent span, returns ROOT_SPAN_ID if this is the root span */
-  long parentId();
-
-  /** Add data associated with this span */
-  void data(String key, String value);
-
-  /** Get data associated with this span (read only) */
-  Map<String,String> getData();
-}
diff --git a/trace/src/main/java/org/apache/accumulo/trace/instrument/CountSampler.java b/trace/src/main/java/org/apache/accumulo/trace/instrument/CountSampler.java
deleted file mode 100644
index ad3acfc..0000000
--- a/trace/src/main/java/org/apache/accumulo/trace/instrument/CountSampler.java
+++ /dev/null
@@ -1,36 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.trace.instrument;
-
-import org.apache.htrace.HTraceConfiguration;
-
-import java.util.Collections;
-
-/**
- * @deprecated since 1.7, use org.apache.htrace.impl.CountSampler instead
- */
-@Deprecated
-public class CountSampler extends org.apache.htrace.impl.CountSampler implements Sampler {
-  public CountSampler(long frequency) {
-    super(HTraceConfiguration.fromMap(Collections.singletonMap(CountSampler.SAMPLER_FREQUENCY_CONF_KEY, Long.toString(frequency))));
-  }
-
-  @Override
-  public boolean next() {
-    return super.next(null);
-  }
-}
diff --git a/trace/src/main/java/org/apache/accumulo/trace/instrument/Sampler.java b/trace/src/main/java/org/apache/accumulo/trace/instrument/Sampler.java
deleted file mode 100644
index 11208b2..0000000
--- a/trace/src/main/java/org/apache/accumulo/trace/instrument/Sampler.java
+++ /dev/null
@@ -1,27 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.trace.instrument;
-
-/**
- * @deprecated since 1.7, use org.apache.htrace.Sampler instead
- */
-@Deprecated
-public interface Sampler extends org.apache.htrace.Sampler<Object> {
-
-  boolean next();
-
-}
diff --git a/trace/src/main/java/org/apache/accumulo/trace/instrument/Span.java b/trace/src/main/java/org/apache/accumulo/trace/instrument/Span.java
deleted file mode 100644
index 8e70e33..0000000
--- a/trace/src/main/java/org/apache/accumulo/trace/instrument/Span.java
+++ /dev/null
@@ -1,103 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.trace.instrument;
-
-import static java.nio.charset.StandardCharsets.UTF_8;
-
-import java.util.HashMap;
-import java.util.Map;
-import java.util.Map.Entry;
-
-/**
- * @deprecated since 1.7, use {@link org.apache.accumulo.core.trace.Span} instead
- */
-@Deprecated
-public class Span extends org.apache.accumulo.core.trace.Span implements CloudtraceSpan {
-  public static final long ROOT_SPAN_ID = org.apache.htrace.Span.ROOT_SPAN_ID;
-
-  public Span(org.apache.accumulo.core.trace.Span span) {
-    super(span.getScope());
-  }
-
-  public Span(org.apache.htrace.TraceScope scope) {
-    super(scope);
-  }
-
-  public Span(org.apache.htrace.Span span) {
-    super(span);
-  }
-
-  @Override
-  public Span child(String s) {
-    return new Span(span.child(s));
-  }
-
-  @Override
-  public void start() {
-    throw new UnsupportedOperationException("can't start span");
-  }
-
-  @Override
-  public long getStartTimeMillis() {
-    return span.getStartTimeMillis();
-  }
-
-  @Override
-  public long getStopTimeMillis() {
-    return span.getStopTimeMillis();
-  }
-
-  @Override
-  public long accumulatedMillis() {
-    return span.getAccumulatedMillis();
-  }
-
-  @Override
-  public boolean running() {
-    return span.isRunning();
-  }
-
-  @Override
-  public String description() {
-    return span.getDescription();
-  }
-
-  @Override
-  public long spanId() {
-    return span.getSpanId();
-  }
-
-  @Override
-  public Span parent() {
-    throw new UnsupportedOperationException("can't get parent");
-  }
-
-  @Override
-  public long parentId() {
-    return span.getParentId();
-  }
-
-  @Override
-  public Map<String,String> getData() {
-    Map<byte[],byte[]> data = span.getKVAnnotations();
-    HashMap<String,String> stringData = new HashMap<>();
-    for (Entry<byte[],byte[]> d : data.entrySet()) {
-      stringData.put(new String(d.getKey(), UTF_8), new String(d.getValue(), UTF_8));
-    }
-    return stringData;
-  }
-}
diff --git a/trace/src/main/java/org/apache/accumulo/trace/instrument/Trace.java b/trace/src/main/java/org/apache/accumulo/trace/instrument/Trace.java
deleted file mode 100644
index 027fe8f..0000000
--- a/trace/src/main/java/org/apache/accumulo/trace/instrument/Trace.java
+++ /dev/null
@@ -1,86 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.trace.instrument;
-
-/**
- * @deprecated since 1.7, use {@link org.apache.accumulo.core.trace.Trace} instead
- */
-@Deprecated
-public class Trace extends org.apache.accumulo.core.trace.Trace {
-  // Initiate tracing if it isn't already started
-  public static Span on(String description) {
-    return new Span(org.apache.accumulo.core.trace.Trace.on(description));
-  }
-
-  // Turn tracing off:
-  public static void off() {
-    org.apache.accumulo.core.trace.Trace.off();
-  }
-
-  public static void offNoFlush() {
-    org.apache.accumulo.core.trace.Trace.offNoFlush();
-  }
-
-  // Are we presently tracing?
-  public static boolean isTracing() {
-    return org.apache.accumulo.core.trace.Trace.isTracing();
-  }
-
-  // If we are tracing, return the current span, else null
-  public static Span currentTrace() {
-    return new Span(org.apache.htrace.Trace.currentSpan());
-  }
-
-  // Create a new time span, if tracing is on
-  public static Span start(String description) {
-    return new Span(org.apache.accumulo.core.trace.Trace.start(description));
-  }
-
-  // Start a trace in the current thread from information passed via RPC
-  public static Span trace(org.apache.accumulo.trace.thrift.TInfo info, String description) {
-    return new Span(org.apache.accumulo.core.trace.Trace.trace(info, description));
-  }
-
-  // Initiate a trace in this thread, starting now
-  public static Span startThread(Span parent, String description) {
-    return new Span(org.apache.htrace.Trace.startSpan(description, parent.getSpan()));
-  }
-
-  // Stop a trace in this thread, starting now
-  public static void endThread(Span span) {
-    if (span != null) {
-      span.stop();
-      // close() will no-op, but ensure safety if the implementation changes
-      org.apache.htrace.Tracer.getInstance().continueSpan(null).close();
-    }
-  }
-
-  // Wrap the runnable in a new span, if tracing
-  public static Runnable wrap(Runnable runnable) {
-    return org.apache.accumulo.core.trace.Trace.wrap(runnable);
-  }
-
-  // Wrap all calls to the given object with spans
-  public static <T> T wrapAll(T instance) {
-    return org.apache.accumulo.core.trace.Trace.wrapAll(instance);
-  }
-
-  // Sample trace all calls to the given object
-  public static <T> T wrapAll(T instance, Sampler dist) {
-    return org.apache.accumulo.core.trace.Trace.wrapAll(instance, dist);
-  }
-}
diff --git a/trace/src/main/java/org/apache/accumulo/trace/instrument/Tracer.java b/trace/src/main/java/org/apache/accumulo/trace/instrument/Tracer.java
deleted file mode 100644
index d57ee81..0000000
--- a/trace/src/main/java/org/apache/accumulo/trace/instrument/Tracer.java
+++ /dev/null
@@ -1,29 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.trace.instrument;
-
-/**
- * ACCUMULO-3738: Temporary fix to keep Hive working with all versions of Accumulo without extra burden on users. Hive referenced this class in the build-up of
- * the classpath used to compute the answer to a query. Without this class, the accumulo-trace jar would not make it onto the classpath which would break the
- * query. Whenever Hive can get this patch into their build and a sufficient number of releases pass, we can remove this class.
- *
- * Accumulo should not reference this class at all. It is solely here for Hive integration.
- */
-@Deprecated
-public class Tracer {
-
-}
diff --git a/trace/src/main/java/org/apache/accumulo/trace/thrift/TInfo.java b/trace/src/main/java/org/apache/accumulo/trace/thrift/TInfo.java
deleted file mode 100644
index f6f5756..0000000
--- a/trace/src/main/java/org/apache/accumulo/trace/thrift/TInfo.java
+++ /dev/null
@@ -1,24 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.trace.thrift;
-
-/**
- * @deprecated since 1.7, use {@link org.apache.accumulo.core.trace.thrift.TInfo} instead
- */
-@SuppressWarnings("serial")
-@Deprecated
-public class TInfo extends org.apache.accumulo.core.trace.thrift.TInfo {}